
SlowMist Warns AI Coding Tools May Expose Crypto to Silent Attacks

TLDR

  • SlowMist reported a critical flaw in AI coding tools that threatens crypto developer systems.
  • The vulnerability executes malware automatically when developers open untrusted project folders.
  • Cursor and other AI coding tools were shown to be especially vulnerable during controlled demonstrations.
  • Attackers embed malicious prompts in files like README.md and LICENSE.txt that AI tools interpret as instructions.
  • North Korean threat groups have used smart contracts to deliver malware without leaving traces on blockchain networks.

A new vulnerability in AI coding tools puts developer systems at immediate risk, according to a recent alert from SlowMist. Attackers can exploit these trusted environments without triggering alarms, threatening crypto projects, digital assets, and developer credentials alike.

AI Tools Executing Malicious Code Through Routine Operations

SlowMist warned that AI coding assistants can be exploited through hidden instructions placed inside common project files like README.md and LICENSE.txt.

The flaw activates when users open a project folder, allowing malware to execute commands on macOS or Windows systems without prompts.

This attack requires no confirmation from the developer, making it dangerous for crypto-related development environments holding sensitive data or wallets.

The attack method, called the “CopyPasta License Attack,” was first disclosed by HiddenLayer in September through extensive research on embedded markdown payloads.

Attackers manipulate how AI tools interpret markdown files by hiding malicious prompts inside comments that AI systems treat as code instructions.
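
To make the mechanism concrete, the sketch below shows a hypothetical defensive check, not SlowMist's or HiddenLayer's tooling: it flags hidden HTML comments in documentation files before an AI assistant ingests them. The file names, keyword list, and function names are illustrative assumptions only.

```python
# Hypothetical sketch: flag hidden HTML comments in docs files that could
# carry prompt-injection payloads. Not SlowMist's or HiddenLayer's tooling.
import re
from pathlib import Path

# Files the article names as common carriers of hidden instructions.
CANDIDATE_FILES = {"README.md", "LICENSE.txt"}

# Illustrative keyword list; a real scanner would use far richer heuristics.
SUSPICIOUS_TERMS = ("ignore previous", "run the following", "execute", "curl ", "powershell")

# Markdown renders HTML comments invisibly, so their contents are easy to miss.
HTML_COMMENT = re.compile(r"<!--(.*?)-->", re.DOTALL)


def scan_project(root: str) -> list[tuple[str, str]]:
    """Return (file, snippet) pairs for hidden comments containing suspicious text."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.name not in CANDIDATE_FILES:
            continue
        text = path.read_text(errors="ignore")
        for match in HTML_COMMENT.finditer(text):
            body = match.group(1).strip()
            if any(term in body.lower() for term in SUSPICIOUS_TERMS):
                findings.append((str(path), body[:120]))
    return findings


if __name__ == "__main__":
    for file, snippet in scan_project("."):
        print(f"[!] possible hidden instruction in {file}: {snippet!r}")
```

Running a check like this against a project directory before opening it in an AI-assisted editor would surface comment blocks that read like instructions rather than documentation.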

Cursor, a popular AI-assisted coding platform, was confirmed vulnerable, along with Windsurf, Kiro, and Aider, according to HiddenLayer’s technical report.

The malware executes when AI agents read instructions and copy them into the codebase, compromising entire projects silently.

“Developers are exposed even before writing any code,” HiddenLayer said, adding that “AI tools become unintentional delivery vectors.”

Cursor users face the highest exposure: controlled demonstrations documented complete system compromise after nothing more than opening a project folder.

State-Backed Attacks on Crypto Projects Intensify

North Korean attackers have increased focus on blockchain developers using new techniques to embed backdoors in smart contracts.

According to Google’s Mandiant team, group UNC5342 deployed malware including JADESNOW and INVISIBLEFERRET across Ethereum and BNB Smart Chain.

The method stores payloads in read-only functions to avoid transaction logs and bypass conventional blockchain tracking.
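
The distinction matters because read-only contract calls are evaluated locally by an RPC node and never broadcast as transactions. The short web3.py sketch below illustrates that behavior under stated assumptions; the contract address and function selector are placeholders, not the attackers' actual contract.

```python
# Sketch with web3.py: data returned by eth_call is computed by the node
# without creating a transaction, so nothing appears in transaction logs.
from web3 import Web3

# Public BNB Smart Chain RPC endpoint (any full node would do).
w3 = Web3(Web3.HTTPProvider("https://bsc-dataseed.binance.org"))

CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder address
SELECTOR = "0x2e64cec1"  # placeholder 4-byte selector of a read-only getter

# eth_call executes the function against current state and returns the result;
# it is never mined, so explorers record no trace of the read.
result = w3.eth.call({"to": CONTRACT, "data": SELECTOR})
print(result.hex())
```

This is why payloads delivered through read-only functions do not show up in conventional transaction-based tracking.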

Developers can unknowingly execute malware simply by interacting with these smart contracts through decentralized platforms or developer tooling.

BeaverTail and OtterCookie, two modular malware strains, were used in phishing campaigns disguised as job interviews with crypto engineers.

The attacks used fake companies like Blocknovas and Softglide to distribute malicious code through NPM packages.

Silent Push researchers traced both firms to vacant properties, revealing they operated as fronts for the “Contagious Interview” malware operation.

Once infected, compromised systems sent credentials and codebase data to attacker-controlled servers using encrypted communication.

AI-Powered Exploits and Scams Escalate Rapidly

Anthropic’s recent testing revealed that AI tools exploited half of the smart contracts in its SCONE-bench benchmark, simulating $550.1 million in damages.

Claude Opus 4.5 and GPT-5 found working exploits in 19 smart contracts deployed after their respective training cutoffs.

Two zero-day vulnerabilities were identified in active Binance Smart Chain contracts worth $3,694, at a model API cost of $3,476.

The study showed exploit discovery speed doubled monthly, while token costs per working exploit decreased sharply.

Chainabuse reported AI-driven crypto scams rose 456% year-over-year by April 2025, fueled by deepfake videos and voice clones.

Scam wallets received 60% of deposits from AI-generated campaigns featuring convincing fake identities and real-time automated replies.

Attackers now deploy bots to simulate technical interviews and lure developers into downloading disguised malware tools.

Despite these risks, crypto-related hacks fell 60% to $76 million in December from November’s $194.2 million, according to PeckShield.

The post SlowMist Warns AI Coding Tools May Expose Crypto to Silent Attacks appeared first on CoinCentral.
