Criminals Are Vibe Hacking With AI To Carry Out Ransoms At Scale: Anthropic


Despite “sophisticated” guardrails, AI infrastructure company Anthropic said cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks. 

In a “Threat Intelligence” report released Wednesday, members of Anthropic’s Threat Intelligence team, including Alex Moix, Ken Lebedev and Jacob Klein, shared several cases in which criminals had misused the Claude chatbot, with some attacks demanding more than $500,000 in ransom.

They found that the chatbot was used not only to provide technical advice to the criminals, but also to directly execute hacks on their behalf through “vibe hacking,” allowing them to perform attacks with only basic knowledge of coding and encryption.

Vibe hacking is a form of AI-driven social engineering that manipulates human emotions, trust and decision-making. In February, blockchain security firm Chainalysis forecast that 2025 could be the biggest year yet for crypto scams, as generative AI has made attacks more scalable and affordable.

Anthropic found one hacker who had been “vibe hacking” with Claude to steal sensitive data from at least 17 organizations — including healthcare, emergency services, government and religious institutions — with ransom demands ranging from $75,000 to $500,000 in Bitcoin.

A simulated ransom note demonstrates how cybercriminals leverage Claude to make threats. Source: Anthropic

The hacker trained Claude to assess stolen financial records, calculate appropriate ransom amounts and write custom ransom notes to maximize psychological pressure.

While Anthropic later banned the attacker, the incident reflects how AI is making it easier for even the most basic-level coders to carry out cybercrimes to an “unprecedented degree.”

“Actors who cannot independently implement basic encryption or understand syscall mechanics are now successfully creating ransomware with evasion capabilities [and] implementing anti-analysis techniques,” the report said.

North Korean IT workers also used Anthropic’s Claude

Anthropic also found that North Korean IT workers have been using Claude to forge convincing identities, pass technical coding tests and even secure remote roles at US Fortune 500 tech companies. They also used Claude to prepare interview responses for those roles.

Claude was also used to conduct the technical work once hired, Anthropic said, noting that the employment schemes were designed to funnel profits to the North Korean regime despite international sanctions.

Breakdown of Claude-powered tasks used by North Korean IT workers. Source: Anthropic

Earlier this month, a North Korean IT worker was counter-hacked, revealing that a team of six shared at least 31 fake identities and obtained everything from government IDs and phone numbers to purchased LinkedIn and Upwork accounts to mask their true identities and land crypto jobs.

Related: Telegram founder Pavel Durov says case going nowhere, slams French gov

One of the workers supposedly interviewed for a full-stack engineer position at Polygon Labs, while other evidence showed scripted interview responses in which they claimed to have experience at NFT marketplace OpenSea and blockchain oracle provider Chainlink.

Anthropic said the new report aims to publicly document incidents of misuse to assist the broader AI safety and security community and strengthen the wider industry’s defenses against AI abuse.

It said that despite implementing “sophisticated safety and security measures” to prevent the misuse of Claude, malicious actors have continued to find ways around them. 

Magazine: 3 people who unexpectedly became crypto millionaires… and one who didn’t

Source: https://cointelegraph.com/news/cybercriminals-vibe-hacking-ai-ransoms-says-anthropic?utm_source=rss_feed&utm_medium=feed&utm_campaign=rss_partner_inbound

