TL;DR
Two North Korea-linked hacks in April drained almost $600 million from two DeFi protocols: Drift Protocol ($285 million) and Kelp DAO ($292 million). Cybersecurity experts believe the attackers used AI to select targets and design exploits. The Kelp DAO hack triggered $9 billion in outflows from Aave in two days, exposing DeFi’s systemic fragility.
The two hacks came a little over two weeks apart. On 1 April, attackers drained roughly $285 million from Drift Protocol, a Solana-based derivatives exchange, after spending months posing as a quantitative trading firm to trick employees into authorising malicious transactions. On 18 April, a separate group exploited a single-verifier flaw in Kelp DAO’s cross-chain bridge and extracted approximately $292 million in wrapped ether. Between them, the heists netted almost $600 million, and, according to blockchain forensics firm TRM Labs, accounted for 76% of all crypto hack losses in 2026 so far.
Both attacks are widely attributed to North Korea-linked groups, according to Bloomberg. What most alarmed cybersecurity researchers, however, was not the scale but the method. TRM investigator Nick Carlsen, a former FBI analyst who specialises in North Korean crypto crime, said the sophistication of the April heists makes it highly likely the attackers used artificial intelligence to select targets and design exploits. “This is all stuff North Korea never used to do,” he said.
The contagion effect
The Drift hack was devastating for the platform itself. The attackers manufactured a fictitious token, built an inflated trading record to make it appear legitimate, and used it as collateral to drain real assets in roughly 12 minutes. Drift’s total value locked collapsed from $550 million to under $300 million within an hour. The exchange shut down and is now planning to relaunch after securing a roughly $148 million rescue package led by stablecoin issuer Tether. A smaller DeFi project called Carrot, which had routed user funds through Drift-integrated vaults, announced on 30 April that it was shuttering entirely.
The Kelp DAO hack was worse in a different way. Rather than selling the stolen funds immediately, the attackers deposited roughly $200 million of the proceeds as collateral on Aave, the largest decentralised lending protocol. That triggered a crisis of confidence: depositors, fearing the collateral backing Aave might be worthless, pulled roughly $9 billion from the platform in two days. Total value locked across all DeFi lending protocols dropped by more than $13 billion in 48 hours. Aave ended up needing a rescue of its own.
The episode illustrated a structural vulnerability that distinguishes decentralised finance from traditional banking. Transactions over blockchains cannot be reversed. There is no central authority to freeze suspicious transfers before they settle. And the interconnected nature of DeFi protocols, where one platform’s collateral is another’s liability, means a single exploit can cascade through an ecosystem of roughly $130 billion in locked assets.
The AI accelerant
Determining whether hackers used AI is not an exact science. Investigators draw conclusions based on the sophistication of an attack, the methods employed, and the speed with which targets were identified. More than half a dozen cybersecurity researchers interviewed by Bloomberg said the abrupt rise in DeFi exploits (April saw a record 28 to 30 incidents, almost double the previous high) is itself a clear indicator that attackers are deploying widely available AI models.
“With AI, the cost of vulnerability detection is trending to zero,” said Aneirin Flynn, chief executive of security audit firm Failsafe. The time it takes for hackers to identify a weakness in a blockchain protocol has been compressed from months to days or even hours, he said.
Anthropic’s own research supports the premise. In December, the company published a study showing that more than half of blockchain exploits carried out in 2025 “could have been done autonomously” using AI agents. What the researchers called “potential exploit revenue” had been doubling every 1.3 months, and the average cost of scanning a smart contract for vulnerabilities had fallen to $1.22. A separate test by engineers at a16z, the largest crypto venture capital firm, found that an AI trained on past DeFi hacks “always found the vulnerability” in a given protocol, though it could not yet fully design a profitable exploit without human assistance.
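To put the study’s doubling figure in perspective, a fixed doubling time of 1.3 months compounds to roughly a 600-fold increase over a year. The annualisation below is our own arithmetic from the cited 1.3-month figure, not a number from the study itself:

```python
# Illustration only: annualising the cited 1.3-month doubling time for
# "potential exploit revenue". The 1.3-month figure is from the study;
# the yearly projection is derived arithmetic, not a reported result.
DOUBLING_MONTHS = 1.3

def growth_factor(months: float, doubling: float = DOUBLING_MONTHS) -> float:
    """Multiplicative growth over `months`, given a constant doubling time."""
    return 2 ** (months / doubling)

print(f"Implied annual growth: ~{growth_factor(12):.0f}x")  # roughly 600x
```

The point of the arithmetic is that even a modest-sounding doubling interval implies explosive year-over-year growth in what attackers stand to gain.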
The Mythos question
Hanging over the industry is Anthropic’s Mythos, the AI model the company has withheld from wide release because of its cybersecurity capabilities. In testing, Mythos autonomously discovered thousands of previously unknown zero-day vulnerabilities across every major operating system and web browser, including a flaw in OpenBSD that had gone undetected for 27 years. Anthropic chose to limit access to a handful of major technology companies and banks through what it calls Project Glasswing, rather than releasing the model publicly.
There is no evidence that the April hackers had access to Mythos. But the model’s existence underscores a broader anxiety: if existing, publicly available AI tools are already capable of accelerating crypto heists to this degree, what happens when more powerful models, whether Mythos or its successors, inevitably leak or are replicated? In November, Anthropic disclosed that attackers had manipulated its Claude model to target roughly 30 entities including technology companies, financial institutions, and government agencies, succeeding in a small number of cases. In April, reports emerged that unauthorised users had gained access to the restricted Mythos model itself.
Building defences
The urgency to respond is mounting. Failsafe’s Flynn said several clients are installing software that continuously scans devices connected to a network and alerts managers to suspicious patterns. Yuan Han Li, a partner at crypto venture firm Blockchain Capital, has called for circuit breakers that would pause or limit transactions beyond a certain threshold. Jupiter, a Solana-based trading venue, is rolling out a similar mechanism more widely. Aave is expanding its risk framework for collateral to include cybersecurity factors, according to its chief legal and policy officer, Linda Jeng.
But TRM’s Carlsen argues that purely defensive measures are ultimately insufficient against state-backed attackers armed with AI. “You don’t win this kind of campaign playing defense,” he said. The only viable response, in his view, is to turn the hackers’ own methods against them and pursue the stolen funds aggressively. “They need to be hacked.”
The crypto industry has lost billions to exploits over the past several years, and North Korea’s share of global hack losses has risen from below 10% in 2020 to 76% through April 2026, according to TRM Labs. The Drift and Kelp DAO heists suggest the threat is not plateauing. It is accelerating, and the defenders are still catching up.