Top Stories
AI Is Making Crypto Hacking Cheap — and the Drift Exploit Proves It
Ledger CTO Charles Guillemet warned this weekend that AI is fundamentally changing the economics of cryptocurrency attacks. Speaking to CoinDesk, he said the cost of finding and exploiting vulnerabilities “is going down to zero,” with AI tools making it trivially easy for attackers to discover flaws that would have taken skilled teams weeks to find.
His timing was pointed. Days earlier, Solana-based DeFi platform Drift Protocol lost $285 million in what’s now the biggest crypto hack of 2026. The attacker fabricated a token called CarbonVote, seeded it with a few thousand dollars in liquidity and wash trades, then manipulated Drift’s oracles into valuing the fictional collateral at hundreds of millions of dollars. Within minutes, the attacker had drained USDC, SOL, JLP, WBTC, and other assets. Drift’s total value locked dropped from $550 million to under $300 million in less than an hour.
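The core mechanic — a thinly seeded pool wash-traded to inflate a spot-price oracle — can be sketched with a toy constant-product AMM. Everything below is hypothetical and illustrative of the attack class in general, not Drift’s actual oracle design or code.

```python
# Toy constant-product AMM (x * y = k) with a naive spot-price oracle,
# illustrating in simplified form how thin liquidity plus a wash trade
# can inflate the apparent value of worthless collateral.
# All names and numbers are hypothetical.

class ToyAMM:
    """Constant-product pool quoting a fake TOKEN in USDC."""

    def __init__(self, token_reserve: float, usdc_reserve: float):
        self.token = token_reserve
        self.usdc = usdc_reserve

    def buy_token(self, usdc_in: float) -> float:
        """Swap USDC into the pool; returns TOKEN received."""
        k = self.token * self.usdc
        new_usdc = self.usdc + usdc_in
        token_out = self.token - k / new_usdc
        self.usdc, self.token = new_usdc, self.token - token_out
        return token_out

    def spot_price(self) -> float:
        """Naive oracle: instantaneous reserve ratio, trivially movable."""
        return self.usdc / self.token


# Attacker seeds a thin pool: 1,000,000 fake tokens against $5,000.
pool = ToyAMM(token_reserve=1_000_000, usdc_reserve=5_000)
print(f"price before wash trade: ${pool.spot_price():.6f}")  # $0.005000

# A single $45,000 wash trade against the thin pool moves spot 100x.
pool.buy_token(usdc_in=45_000)
print(f"price after wash trade:  ${pool.spot_price():.4f}")  # $0.5000

# A lender that values collateral at spot price now massively
# overvalues the attacker's remaining stash of the fake token.
attacker_stash = 10_000_000  # tokens held outside the pool
print(f"apparent collateral value: ${attacker_stash * pool.spot_price():,.0f}")
```

With roughly $50,000 committed, the attacker’s otherwise worthless stash appears worth $5 million, which a protocol trusting the spot price would accept as borrowing power. Robust designs use time-weighted or externally sourced prices precisely to blunt this.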
TRM Labs’ initial investigation points to North Korean hackers. The attack used durable nonces, a Solana feature that keeps a signed transaction valid indefinitely rather than letting it expire with a recent blockhash, to rapidly hijack Drift’s Security Council admin powers — a novel technique that bypassed multi-sig protections.
Guillemet’s recommendation: assume systems will eventually fail. Move to formal verification, hardware-based security, and offline storage. AI-generated exploit code and increasingly sophisticated malware have made the old assumption — that hacking is hard and expensive — dangerously outdated.
Source: CoinDesk, CCN, TRM Labs
Georgia Sends Three AI Bills to Governor Kemp’s Desk
Georgia’s legislative session adjourned today with three AI-related bills now awaiting Governor Brian Kemp’s signature. It’s a modest but meaningful package from a state not typically at the forefront of tech regulation.
SB 540 is the most consequential: a chatbot disclosure and child safety bill that would require AI chatbots to identify themselves as non-human every three hours during conversations — every hour when the user is a minor. It also mandates privacy tools for users, steps to limit certain interactions with minors, and protocols for responding to expressions of suicidal ideation or self-harm.
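The disclosure cadence SB 540 describes is simple enough to sketch. A minimal illustration of the timing rule — re-identify every three hours, every hour for minors — with all function names hypothetical:

```python
# Hypothetical sketch of SB 540's disclosure cadence as summarized above:
# a chatbot must re-identify itself as non-human every 3 hours, or every
# 1 hour when the user is a minor. The intervals come from the bill
# summary; the function names and structure are illustrative only.

from datetime import datetime, timedelta

def next_disclosure_due(last_disclosure: datetime, is_minor: bool) -> datetime:
    """When the next 'I am not a human' disclosure must occur."""
    interval = timedelta(hours=1 if is_minor else 3)
    return last_disclosure + interval

def disclosure_needed(last_disclosure: datetime, now: datetime,
                      is_minor: bool) -> bool:
    """True if the disclosure interval has elapsed."""
    return now >= next_disclosure_due(last_disclosure, is_minor)


start = datetime(2026, 3, 1, 12, 0)
# Two hours in: a minor is already due for re-disclosure; an adult is not.
print(disclosure_needed(start, start + timedelta(hours=2), is_minor=True))   # True
print(disclosure_needed(start, start + timedelta(hours=2), is_minor=False))  # False
```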
SB 444 prohibits insurance companies from making coverage decisions based solely on AI systems, requiring a human to be involved in approving or denying medical procedures. SR 789 creates a study committee to examine AI’s broader impact on the state.
Kemp has 40 days to sign or veto the bills. If he does nothing, they become law. Georgia joins a growing wave of state-level AI regulation in the absence of comprehensive federal legislation — at least 38 states have passed some form of AI law in the past year.
Source: Transparency Coalition, AJC
Vibe Coding Hits the Mainstream — and Brings a New Kind of FOMO
Bloomberg devoted its weekend tech newsletter to vibe coding, the practice of describing software in plain English and letting AI write the code. What began as a developer in-joke is now a $4.7 billion market growing at a 38% compound annual rate, and 92% of U.S. developers report using AI coding tools daily.
The tools driving it — Anthropic’s Claude Code, OpenAI’s Codex, Cursor, and others — let developers ship at speeds that would have been impossible a year ago. But Bloomberg’s piece focused on a less discussed side effect: the FOMO spreading among professionals who feel left behind if they aren’t using these tools, and the trust gap that’s emerging as companies ship AI-generated code at scale.
Fortune’s parallel coverage raised the harder question: 74% of developers report productivity gains, but who’s verifying what AI writes? Claude Code itself came under scrutiny after its source code was accidentally leaked due to a packaging error. Enterprises want to move faster, but the verification bottleneck isn’t going away — it’s getting worse as the volume of AI-generated code increases.
Quick Hits
- AI tackles retail’s $850 billion returns problem: A new wave of AI startups is deploying virtual try-on technology to attack the “silent killer” of retail profitability — the 15.8% of annual sales returned in 2025, totaling $849.9 billion. Retailers using virtual try-on report 30-50% reductions in return rates. Google’s virtual try-on integrates directly into product search results starting April 30. CNBC
- Slackbot goes autonomous: Salesforce’s 30-feature AI overhaul of Slackbot, powered by Anthropic’s Claude under the hood, is now generally available. The bot can join Zoom and Google Meet calls, summarize decisions, log action items directly into Salesforce CRM, and follow users across their desktop for real-time assistance. It’s Salesforce’s biggest shot at Microsoft Copilot yet. SiliconANGLE
- Colorado AI Act deadline looms: Colorado’s first-in-the-nation AI regulation, targeting high-risk systems that influence employment, housing, and healthcare decisions, now has a June 30 enforcement date after being pushed back from February 1. Companies deploying AI for consequential decisions must implement risk management programs, complete annual impact assessments, and disclose AI interactions to consumers. TrustArc
- Semiconductor revenues set to surge 49%: Goldman Sachs projects global semiconductor revenue will grow 49% from current levels by the end of 2026, driven almost entirely by AI infrastructure demand. Amazon, Meta, Google, and Microsoft are expected to invest a combined $650 billion in AI within a single year, with most of it flowing to data centers, chips, networking, and energy. ANI News
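The retail-returns figures above can be sanity-checked with quick arithmetic: if $849.9 billion in returns is 15.8% of annual sales, implied total sales are about $5.38 trillion, and the 30-50% return-rate reductions retailers report would correspond to very rough savings in the hundreds of billions (ignoring processing costs and resale recovery, which the source does not break out).

```python
# Back-of-envelope check on the retail returns figures quoted above.
# Only returns_total and return_rate come from the source; the derived
# numbers are implied arithmetic, not reported figures.

returns_total = 849.9e9  # 2025 returns, per the source
return_rate = 0.158      # share of annual sales returned

implied_sales = returns_total / return_rate
print(f"implied annual retail sales: ${implied_sales / 1e12:.2f}T")  # $5.38T

# Gross value of returns avoided at the reported 30-50% reduction range.
for reduction in (0.30, 0.50):
    avoided = returns_total * reduction
    print(f"{reduction:.0%} fewer returns ~ ${avoided / 1e9:.0f}B in avoided returns")
```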
Worth Watching
The state-level AI regulation pipeline keeps filling. With Georgia’s three bills, Colorado’s approaching enforcement date, and 38 states having already passed some form of AI law, the U.S. is building a regulatory framework from the bottom up while Congress stalls on comprehensive federal legislation. The question is whether this patchwork approach creates meaningful consumer protection or just compliance headaches for companies navigating 50 different rule sets. Meanwhile, in crypto, the Drift exploit and Ledger’s warnings suggest the AI security arms race in DeFi is just getting started — and the defenders are losing.