Two major threat intelligence reports released this week paint a grim picture: artificial intelligence has fundamentally transformed cyber warfare, and most organizations are losing.
Cloudflare’s inaugural 2026 Threat Report, published March 3, documents how the company’s network now blocks over 230 billion threats daily. Microsoft Threat Intelligence followed on March 6 with detailed analysis of how nation-state actors are weaponizing large language models. Together, they reveal an attack landscape that has shifted decisively in favor of adversaries.
The Numbers Are Staggering
AI-powered cyberattacks surged 72% year-over-year, and total global incidents rose from 10,870 in 2024 to 16,200 in 2025, according to aggregated industry data. That’s not the outlier - it’s the baseline.
87% of organizations experienced AI-enabled attacks in the past year. 85% faced deepfake attacks specifically. Among financial organizations, nearly half confronted AI-enhanced phishing and deepfake attacks.
The financial toll: average data breach costs hit $4.88 million globally, with U.S. breaches averaging $10.22 million. Organizations without AI governance paid $670,000 more per breach than those with proper controls.
DDoS Attacks Shatter Records
Cloudflare mitigated 47.1 million DDoS attacks in 2025 - more than double the 2024 figure. The company recorded 19 world-record attacks during the year.
The largest: a 31.4 Tbps UDP flood launched by the Aisuru botnet in November 2025. That’s nearly six times the peak volume of the largest attack recorded in 2024.
Aisuru and its successor Kimwolf collectively control between 1 and 4 million infected hosts. In early 2026, Cloudflare null-routed over 550 command-and-control nodes belonging to Kimwolf - but the botnets continue to evolve.
Most attacks now last under 10 minutes, limiting human response capability. Without automated defenses, organizations can’t react fast enough.
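That 10-minute figure is why mitigation has to sit in the request path rather than in a runbook. As a minimal illustration of the idea - not any vendor's actual implementation, and with thresholds chosen arbitrarily - a per-source sliding-window rate limiter can start shedding a flood within milliseconds of it crossing a threshold:

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per source IP within `window` seconds.

    Illustrative only: a production mitigation layer would also need
    eviction of idle sources, distributed state, and graduated responses
    (challenge, deprioritize) rather than a hard drop.
    """

    def __init__(self, limit, window):
        self.limit = limit          # max requests allowed in the window
        self.window = window        # window length in seconds
        self.hits = {}              # source IP -> deque of request timestamps

    def allow(self, ip, now=None):
        """Return True if the request should pass, False if it should be dropped."""
        now = time.monotonic() if now is None else now
        q = self.hits.setdefault(ip, deque())
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False            # over budget: shed this request
        q.append(now)
        return True
```

Because the decision is a constant-time check per request, the same logic scales to automated enforcement at line rate - the property that matters when an attack peaks and ends before a human can open a dashboard.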
Nation-States Jailbreak LLMs for Malware
Microsoft’s March 6 report details how nation-state actors are using jailbreak techniques to turn commercial AI into weapons.
The North Korean threat actor Coral Sleet has created new payloads by jailbreaking LLM software, “enabling the generation of malicious code that bypasses built-in safeguards and accelerates operational timelines.” Microsoft observed Coral Sleet demonstrating rapid capability growth through AI-assisted iterative development - using AI coding tools to generate, refine, and reimplement malware components.
Another North Korean group, Emerald Sleet, uses LLMs to research publicly reported vulnerabilities, accelerating the window between disclosure and exploitation.
The techniques are straightforward. Threat actors employ role-based jailbreaks - prompting models to assume trusted roles, or asserting that the operator is working in a legitimate context. Once the safety alignment breaks, the model becomes an efficient malware development assistant.
The Deepfake Employment Scam
North Korean operatives have scaled their fake IT worker scheme dramatically. The number of companies found to have hired North Korean software developers grew 220% over the past 12 months, and the operation is now global in scope.
The mechanics: Operatives use real-time face-swapping and voice-cloning during video interviews. AI-generated synthetic identities pass background checks. Once hired, corporate laptops ship to U.S.-based “laptop farms” run by accomplices, evading geolocation controls.
On June 30, 2025, the DOJ announced sweeping enforcement actions - 29 laptop farm searches across 16 states, seizure of financial accounts and fraudulent websites. Christina Marie Chapman, an Arizona woman who operated a laptop farm enabling North Korean operatives to pose as U.S. workers, was sentenced to 102 months in prison.
But the scheme continues. Generative AI has made it trivially easy to forge identities, alter photos, guide interview answers, and make operatives appear fluent in English.
Phishing Has Been Transformed
AI-generated phishing attacks have surged 1,265% since 2023. By October 2025, AI-generated phishing had become the top enterprise email threat, surpassing ransomware, insider risk, and traditional social engineering combined.
The effectiveness gap is measurable: AI-generated phishing achieves a 54% click-through rate versus 12% for human-created attacks. Over 82% of phishing emails now use some form of AI.
Deepfake files have exploded from 500,000 in 2023 to a projected 8 million in 2025. Voice deepfakes rose 680% last year. Financial losses from deepfake-enabled fraud exceeded $200 million in Q1 2025 alone.
Perhaps most alarming: 99.9% of people cannot reliably identify deepfakes. The technology has outpaced human perception.
China’s Telecom Infiltration
Chinese state-sponsored group Salt Typhoon continues its multi-year campaign against telecommunications infrastructure. The operation, which began as early as 2022, has now compromised at least 200 companies across 80 countries, according to the FBI.
In the U.S., confirmed victims include Verizon, AT&T, T-Mobile, Spectrum, Lumen, Consolidated Communications, and Windstream. In February 2026, Singapore confirmed that Salt Typhoon targeted its four largest phone companies: Singtel, StarHub, M1, and Simba Telecom.
Cloudflare’s report notes that Chinese groups are using legitimate cloud ecosystems - Google Calendar for encrypted command-passing, Google Drive, Microsoft Teams, and Amazon S3 - to mask command-and-control traffic in what security researchers call “living off the land.”
Credential Theft Enables Everything Else
The reports reveal a fundamental shift in attack strategy. Threat actors increasingly “log in” rather than “break in.”
Cloudflare found that 94% of all login attempts now originate from bots, and 63% of all logins in the past three months used credentials already exposed elsewhere. Even among human logins, 46% involve previously compromised credentials.
Infostealers - malware that harvests credentials and session tokens - are linked to 54% of ransomware attacks. Modern infostealers extract live session tokens, bypassing MFA entirely.
Email authentication remains broken: 43% of 450 million analyzed emails failed SPF checks, 44% lacked valid DKIM signatures, and 46% failed DMARC validation. Cloudflare intercepted over $123 million in business email compromise theft attempts, averaging $49,225 per attempt.
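Enforcing those checks at the receiving edge is largely a matter of acting on the verdicts the mail server has already stamped into the Authentication-Results header (standardized in RFC 8601). As an illustrative sketch - the function names and the quarantine policy here are my own assumptions, not any product's behavior - a filtering pipeline might parse that header and quarantine anything failing DMARC:

```python
import re

def auth_results(header):
    """Extract spf/dkim/dmarc verdicts from an Authentication-Results header value.

    Returns e.g. {"spf": "pass", "dkim": "fail", "dmarc": "fail"}.
    """
    results = {}
    for method in ("spf", "dkim", "dmarc"):
        # RFC 8601 results look like "spf=pass smtp.mailfrom=example.com".
        m = re.search(rf"\b{method}=(\w+)", header)
        if m:
            results[method] = m.group(1).lower()
    return results

def should_quarantine(header):
    """Hypothetical policy: hold any message that did not pass DMARC."""
    return auth_results(header).get("dmarc") != "pass"
```

The point is not the regex but the policy: with 43-46% of mail failing one of these checks, the verdicts are already there - most organizations simply never act on them.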
What’s Actually Working
Organizations using AI-powered security tools achieve 95% detection accuracy versus 85% for traditional methods, detect threats 60% faster, and reduce incident response time by 30-50%.
But that assumes organizations deploy these tools at all. As the reports make clear, most haven’t caught up.
The defensive recommendations are consistent across sources:
Treat AI as the baseline threat model. Assume attackers have access to the same AI capabilities you do. Design security controls accordingly.
Deploy automated DDoS mitigation. Human-scale response can’t handle attacks that peak in seconds and last under 10 minutes.
Implement continuous authentication. Session tokens are the new attack surface. Single-point authentication is no longer sufficient.
Verify identities beyond video calls. Deepfake-capable adversaries require multi-factor identity verification that doesn’t rely on visual confirmation alone.
Monitor for credential reuse. If credentials appear in breach databases, treat them as compromised regardless of when the breach occurred.
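For the credential-reuse check, one widely used source is the Have I Been Pwned "Pwned Passwords" range API, which is built for exactly this: the client sends only the first five hex characters of a password's SHA-1 hash, so the password never leaves the organization (k-anonymity). A minimal sketch - the helper names are mine, and `pwned_count` performs a live HTTPS request:

```python
import hashlib
import urllib.request

# Real HIBP k-anonymity endpoint; responses are lines of "SUFFIX:COUNT".
HIBP_RANGE_URL = "https://api.pwnedpasswords.com/range/"

def sha1_split(password):
    """Return (first 5, remaining 35) hex chars of the uppercase SHA-1 digest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_response(body, suffix):
    """Find our hash suffix in the range response; 0 means not breached."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password):
    """Live lookup: how many breach corpora contain this password."""
    prefix, suffix = sha1_split(password)
    with urllib.request.urlopen(HIBP_RANGE_URL + prefix, timeout=10) as resp:
        return count_in_response(resp.read().decode("utf-8"), suffix)
```

A nonzero count means the credential should be treated as compromised and rotated - regardless of when the underlying breach occurred.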
The Asymmetry Problem
The core problem: AI dramatically lowers the barrier for attackers while only incrementally improving defenses.
A threat actor with basic prompt engineering skills can jailbreak an LLM to generate polymorphic malware. 76% of detected malware now exhibits AI-driven polymorphism - code that changes its structure to evade detection. Meanwhile, 76% of organizations report they cannot match the speed of AI-powered attacks.
Manufacturing and critical infrastructure now account for over half of ransomware attacks. Healthcare saw 630+ ransomware incidents in the past two years. The median dwell time for ransomware - the time between initial compromise and detection - has dropped from 9 days to 5. Attackers are moving faster than ever.
The Bottom Line
These threat reports confirm what security researchers have warned for years: AI is a force multiplier for attackers, and most organizations haven’t adapted. 87% have faced AI-enabled attacks. DDoS records are being shattered by botnets controlling millions of hosts. Nation-states are jailbreaking LLMs to generate malware. Deepfakes have made identity verification unreliable. The companies with AI governance pay $670,000 less per breach - but most organizations still lack basic controls. The gap between AI attack capabilities and defensive readiness is widening, not closing.