IBM released its 2026 X-Force Threat Intelligence Index today, and the findings paint a stark picture: attackers aren’t developing new techniques so much as using AI to execute old ones faster than defenders can respond.
The headline number: a 44% increase in attacks exploiting public-facing applications, driven largely by AI-enabled vulnerability discovery. Meanwhile, over 300,000 ChatGPT credentials have been harvested and sold on the dark web, signaling that AI platforms now face the same credential theft risks as any enterprise SaaS tool.
The Numbers
IBM X-Force tracked several alarming trends in 2025:
- 44% increase in attacks on public-facing applications, with missing authentication controls as the primary entry point
- 49% surge in active ransomware and extortion groups compared to the prior year
- 300,000+ ChatGPT credentials exposed via infostealer malware
- 40% of all incidents began with vulnerability exploitation
- 4x increase in supply chain compromises since 2020
“Attackers aren’t reinventing playbooks, they’re speeding them up with AI,” said Mark Hughes, IBM’s Global Managing Partner for Cybersecurity Services.
AI as Attack Accelerator
The report describes a new dynamic in cybersecurity: AI helps attackers move faster than traditional security teams can adapt. Specifically, threat actors are using AI to:
- Research vulnerabilities in public-facing applications
- Analyze large datasets to identify high-value targets
- Iterate on attack paths in real-time
- Scale phishing campaigns with improved personalization
- Create synthetic identities for fraud schemes
North Korean IT worker operations, for example, now leverage AI to create synthetic identities at scale, enabling fraud campaigns that would have been impossible to maintain manually.
Your ChatGPT Password is on the Dark Web
Perhaps the most concerning finding for everyday AI users: infostealer malware has turned ChatGPT logins into just another credential to harvest. The 300,000+ exposed credentials came primarily from infostealers running on compromised personal devices.
The risk isn’t just losing access to your chat history. Many users reuse passwords across personal and enterprise accounts, creating “indirect pathways for attackers to breach high-value systems,” according to the report.
This puts AI platforms on par with email, cloud storage, and other core enterprise tools in terms of credential risk. If you’re using ChatGPT (or any AI chatbot) with the same password you use elsewhere, that credential is now a viable attack vector.
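The reuse risk described above is also detectable from the defender's side: if two accounts share the same password, their password hashes collide. A minimal sketch (illustrative only, not from the report; the account names are hypothetical) that flags reuse across a set of credentials, such as those recovered from a breach dump or an internal audit:

```python
import hashlib
from collections import defaultdict

def find_reused_passwords(credentials):
    """Group accounts by password hash and return any hash shared by
    more than one account. `credentials` is a list of
    (account, password) pairs. SHA-256 is used here purely for
    illustration; real password storage should use a slow, salted
    KDF such as bcrypt or Argon2."""
    by_hash = defaultdict(list)
    for account, password in credentials:
        digest = hashlib.sha256(password.encode()).hexdigest()
        by_hash[digest].append(account)
    return {h: accts for h, accts in by_hash.items() if len(accts) > 1}

reused = find_reused_passwords([
    ("alice@corp.example",    "Spring2025!"),
    ("alice@chatbot.example", "Spring2025!"),  # same password, different service
    ("bob@corp.example",      "a-unique-passphrase"),
])
for accounts in reused.values():
    print("Reused across:", accounts)
```

A real audit would compare against breach corpora rather than plaintext passwords, but the grouping logic is the same.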
Manufacturing and North America Hit Hardest
Manufacturing remained the most targeted industry for the fifth consecutive year, accounting for 27.7% of all incidents. Data theft - not ransomware - was the primary objective in most cases.
North America became the most-attacked region for the first time in six years, accounting for 29% of cases (up from 24% in 2024). The report doesn’t speculate on why, but the concentration of AI infrastructure and enterprise SaaS adoption in the region seems like an obvious factor.
Ransomware Fragmentation
While ransomware attacks continued to rise, the ecosystem itself is fragmenting. IBM identified 109 distinct extortion groups in 2025, up from 73 in 2024, yet the top 10 groups' share of activity dropped by 25%.
This fragmentation makes attribution harder and creates a longer tail of smaller, “transient operators” running low-volume campaigns. Leaked tooling and AI-assisted automation have lowered the barrier to entry, letting less sophisticated actors run ransomware operations.
What Organizations Should Do
IBM’s recommendations focus on four areas:
- Lock down public-facing applications - Missing authentication controls are the primary entry point. If it's exposed to the internet, it needs authentication.
- Treat AI platforms as enterprise SaaS - Apply the same credential hygiene, monitoring, and access controls to ChatGPT as you would to email or cloud storage.
- Monitor the dark web - Credential exposure often precedes breaches. Organizations should track stolen credentials across surface, deep, and dark web marketplaces.
- Deploy AI-enhanced detection - The report suggests fighting AI with AI, using autonomous SOC capabilities and agentic AI to match attacker speed.
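The first recommendation amounts to deny-by-default: no request reaches a handler without proving its identity. A hedged sketch of that pattern (the token names and handler are hypothetical, and real deployments would load secrets from a secrets manager and use a framework's auth middleware):

```python
import hmac

# Hypothetical shared secrets; in practice these come from a
# secrets manager, never from source code.
VALID_TOKENS = {"team-a": "s3cr3t-token-a"}

def authenticate(headers):
    """Deny-by-default check: a request is allowed only if it carries
    a bearer token matching a known secret (constant-time compare)."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    return any(hmac.compare_digest(presented, tok) for tok in VALID_TOKENS.values())

def handle_request(headers, body):
    # Authentication happens before any business logic runs -
    # missing auth controls were X-Force's primary entry point.
    if not authenticate(headers):
        return 401, "unauthorized"
    return 200, f"processed {len(body)} bytes"

print(handle_request({}, b"payload"))                                     # (401, 'unauthorized')
print(handle_request({"Authorization": "Bearer s3cr3t-token-a"}, b"hi"))  # (200, 'processed 2 bytes')
```

The key design choice is that the unauthenticated path is the default: forgetting to add a check fails closed, not open.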
The Bottom Line
The AI security landscape is entering a dangerous asymmetry: attackers are using AI to accelerate their operations while most defenders are still relying on traditional approaches. The 300,000 stolen ChatGPT credentials prove that AI platforms are now prime targets, not just tools. If you’re using AI chatbots with reused passwords or without enterprise-grade security, you’re already behind.