AI Finds 12 Zero-Days in OpenSSL That Humans Missed for 25 Years

An autonomous security analyzer using Claude Opus 4.6 discovered every vulnerability in OpenSSL's January 2026 security release, including bugs dating back to 1998. The result marks a turning point for AI in cybersecurity.

In January 2026, OpenSSL announced a coordinated security release patching twelve zero-day vulnerabilities. Every single one was discovered by an AI system.

The security research firm AISLE used an autonomous analyzer powered by Claude Opus 4.6 to find flaws that had evaded decades of human security audits, millions of CPU-hours of fuzzing, and scrutiny from Google’s security teams. Three of the bugs had been present since 1998-2000. One predated OpenSSL entirely, inherited from Eric Young’s SSLeay implementation in the 1990s.

This isn’t a benchmark or a demo. It’s real vulnerability discovery in production code that secures most of the internet’s encrypted traffic.

What They Found

The twelve vulnerabilities ranged from parsing crashes to a critical remote code execution risk. The most severe was CVE-2025-15467, a stack buffer overflow in CMS AuthEnvelopedData parsing. OpenSSL rated it HIGH severity. NIST's CVSS v3 score was 9.8 out of 10 - CRITICAL, a rating rarely seen in a codebase as heavily scrutinized as OpenSSL.

The vulnerability could be exploited remotely, potentially without valid key material. It had been hiding in plain sight for over two decades.

According to Bruce Schneier, this happened “in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.”

In five of the twelve cases, the AI system didn’t just find the bugs - it directly proposed patches that were accepted into the official release.

Not a One-Off

AISLE’s OpenSSL findings weren’t isolated. Over the second half of 2025 and early 2026, the team discovered over 100 externally validated CVEs across more than 30 projects. The list includes the Linux kernel, glibc, Chromium, Firefox, WebKit, Apache HTTPd, GnuTLS, OpenVPN, Samba, and NASA’s CryptoLib.

Beyond the CVEs, they identified “several hundred similar zero-day discoveries in projects that are not assigning CVEs.”

For curl specifically, AISLE found five CVEs, including three of the six disclosed in the curl 8.18.0 release. The researchers emphasize that their work affected "billions of devices across the browser and mobile ecosystem."

The Method

The AISLE system isn’t a one-shot scanner. It handles the complete vulnerability lifecycle: discovery, validation, triage, patch generation, and verification. The team started their autonomous analysis in August 2025, examining code paths and edge cases continuously rather than in periodic reviews.

What separates AISLE’s approach from the flood of AI-generated security garbage that has plagued bug bounty programs is integration and responsibility. They work directly with maintainers through responsible disclosure and collaborative remediation. They contribute patches. They maintain ongoing relationships with projects.

“The number that actually matters is how many of those findings made the software more secure,” the team wrote. They tracked not just bugs found but patches landed.

The Contrast: curl’s Bug Bounty Collapse

The same month AISLE announced their OpenSSL discoveries, Daniel Stenberg - the creator of curl - shut down the project’s HackerOne bug bounty program.

After six years, $86,000 paid out, and 78 confirmed vulnerabilities fixed, curl couldn’t take it anymore. The problem: “AI slop” - long, confident, and completely fabricated vulnerability reports generated with LLMs.

One report described a supposed HTTP/3 “stream dependency cycle exploit” complete with GDB sessions and register dumps. The function it referenced doesn’t exist in curl. The whole thing was AI-generated hallucination dressed up as security research.

By late 2025, only about one in every twenty to thirty security reports sent to curl was genuine. The rest were noise that wasted maintainer time.

The contrast is stark. Both situations involve AI and security research. One produced twelve confirmed vulnerabilities, with patches accepted into the official release. The other flooded a project with fabricated reports until it gave up on accepting external submissions entirely.

What Made the Difference

The answer isn’t that AI can or can’t do security research. It’s how.

AISLE’s system:

  • Validates findings before reporting
  • Generates working patches
  • Integrates with maintainer workflows
  • Operates continuously rather than as drive-by scanning
  • Takes responsibility for the complete remediation lifecycle

The AI slop drowning bug bounties:

  • Generates plausible-sounding but unverified reports
  • Provides no working fixes
  • Wastes maintainer time on triage
  • Operates as one-shot hunting for bounty payouts
  • Takes no responsibility for accuracy

The difference is between a tool in the hands of competent researchers and a tool being used to automate low-effort bounty hunting.

The Uncomfortable Implication

One comment on Schneier’s blog raised an uncomfortable point: finding vulnerabilities benefits attackers more than defenders. Defenders must find all vulnerabilities. Attackers only need one.

If AI can systematically discover zero-days in heavily-audited code, the same capability is available to attackers. AISLE found bugs that existed for 25 years. How many more are waiting? And who else is looking?

The researchers counter that proactive discovery is better than leaving bugs for adversaries to find. If AI can find vulnerabilities faster than traditional methods, using it defensively is better than hoping attackers don’t.

But this cuts both ways. The AISLE team reported 100+ CVEs through responsible disclosure. A malicious actor using the same techniques wouldn’t disclose anything.

What This Means for Open Source

OpenSSL and curl are two of the most critical pieces of open-source infrastructure. OpenSSL secures encrypted traffic. curl handles HTTP requests in everything from enterprise systems to embedded devices.

Both projects are maintained by small teams handling enormous responsibility. Both are heavily scrutinized. Both had bugs that slipped through anyway.

AI vulnerability discovery at this level means:

For maintainers: The burden of security review may shift. If autonomous analyzers can find bugs faster than human auditors, integrating them into development workflows becomes essential rather than optional.

For users: Software you assumed was secure because “everyone audits it” may have decade-old vulnerabilities. The auditing didn’t catch them. AI might.

For the security industry: Bug bounty programs designed around human researchers may need rethinking. The noise problem from AI slop is real. But so is the capability demonstrated by serious AI-assisted research.

The Bottom Line

An AI system found every vulnerability in OpenSSL’s January 2026 security release. Some of those bugs were older than many working security researchers. The same month, curl shut down its bug bounty because AI-generated reports made it unsustainable.

AI in security research isn’t a binary good or bad. It’s a capability amplifier. In the hands of researchers who validate findings, generate patches, and work with maintainers, it discovers critical vulnerabilities that humans missed for decades. In the hands of bounty hunters automating low-effort reports, it destroys the systems designed to find bugs.

The question isn’t whether AI will be used for security research. It’s whether the people using it will be responsible or not.