AI Coding Tools Are Flooding Open Source With Security Holes

Georgia Tech researchers tracking real CVEs find 35 vulnerabilities from AI-generated code in March alone—and estimate the actual number is 5 to 10 times higher.


Georgia Tech security researchers have been quietly tracking something most of the industry would rather ignore: actual CVEs—not benchmarks, not theoretical risks—caused by AI coding tools.

The numbers are getting worse. In March 2026 alone, the team documented 35 new CVEs directly attributable to AI-generated code. That’s up from 15 in February and just 6 in January. The trend line isn’t subtle.

The Vibe Security Radar

The project is called Vibe Security Radar, run by the Systems Software & Security Lab (SSLab) at Georgia Tech’s School of Cybersecurity and Privacy. Since May 2025, researcher Hanqing Zhao and team have been doing what nobody else was: tracing vulnerabilities in public databases back to their source commits to determine if AI tools introduced them.

“Nobody is actually tracking it,” Zhao told The Register. “We want real numbers. Not benchmarks, not hypotheticals, real vulnerabilities.”

Which Tools Are Responsible

Out of 74 confirmed CVEs tracked since the project started:

  • Claude Code: 49 CVEs (11 critical severity)
  • GitHub Copilot: 15 CVEs (2 critical)
  • Devin, Google Jules, Cursor, Aether: 2 each
  • Atlassian Rovo, Roo Code: 1 each

Claude Code’s dominance in the statistics comes with a caveat: the tool “always leaves a signature” in commits, making its output easy to trace. Copilot’s inline suggestions are harder to detect because they blend into human-written code. The true proportions may therefore differ from what the tracking suggests.

The Real Number Is Much Worse

The 74 confirmed CVEs represent what the researchers call “a lower bound.” Their estimate? The actual count is likely 5 to 10 times higher—somewhere between 400 and 700 cases across the open-source ecosystem.

Why the gap? Most AI tool traces get stripped during development. Commit messages are rewritten. Code is refactored before pushing. The signature that lets researchers track Claude Code often disappears before it hits a public repository.

What Kind of Bugs

The vulnerabilities aren’t exotic. They’re the kinds of security holes that human code review should catch—but doesn’t when reviewers are overwhelmed by machine-generated output:

  • Directory traversal (CVE-2025-55526, severity 9.1, in n8n-workflows)
  • Improper input handling (GHSA-3j63-5h8p-gf7c in the x402 SDK)
  • SQL injection, XSS, authentication bypasses
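The directory-traversal class is a good illustration of how mundane these bugs are. A minimal sketch (hypothetical, not the actual n8n-workflows code) of the pattern that keeps reappearing, alongside the check that closes it:

```python
import os

BASE_DIR = "/srv/app/workflows"  # hypothetical app directory

def read_workflow_unsafe(name: str) -> str:
    # Vulnerable pattern: user input joined straight into a filesystem path.
    # A name like "../../../etc/passwd" walks right out of BASE_DIR.
    with open(os.path.join(BASE_DIR, name)) as f:
        return f.read()

def read_workflow_safe(name: str) -> str:
    # Fix: resolve the final path and verify it is still inside BASE_DIR.
    base = os.path.realpath(BASE_DIR)
    path = os.path.realpath(os.path.join(base, name))
    if os.path.commonpath([path, base]) != base:
        raise ValueError(f"path escapes base directory: {name!r}")
    with open(path) as f:
        return f.read()
```

The unsafe version is exactly the kind of code that compiles, runs, and passes a cursory review; the containment check is the part that tends to be missing from generated output.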

A 2024 Georgetown University study found that 48% of AI-generated code snippets that compiled successfully contained flagged security bugs. Only 30% passed verification and were deemed secure.

The Behavioral Shift

Here’s the real problem: how people are using these tools has changed fundamentally.

“A year ago most developers used AI for autocomplete,” Zhao noted. “Now people are vibe coding entire projects, shipping code they’ve barely read.”

Claude Code alone now accounts for more than 4% of all public commits on GitHub—over 15 million total commits. That’s not autocomplete. That’s wholesale delegation of coding to AI, and the security implications are showing up in CVE databases.

What This Means

The AI coding revolution is real, and it’s making developers more productive. But “more productive” doesn’t mean “more secure.” These tools are trained on public code, including all its security mistakes, and they reproduce those patterns at scale.

“Even teams that do code review aren’t going to catch everything when half the codebase is machine-generated,” Zhao said.

The gap between AI-generated code volume and security review capacity is widening. The CVE count is one measure of that gap. It’s not a theoretical concern anymore—it’s a documented, measurable reality that’s getting worse month over month.

What You Can Do

If you’re using AI coding tools:

  1. Review everything. Don’t assume AI-generated code is secure because it runs. The 48% vulnerability rate in compilable code should be sobering.

  2. Use security linters and SAST tools. Automated scanning catches patterns that AI tools reproduce from their training data.

  3. Keep the signatures. If you’re using Claude Code or similar tools, preserving commit attribution helps the security community track problems.

  4. Watch the Vibe Security Radar. Georgia Tech is doing the tracking that tool vendors aren’t. The dashboard shows real-time CVE attribution.
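Recommendation 3 can be turned into a quick audit of your own history. A minimal sketch that scans a repository's commit messages for AI attribution trailers; the exact trailer strings below are assumptions, so adjust the patterns to whatever your tools actually write:

```python
import re
import subprocess

# Trailer patterns to look for. These exact strings are assumptions --
# check what your own tooling writes into commit messages.
AI_TRAILERS = [
    re.compile(r"Co-Authored-By: Claude", re.IGNORECASE),
    re.compile(r"Generated with .*Claude Code", re.IGNORECASE),
]

def ai_attributed_commits(repo_dir: str = ".") -> list[str]:
    """Return hashes of commits whose messages carry an AI attribution trailer."""
    log = subprocess.run(
        ["git", "log", "--format=%H%x00%B%x00"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    ).stdout
    # Records alternate: hash, full message body, hash, body, ...
    records = log.split("\x00")
    return [
        sha.strip()
        for sha, body in zip(records[0::2], records[1::2])
        if any(p.search(body) for p in AI_TRAILERS)
    ]
```

Running this before a history rewrite tells you how much attribution you are about to erase; the flip side of the researchers' point is that every stripped trailer makes the next Vibe Security Radar count an undercount.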

The tools are here to stay. The security practices need to catch up.