A vibe-coding platform left thousands of users’ source code, database credentials, and AI chat histories exposed through a trivial API flaw — for at least 48 days. Anthropic’s Model Context Protocol has a design-level vulnerability that enables remote code execution across 200,000 servers, and Anthropic says that’s your problem, not theirs. Three major AI coding agents will leak your secrets if someone writes the right comment on a pull request. And NIST, the organization responsible for scoring every vulnerability in the national database, just declared it can’t keep up anymore and abandoned 29,000 unprocessed CVEs. This week’s security roundup is bleak — and the common thread is that AI tooling is creating vulnerabilities faster than anyone can track them.
Lovable’s 48-Day Open Door
On April 20, security researcher weezerOSINT demonstrated that Lovable’s API — the popular AI-powered “vibe coding” platform — had a broken object-level authorization (BOLA) flaw that let any authenticated free-account user access data belonging to other users. Five API calls from a free account was all it took to pull source code, database credentials, AI chat histories, and customer data from thousands of projects.
The vulnerability was running for at least 48 days — from the first HackerOne report on March 3 to public disclosure on April 20. Every project created before November 2025 was potentially affected.
It gets worse. The bug was first reported via HackerOne on March 3 — 48 days before the public disclosure. HackerOne triagers closed the reports without escalation, concluding that seeing public projects’ data was “intentional behaviour.”
Lovable’s response cycled through three positions: first it posted on X that it “did not suffer a data breach” and called the exposure “intentional behaviour.” Then it blamed its own documentation, saying the word “public” was “unclear.” Then it blamed HackerOne.
Because Lovable stores full AI conversation histories, an attacker could read every prompt a developer ever sent — including pasted error logs, business logic discussions, and credentials shared mid-session. If you’ve used Lovable and ever pasted an API key or database URL into a chat, assume it was exposed.
What You Can Do
If you’ve used Lovable: rotate every credential you’ve ever mentioned in a Lovable chat session. Check your project’s Supabase or database credentials, API keys, and any secrets referenced in your AI-assisted development conversations. Don’t wait for Lovable to tell you whether you were affected.
Anthropic’s MCP: Remote Code Execution by Design
On April 15, OX Security published an advisory disclosing a command injection vulnerability in Anthropic’s Model Context Protocol (MCP) — the increasingly popular standard for connecting AI agents to external tools and data sources.
The flaw is in MCP’s stdio transport layer. When an MCP server is configured with transport: stdio, the command field in StdioServerParameters gets passed directly to the operating system as a subprocess. An authenticated user with permission to create MCP servers can execute arbitrary commands on the host machine.
The scale is significant. The vulnerability affects MCP’s official SDKs across Python, TypeScript, Java, and Rust — spanning more than 150 million package downloads and an estimated 200,000 server instances.
Anthropic’s response: “Sanitization is the responsibility of the client developer.”
LiteLLM, one of the most widely-used MCP implementations, patched the issue in v1.83.6 by restricting stdio commands to a whitelist of known launchers (npx, uvx, python, node, docker, deno). But every other MCP implementation that uses stdio transport remains potentially vulnerable unless it has independently added its own sanitization.
The CVE assignments keep coming — CVE-2026-30623 for the core flaw, CVE-2026-22252 for the LiteLLM-specific variant — and 7,000 publicly accessible MCP servers remain exposed.
Why This Matters
MCP is being positioned as the USB-C of AI integrations — a universal standard that every AI agent should support. When the universal standard has a design-level vulnerability and the protocol maintainer says it’s not their problem to fix, every implementation inherits the risk. If you’re running MCP servers with stdio transport, audit your configuration now.
Comment and Control: Your AI Coding Agent Will Leak Your Secrets
Researcher Aonan Guan, along with Zhengyu Liu and Gavin Zhong from Johns Hopkins University, disclosed a prompt injection attack called “Comment and Control” that works against Claude Code’s security review, Google’s Gemini CLI Action, and GitHub Copilot Agent.
The attack is simple: write a malicious instruction in a PR title, issue comment, or hidden HTML comment. When an AI coding agent processes the repository, it reads that text, interprets it as instructions, and executes them — including running shell commands like whoami and ps auxeww, extracting environment variables containing ANTHROPIC_API_KEY, GITHUB_TOKEN, and GEMINI_API_KEY, and posting the results back into PR comments or Actions logs.
The name “Comment and Control” is a nod to “Command and Control” — because GitHub itself becomes the C2 channel. The attacker never needs direct access to your infrastructure. They just need to get text in front of your AI agent.
Anthropic classified the Claude Code vulnerability as CVSS 9.4 Critical and paid a $100 bounty. Google paid $1,337. GitHub paid $500. But none of the vendors published broad advisories or CVEs, leaving repositories potentially pinned to vulnerable agent versions.
What You Can Do
If you’re running AI coding agents on your repositories: restrict what environment variables are accessible during agent execution, audit which GitHub Actions have access to secrets, and never give an AI agent unrestricted shell access to a machine that holds production credentials.
NIST Gives Up on 29,000 CVEs
On April 17, NIST announced that it will no longer provide full enrichment — CVSS scores, CPE mappings, and detailed descriptions — for every CVE submitted to the National Vulnerability Database.
The reason: CVE submissions have increased 263% since 2020, with first-quarter 2026 submissions running a third higher than the same period last year. NIST enriched 42,000 CVEs in 2025, 45% more than any prior year, and it still wasn’t enough.
NIST moved approximately 29,000 unenriched vulnerabilities published before March 1, 2026 into a new “Not Scheduled” category — meaning they will likely never be scored or analyzed. Going forward, only three categories get full treatment: CVEs in CISA’s Known Exploited Vulnerabilities catalog, CVEs for software used within the federal government, and CVEs for critical software as defined by Executive Order 14028.
Everything else gets a CVE number and… that’s it.
NIST cites AI-driven vulnerability discovery as a key driver of the submission surge. The tools that can find vulnerabilities in seconds are burying the database that’s supposed to track them.
Why This Matters
The NVD is the foundation that vulnerability scanners, compliance frameworks, and security teams worldwide depend on for threat assessment. When NIST stops scoring most vulnerabilities, the entire downstream ecosystem — from Nessus to Snyk to your company’s compliance audits — loses data quality. Security teams that rely solely on CVSS scores from the NVD for prioritization need a backup plan, and they need it now.
The Week’s Pattern
Every story this week traces back to the same dynamic: AI tools are creating attack surface and discovering vulnerabilities faster than the security infrastructure built to handle them can keep up. Lovable’s AI-generated apps ship with exposed credentials. MCP’s AI integration protocol ships with command injection by design. AI coding agents execute attacker instructions from PR comments. And the vulnerability database can’t process the flood of bugs that AI tools are finding.
We’re building the car while driving it, and this week we found out the brakes weren’t installed yet.