AI Security Roundup: Vercel Breached Through AI Tool, n8n's CVSS 10 Nightmare, and the Supply Chain Keeps Breaking

An AI productivity tool compromise led to Vercel customer data theft, n8n's workflow platform had an unauthenticated RCE scoring a perfect 10, and Mercor's LiteLLM-linked breach exposed training data for OpenAI and Anthropic.


Last week’s security roundup covered Lovable’s 48-day open door, MCP’s design-level command injection, and NIST abandoning 29,000 CVEs. This week, the AI supply chain keeps finding new ways to break. A breach at an AI productivity tool cascaded into Vercel, exposing customer data. The n8n workflow platform — used by thousands of organizations to orchestrate AI agents — had an unauthenticated remote code execution flaw scoring a perfect CVSS 10.0. And Mercor, the AI training data startup valued at $10 billion, confirmed that a supply chain attack through LiteLLM exposed data it was collecting for OpenAI, Anthropic, and Meta.

Vercel Breached Through a Compromised AI Tool

On April 19, Vercel disclosed a security incident that started not with Vercel itself, but with Context.ai — a third-party AI productivity tool used by a Vercel employee. Attackers compromised Context.ai, hijacked the employee’s Google Workspace account through stolen OAuth tokens, and used that access to pivot into Vercel’s internal systems.

The attack chain, detailed by Trend Micro, began with a Lumma Stealer malware infection at Context.ai around February 2026, reportedly triggered when a Context.ai employee downloaded Roblox game exploit scripts. The stolen credentials gave the attackers access to Context.ai's AWS environment, where they exfiltrated OAuth tokens for Context.ai's consumer product. With those tokens, they pivoted into a Vercel employee's account, then enumerated and decrypted environment variables belonging to a subset of Vercel customers.

Vercel CEO Guillermo Rauch confirmed the attack chain publicly. A threat actor using the ShinyHunters handle claimed responsibility and listed the stolen data for $2 million on BreachForums.

Vercel described the attacker as “highly sophisticated based on their operational velocity and in-depth understanding of Vercel’s product API surface.” The company is working with Google Mandiant on the investigation and has notified affected customers to rotate credentials.

What You Can Do

If you host projects on Vercel: rotate all environment variables and secrets, especially if you received a notification from Vercel. Even if you didn’t, treat this as a reminder that environment variables stored in any cloud platform are only as secure as the weakest link in the provider’s supply chain. Review which third-party integrations have access to your deployment credentials.
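
If you want a quick inventory before rotating, a script along these lines can list which environment variables each project defines. This is a sketch against Vercel's public REST API; the v9 endpoint paths and response field names are assumptions based on the API at the time of writing, so verify against the current docs before relying on it.

```python
# Sketch: inventory env var names per Vercel project before rotating.
# Assumes the v9 REST endpoints and response shapes current at time of
# writing (GET /v9/projects, GET /v9/projects/{id}/env) -- check Vercel's
# API docs. Requires a personal access token in VERCEL_TOKEN.
import os
import requests

API = "https://api.vercel.com"
HEADERS = {"Authorization": f"Bearer {os.environ['VERCEL_TOKEN']}"}

projects = requests.get(f"{API}/v9/projects", headers=HEADERS, timeout=10)
projects.raise_for_status()

for project in projects.json().get("projects", []):
    env = requests.get(
        f"{API}/v9/projects/{project['id']}/env", headers=HEADERS, timeout=10
    )
    env.raise_for_status()
    # Only names are printed; values never leave Vercel's side.
    names = [var["key"] for var in env.json().get("envs", [])]
    print(f"{project['name']}: {len(names)} env vars -> {', '.join(names)}")
```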

n8n’s Perfect 10: Unauthenticated RCE in the AI Workflow Platform

CVE-2026-21858 scored a CVSS 10.0 — the maximum possible severity rating — for an unauthenticated remote code execution vulnerability in n8n, the open-source workflow automation platform increasingly used to orchestrate AI agents and LLM pipelines.

The vulnerability, discovered by Cyera Research Labs and dubbed “Ni8mare,” exploits a content-type confusion flaw in n8n’s Form Webhook node. An attacker sends a crafted request that tricks the parser into treating JSON as multipart upload data, overriding internal file references. From there, the attacker can read arbitrary files from disk, extract the SQLite database containing encrypted secrets, recover the encryption key, forge a valid admin session, and execute arbitrary OS commands — all without ever logging in.
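
To make the vulnerability class concrete, here is a deliberately simplified Python sketch, not n8n's actual code: a parser that merges attacker-supplied fields over internal state is one request away from handing the attacker control of a file reference.

```python
# Illustration of the content-type confusion class -- NOT n8n's actual
# code. The flaw pattern: fields parsed from an attacker-chosen content
# type are merged over internal state, letting a request field shadow an
# internal file reference.
import json

INTERNAL_DEFAULTS = {"file_ref": "/data/uploads/form-upload.bin"}


def resolve_file_vulnerable(headers: dict, body: bytes) -> str:
    fields = dict(INTERNAL_DEFAULTS)
    if "json" in headers.get("content-type", "").lower():
        fields.update(json.loads(body))  # attacker keys overwrite internals
    return fields["file_ref"]


def resolve_file_fixed(headers: dict, body: bytes) -> str:
    # User fields go into their own namespace (handled elsewhere); they can
    # never shadow the internal file reference.
    user_fields = json.loads(body) if "json" in headers.get("content-type", "").lower() else {}
    return INTERNAL_DEFAULTS["file_ref"]


evil = json.dumps({"file_ref": "/etc/passwd"}).encode()
print(resolve_file_vulnerable({"content-type": "application/json"}, evil))  # /etc/passwd
print(resolve_file_fixed({"content-type": "application/json"}, evil))       # internal path
```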

The impact is significant: an estimated 100,000+ self-hosted n8n instances are potentially exposed globally. Cloud-hosted n8n deployments are not affected. The vulnerability affects all self-hosted versions prior to 1.121.0, though the comprehensive fix didn't land until version 1.121.3.

This matters beyond n8n specifically because workflow automation platforms have become the connective tissue of AI agent deployments. A compromised n8n instance doesn’t just give an attacker one system — it gives them access to every API key, database credential, and service connection configured in that instance’s workflows.

What You Can Do

If you self-host n8n: update to version 1.121.3 immediately. If you can’t update right away, restrict network access to n8n’s webhook and form endpoints. Audit which secrets and API keys are stored in your n8n workflows — if the instance was publicly accessible before the patch, assume those credentials may be compromised.
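
One way to sanity-check exposure is to probe the instance from an untrusted network position. A minimal sketch, assuming n8n's default path prefixes (your reverse proxy setup may differ, and the base URL here is a hypothetical placeholder):

```python
# Probe whether an n8n instance's webhook/form endpoints answer from this
# network position. Assumes n8n's default path prefixes; BASE is a
# hypothetical instance URL -- substitute your own.
import requests

BASE = "https://n8n.example.internal"

for prefix in ("/webhook/", "/webhook-test/", "/form/"):
    try:
        resp = requests.get(BASE + prefix, timeout=5, allow_redirects=False)
        # Any HTTP response means the endpoint is reachable from here; run
        # this from outside your trusted network to gauge public exposure.
        print(f"{prefix}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as exc:
        print(f"{prefix}: unreachable ({type(exc).__name__})")
```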

Mercor’s LiteLLM Supply Chain Breach

Mercor, the $10 billion AI training data startup that supplies data to OpenAI, Anthropic, and Meta, confirmed in early April that it suffered a supply chain attack through a vulnerability in LiteLLM, the widely used AI integration proxy.

Attackers exploited a deserialization vulnerability in LiteLLM that allowed arbitrary code execution on any server running an affected version. A malicious serialized Python payload submitted to an API endpoint enabled lateral movement to Mercor’s candidate databases, potentially exposing sensitive data about the contract workers supplying training data to frontier AI companies.
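
The report doesn't specify the serialization format involved, but Python's pickle module is the classic illustration of why deserializing untrusted bytes is equivalent to running attacker code:

```python
# Why "deserialization vulnerability" means remote code execution:
# unpickling attacker-controlled bytes calls whatever __reduce__ returns.
# (Illustrative of the class only; not LiteLLM's actual flaw.)
import os
import pickle


class Payload:
    def __reduce__(self):
        # The tuple (callable, args) is invoked during unpickling.
        return (os.system, ("echo attacker code runs during deserialization",))


blob = pickle.dumps(Payload())  # what an attacker would submit
pickle.loads(blob)              # server-side parse -> command executes
```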

Mercor said it was "one of thousands of companies" affected by the LiteLLM vulnerability; the attack has been attributed to a hacking group called TeamPCP. The incident highlights a growing problem: the AI supply chain rests on a handful of shared open-source libraries (LiteLLM, LangChain, various MCP implementations), and a vulnerability in any one of them cascades across thousands of deployments simultaneously.

This is the second major LiteLLM security issue in weeks. As we covered last week, LiteLLM was separately affected by the MCP stdio command injection vulnerability (CVE-2026-30623), which it patched in v1.83.6 by restricting commands to an allowlist.

Quick Hits

  • Another MCP command injection surfaces. CVE-2026-5831 affects taskflow-ai, an MCP server implementation, through the same class of vulnerability we've been tracking: a terminal_execute handler that validates only the first token of a command against an allowlist while passing the rest unsanitized to execSync. Any shell metacharacter after the first allowlisted word executes arbitrary commands (see the sketch after this list). Patched in version 2.1.9. (GitHub)

  • Meta's internal AI agent leaked sensitive data. An AI agent operating inside Meta's internal systems hallucinated incorrect access instructions, exposing HR records, financial projections, and product timelines to unauthorized employees for approximately 40 minutes. The agent had overly broad read permissions across multiple internal data stores. This is a new category of security incident: AI-induced misconfiguration that bypasses conventional access controls.

  • Prompt injection attacks up 32% since November. Google researchers documented a 32% increase in malicious prompt injection attempts between November 2025 and February 2026. The sophistication of observed attacks remains relatively low, but the volume is climbing steadily as more AI agents are deployed in production environments.
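
The taskflow-ai flaw above lives in Node.js, but the pattern is language-agnostic. Here is a minimal Python rendering of the same class, first-token validation in front of a shell (this shows the vulnerability class, not the project's actual code):

```python
# The first-token-allowlist anti-pattern, rendered in Python (the real
# taskflow-ai handler is Node.js and uses execSync; this sketch shows the
# class, not the project's code).
import shlex
import subprocess

ALLOWLIST = {"ls", "echo", "cat"}


def run_vulnerable(cmd: str) -> None:
    tokens = cmd.split()
    if tokens and tokens[0] in ALLOWLIST:
        # shell=True re-interprets metacharacters: "echo ok; rm -rf /"
        # passes the check above, and everything after ";" runs too.
        subprocess.run(cmd, shell=True)


def run_safer(cmd: str) -> None:
    argv = shlex.split(cmd)
    if argv and argv[0] in ALLOWLIST:
        # No shell: metacharacters are literal arguments, never operators.
        subprocess.run(argv)


run_vulnerable("echo ok; echo injected-command-also-ran")
run_safer("echo ok; echo this-is-just-a-literal-argument")
```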

The Pattern

The supply chain problem in AI security is compounding. Vercel was breached through an AI tool. Mercor was breached through an AI proxy library. n8n’s AI workflow orchestrator was wide open. Every week, the attack surface grows — not because anyone is building bad software on purpose, but because the AI tooling ecosystem is stacking dependencies faster than anyone can audit them.

Last week we wrote that “AI tools are creating vulnerabilities faster than anyone can track them.” This week’s stories are the proof. The Vercel breach started with a Context.ai employee downloading game cheats. The Mercor breach exploited a deserialization flaw in a library that thousands of AI deployments depend on. n8n’s CVSS 10 was sitting in the open, waiting for anyone who thought to send a malformed multipart request.

The common thread: each of these systems trusted something it shouldn’t have. Context.ai trusted its employee’s endpoint hygiene. Vercel trusted Context.ai’s OAuth tokens. Mercor trusted LiteLLM’s input handling. n8n trusted that webhook requests would arrive in the expected format. When you’re building AI systems on top of AI libraries on top of AI services, every trust assumption is a potential break point — and this week, several of them broke at once.