Three of the most popular AI workflow platforms — n8n, Langflow, and Flowise — all have critical remote code execution vulnerabilities being actively exploited right now. A supply chain attack on a library downloaded 95 million times per month exposed 4 terabytes of AI training data. And Microsoft just dropped its second-largest Patch Tuesday ever, with AI-driven vulnerability discovery tripling submission rates. The AI security situation this week is bad, and getting worse in a very specific direction.
The Mercor Breach: When Your AI Supply Chain Breaks
The biggest AI security story of the past two weeks started with 40 minutes of bad code on PyPI.
On March 27, a threat actor group called TeamPCP compromised the CI/CD pipeline of LiteLLM, an open-source Python library used by millions of developers to connect applications to AI services. The attack chain was sophisticated: TeamPCP first hit Trivy, a widely used security scanner, to steal credentials belonging to a LiteLLM maintainer. They then used those credentials to publish two malicious package versions — 1.82.7 and 1.82.8 — directly to PyPI.
The tainted packages were live for roughly 40 minutes before being caught and pulled. That was enough.
Mercor, a $10 billion AI startup that provides training data and contractors to OpenAI, Anthropic, and Meta, was among the victims. Lapsus$, the extortion gang, claimed to have obtained 4 terabytes of data — source code, database records, video interviews, and the personal information of over 40,000 contractors. Unconfirmed reports suggest datasets used by some of Mercor’s customers and details about their AI training methodologies may have been compromised.
The fallout has been swift. Meta indefinitely paused all work with Mercor. Five contractor lawsuits have been filed. And the broader lesson is uncomfortable: the AI industry’s supply chain is built on the same fragile foundations as the rest of software, with critical dependencies maintained by small teams and single points of failure.
AI Workflow Platforms: Three Critical RCEs, Three Active Exploits
If you run Flowise, Langflow, or n8n in production, stop reading and go patch. Seriously.
Flowise: CVSS 10.0, 12,000+ Exposed Instances
CVE-2025-59528 in Flowise’s CustomMCP node allows unauthenticated attackers to execute arbitrary code via a single POST request. The vulnerability exists because user-supplied configuration for MCP server connections gets passed directly to JavaScript’s Function() constructor without validation. Successful exploitation gives full Node.js runtime access — including child_process for command execution and fs for file system access.
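Flowise's bug lives in JavaScript, but the underlying anti-pattern — evaluating user-supplied configuration as code instead of parsing it as data — is language-agnostic. Here's a minimal Python sketch of the same mistake and its fix; `eval()` plays the role Flowise's `Function()` constructor plays, and none of this is Flowise's actual code:

```python
import json

def load_config_unsafe(raw: str) -> dict:
    # Anti-pattern: evaluating user-supplied configuration as code.
    # In Flowise the equivalent sink was JavaScript's Function()
    # constructor; eval() plays the same role here. Any expression --
    # including __import__("os").system(...) -- runs with the full
    # privileges of the server process.
    return eval(raw)  # deliberately unsafe, for illustration only

def load_config_safe(raw: str) -> dict:
    # Parse configuration as inert data. json.loads never executes
    # anything; malformed or malicious input raises ValueError instead.
    cfg = json.loads(raw)
    if not isinstance(cfg, dict):
        raise ValueError("config must be a JSON object")
    return cfg
```

The safe version accepts strictly less input — and that's the point: configuration endpoints should never need a code path to a runtime evaluator.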
The patch has been available since September 2025. More than six months later, active exploitation was detected in early April 2026, with over 12,000 instances still exposed to the internet.
Langflow: Exploited 20 Hours After Disclosure
CVE-2026-33017 affects Langflow’s public flow build endpoint. It accepts attacker-supplied flow data containing arbitrary Python code in node definitions, which gets executed server-side without sandboxing. No authentication required. One HTTP request. CVSS 9.3.
The Sysdig Threat Research Team observed the first exploitation attempts within 20 hours of the advisory’s publication. Attackers built working exploits directly from the advisory description. A companion vulnerability, CVE-2026-33309, adds arbitrary file write capabilities. The fix is in Langflow 1.9.0.
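The pattern the advisory describes can be sketched in a few lines. This is not Langflow's code — just an illustration of what "executes attacker-supplied node code server-side without sandboxing" means, plus a deliberately strict AST-based filter to show what even a minimal validation layer looks like (the real fix in 1.9.0 goes further):

```python
import ast

def build_flow_unsafe(flow: dict) -> None:
    # Sketch of the CVE-2026-33017 pattern: node definitions arrive
    # from an unauthenticated HTTP endpoint, and their "code" fields
    # are exec()'d server-side with no sandbox. One request = RCE.
    for node in flow.get("nodes", []):
        exec(node.get("code", ""))  # attacker-controlled input

def validate_flow(flow: dict) -> None:
    # Minimal mitigation sketch: parse node code into an AST and reject
    # imports and calls before anything reaches exec(). Deliberately
    # over-strict -- it exists to illustrate the idea, not to be complete.
    banned = (ast.Import, ast.ImportFrom, ast.Call)
    for node in flow.get("nodes", []):
        tree = ast.parse(node.get("code", ""))
        for stmt in ast.walk(tree):
            if isinstance(stmt, banned):
                raise ValueError("disallowed construct in node code")
```

Note how little the attacker needs: the unsafe version is exactly what "built working exploits directly from the advisory description" implies — the sink is one line.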
n8n: Unauthenticated RCE via Web Forms
CVE-2026-21858 (CVSS 10.0) chains multiple weaknesses in n8n’s web form handling. Attackers with access to public-facing n8n forms can leak internal server files, including the secret key stored at /home/node/.n8n/config and the user database at /home/node/.n8n/database.sqlite. With those, they forge admin cookies, create new workflows, and use n8n’s built-in “Execute Command” node to run arbitrary OS commands.
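Why does leaking a config file escalate to admin access? Because session cookies in most web apps are just signed tokens, and whoever holds the signing secret can mint them. The sketch below uses a generic HMAC scheme (roughly how JWT-style cookies work — n8n's exact cookie format may differ, and the secret value here is hypothetical):

```python
import hashlib
import hmac

def sign_token(payload: bytes, secret: bytes) -> bytes:
    # Generic HMAC-signed token, in the style of JWT session cookies.
    # The server trusts anything carrying a valid signature under its
    # secret key -- it has no other way to tell tokens apart.
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def verify_token(token: bytes, secret: bytes) -> bool:
    payload, _, sig = token.rpartition(b".")
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(sig, expected)

# Once the secret leaks (e.g. via the file-read bug), an attacker can
# mint a token the server cannot distinguish from a legitimate one:
leaked_secret = b"contents-of-.n8n-config"  # hypothetical value
forged = sign_token(b'{"user":"admin"}', leaked_secret)
assert verify_token(forged, leaked_secret)
```

This is also why rotating the secret (not just patching) is mandatory after a leak: every previously issued and forgeable token stays valid until the key changes.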
A second vulnerability, CVE-2026-27493, is even simpler: a double-evaluation bug in n8n’s Form nodes lets attackers execute arbitrary shell commands by typing a payload into a public “Contact Us” form’s Name field. Upgrade to n8n 1.121.0 or later.
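Double-evaluation bugs are easy to introduce and easy to miss. The toy template engine below (not n8n's — just the pattern) evaluates `{{ expr }}` expressions; run it once and user data stays inert, run its own output through it a second time and expression syntax *inside the user's form field* gets executed:

```python
import re

def render_once(template: str, values: dict) -> str:
    # Toy expression template: {{ expr }} is evaluated with eval().
    return re.sub(
        r"\{\{(.*?)\}\}",
        lambda m: str(eval(m.group(1), {}, values)),
        template,
    )

def render_twice(template: str, values: dict) -> str:
    # The double-evaluation bug (sketch): the output of the first pass
    # is fed back through the evaluator, so expression syntax inside
    # user data is treated as code on the second pass.
    return render_once(render_once(template, values), values)

# An attacker types this into a public form's Name field:
name = '{{ __import__("os").getpid() }}'
# Single evaluation substitutes the value and leaves the payload inert:
assert render_once("Hello {{ name }}", {"name": name}) == \
    'Hello {{ __import__("os").getpid() }}'
# Double evaluation executes it (here it just reads the PID; a real
# payload would run shell commands):
assert "__import__" not in render_twice("Hello {{ name }}", {"name": name})
```

The fix is the same everywhere this bug appears: treat rendered output as finished data, never as a new template.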
The Pattern
All three platforms share a common failure mode: they accept user-supplied code or configuration through public-facing endpoints and execute it without proper validation or sandboxing. These aren’t obscure tools — they’re the backbone of thousands of AI automation pipelines. And in every case, the vulnerability was either known for months before exploitation (Flowise) or exploited within hours of disclosure (Langflow).
taskflow-ai: Another MCP Vector
Add CVE-2026-5831 to the list. Agions’ taskflow-ai has an OS command injection flaw in its terminal_execute MCP server handler. Versions up to 2.1.8 are affected — upgrade to 2.1.9. The vulnerability is in src/mcp/server/handlers.ts, where input parameters aren’t sanitized before being passed to the system shell.
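The taskflow-ai flaw is classic OS command injection: user input interpolated into a shell string. The fix is also classic — pass arguments as a list so no shell ever parses them. A Python sketch of both sides (taskflow-ai itself is TypeScript; the pattern is identical):

```python
import subprocess

def run_unsafe(user_arg: str) -> str:
    # Injection pattern (sketch): user input becomes part of a shell
    # command line. "; rm -rf /" or "$(...)" in the argument is parsed
    # as shell syntax, not data.
    out = subprocess.run(
        f"echo {user_arg}", shell=True, capture_output=True, text=True
    )
    return out.stdout

def run_safe(user_arg: str) -> str:
    # Pass arguments as a list: no shell is involved, so shell
    # metacharacters in the input are inert data.
    out = subprocess.run(
        ["echo", user_arg], capture_output=True, text=True
    )
    return out.stdout
```

With the input `"hello; echo pwned"`, the unsafe version runs two commands; the safe version echoes the literal string. Any MCP handler that builds shell strings from tool-call parameters has this exact problem.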
MCP server handlers are emerging as a consistent attack surface across AI tooling. When every AI tool needs to connect to external services via standardized protocols, every handler that touches user input becomes a potential entry point.
AI-Powered Attacks Are Scaling
The Fortinet firewall campaign reported by Amazon in February continues to reverberate. A small group of Russian-speaking hackers — possibly a single person — used AI to breach 600+ FortiGate devices across 55 countries in just five weeks. The attack didn’t use any exploits. Instead, the hackers used a tool called ARXON that fed reconnaissance data from compromised appliances into DeepSeek and Claude, which generated structured attack plans — including instructions for gaining Domain Admin, locating credentials, and spreading laterally.
The attackers targeted Veeam Backup & Replication servers to destroy backups, a classic ransomware preparation step. The campaign demonstrated that AI doesn’t need to find zero-days to be devastating. It just needs to be faster and more systematic at exploiting weak credentials and misconfigurations than human defenders can keep up with.
Microsoft’s Record Patch Tuesday
Microsoft’s April 2026 Patch Tuesday fixed 167 vulnerabilities, including two zero-days — one actively exploited. Eight critical flaws, seven of them RCE. Nearly 60 browser vulnerabilities alone.
The volume itself tells a story. Vulnerability programs are reporting that AI-driven discovery has essentially tripled their incoming submission rates. This isn’t a spike — it’s the new baseline. As AI models get better at finding bugs (see: Mythos), the number of vulnerabilities being reported will keep climbing, and patch cycles that already strain security teams will become even more demanding.
What This Means
Three trends are converging:
AI tools are becoming attack surfaces. The rush to deploy AI workflow platforms — Flowise, Langflow, n8n, taskflow-ai — has created a new category of internet-facing services that routinely execute code based on user input. These platforms were designed for flexibility, not security, and attackers have noticed.
Supply chains remain the weakest link. The Mercor breach started with a compromised security scanner, moved through a Python library, and ended with 4TB of AI training data exposed. The AI industry’s dependency chains are long and fragile, and a single maintainer account can be the starting point for a billion-dollar breach.
AI is accelerating both offense and defense. The Fortinet campaign showed what AI-assisted attackers can do with no zero-days and weak passwords. Microsoft’s patch volume shows what AI-assisted vulnerability discovery looks like. We’re in a period where both sides are scaling simultaneously, and it’s not clear who’s winning.
What You Can Do
If you run AI workflow tools:
- Patch Flowise, Langflow, n8n, and taskflow-ai immediately
- Never expose these platforms directly to the internet without authentication
- Audit all MCP server configurations for command injection vectors
- Monitor for unusual workflow creation or execution patterns
If you use AI supply chain tools:
- Pin your dependencies to specific, verified versions
- Monitor PyPI and npm for package substitution attacks
- Don’t auto-update production dependencies — review changes first
- Watch for LiteLLM-specific advisories if you use it
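A lightweight way to operationalize the pinning advice is a CI check that compares installed versions against your pins and a known-bad list. A minimal sketch — the `PINNED` values are hypothetical and the compromised versions are the ones named in the LiteLLM incident above; a real setup would read these from your lock file and an advisory feed:

```python
from importlib import metadata

# Known-bad versions from this incident (per the advisory above).
COMPROMISED = {"litellm": {"1.82.7", "1.82.8"}}

# Hypothetical pins -- in practice, read from a lock file
# (requirements.txt with ==, poetry.lock, uv.lock, etc.).
PINNED = {"litellm": "1.82.6"}

def check_pins(pinned: dict, compromised: dict,
               get_version=metadata.version) -> list:
    """Return problems: known-compromised or drifted-from-pin installs."""
    problems = []
    for pkg, want in pinned.items():
        try:
            have = get_version(pkg)
        except metadata.PackageNotFoundError:
            continue  # not installed in this environment
        if have in compromised.get(pkg, set()):
            problems.append(f"{pkg}=={have} is a known-compromised release")
        elif have != want:
            problems.append(f"{pkg}=={have} drifted from pin {want}")
    return problems
```

Run `check_pins(PINNED, COMPROMISED)` in CI and fail the build on any non-empty result. It won't catch a malicious version published *at* your pin, but it catches drift and known-bad releases — which is exactly what a 40-minute window attack exploits.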
If you maintain infrastructure:
- Check FortiGate devices for indicators of compromise from the ARXON campaign
- Ensure backup systems (especially Veeam) are isolated from production networks
- Assume that AI-assisted attackers will find and exploit weak credentials faster than you expect
- Apply the April Patch Tuesday updates, especially the two zero-day fixes