OpenClaw's Supply Chain Collapse: How 1,184 Malicious Skills Poisoned the AI Agent Ecosystem

One in five packages in OpenClaw's ClawHub registry contain malicious code. The first coordinated attack on AI agent infrastructure reveals systemic vulnerabilities that enterprises are only beginning to understand.

Computer screen showing code with security lock overlay

In late January, security researchers at Koi Security audited ClawHub, the package registry that powers OpenClaw’s AI agent ecosystem. They expected to find vulnerabilities. What they found was a coordinated poisoning campaign already in progress.

Of the 2,857 skills available on ClawHub at the time, 341 were malicious. Within three weeks, as the marketplace grew to over 10,700 skills, that number climbed to 824. By March, Antiy CERT confirmed 1,184 malicious packages - roughly one in five skills in the entire ecosystem contained code designed to steal credentials, exfiltrate data, or compromise systems.

The AI agent gold rush has produced its first major supply chain attack. And unlike npm or PyPI compromises that affect developers, these attacks target the AI agents themselves - and through them, every user who trusts an agent to act on their behalf.

How the Attack Works

The campaign, dubbed “ClawHavoc” by researchers, exploited a fundamental weakness in how AI agents process instructions. Malicious skills embedded harmful commands inside SKILL.md files - the configuration files that tell OpenClaw agents what a skill does and how to use it.

According to Trend Micro’s analysis, the attackers used AI agents as “trusted intermediaries.” When users installed seemingly legitimate skills with names like “solana-wallet-tracker” or “code-formatter-pro,” the embedded instructions would present fake setup requirements to unsuspecting users.

The infection chain followed a consistent pattern:

  1. The skill claimed an “OpenClawCLI” prerequisite was needed
  2. Users were directed to download the CLI from attacker-controlled URLs
  3. A fake password dialog collected system credentials
  4. The Atomic macOS Stealer payload executed, harvesting everything

The stolen data was comprehensive: Apple and KeePass keychains, browser data from 19 different browsers, cryptocurrency wallets (150+ wallet types), Telegram and Discord messages, and files from Desktop, Documents, and Downloads folders.

The Cline CLI Breach

On February 17, the attack expanded beyond ClawHub. Between 3:26 AM and 11:30 AM PT, attackers compromised the Cline CLI, a popular AI-powered coding assistant with over 600,000 weekly downloads.

The method was sophisticated. Attackers exploited “Clinejection” - a prompt injection vulnerability in the repository’s GitHub issue triage workflow. By manipulating cache entries and leveraging GitHub Actions cache poisoning, they stole production npm publish credentials.

The compromised Cline CLI 2.3.0 contained a simple addition to package.json: "postinstall": "npm install -g openclaw@latest". Every developer who ran npm install during those eight hours automatically installed OpenClaw globally on their system - roughly 4,000 downloads before the attack was discovered.

Microsoft Threat Intelligence observed a “noticeable uptick” in OpenClaw installations. The affected developers were now running an AI agent framework that, while not inherently malicious, connected them to an ecosystem where one in five skills could compromise their systems.

ClawJacked: When Websites Hijack Your AI Agent

Even developers who avoided ClawHub’s poisoned packages weren’t safe. The ClawJacked vulnerability allowed any website to commandeer locally running OpenClaw agents.

The attack exploited WebSocket connections to localhost. As researchers explained: “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn’t block these cross-origin connections.”

When developers visited attacker-controlled sites, embedded JavaScript would:

  1. Open a WebSocket to the local OpenClaw gateway port
  2. Brute-force the gateway password (no rate-limiting for localhost)
  3. Register as a trusted device - automatically approved without user prompts
  4. Gain complete control over the AI agent

The vulnerability stemmed from misplaced trust assumptions. OpenClaw’s gateway relaxed security for local connections, assuming localhost traffic was inherently safe. It wasn’t.

OpenClaw patched ClawJacked in version 2026.2.25, released February 26 - under 24 hours from responsible disclosure. But the underlying architectural assumption that local AI agents can trust browser connections remained a design flaw across the ecosystem.

The PleaseFix Vulnerability: Zero-Click Agent Hijacking

While OpenClaw scrambled to fix its infrastructure, security firm Zenity Labs disclosed PleaseFix, a family of critical vulnerabilities affecting agentic browsers including Perplexity Comet.

PleaseFix represents the evolution of ClickFix, a social engineering technique where attackers trick users into executing malicious actions. The twist: PleaseFix works on AI agents, “allowing malicious actions to be triggered without human involvement.”

The first exploit demonstrated zero-click compromise. When a user asked their AI agent to perform routine tasks - like accepting calendar invites - malicious content embedded in the invite could hijack the agent’s session. The agent would execute attacker commands while returning expected results to the user, masking the breach.

The second exploit abused agent-authorized workflows to steal credentials from password managers. Rather than exploiting the password managers directly, attackers manipulated task execution to extract stored credentials - all within a legitimate authenticated session.

“Attackers can push untrusted data into AI browsers and hijack the agent itself, inheriting whatever access it has been granted,” explained Michael Bargury, Zenity’s CTO.

Perplexity addressed the vulnerability before public disclosure. But PleaseFix illustrated a broader problem: AI agents that inherit user permissions become high-value targets, and the attack surface extends to every data source the agent can access.

The Scale of Exposure

SecurityScorecard’s STRIKE team found over 135,000 OpenClaw instances exposed to the public internet across 82 countries. Of those:

  • More than 50,000 were directly vulnerable to remote code execution
  • Over 53,000 correlated with prior breach activity
  • 21,639 were publicly accessible with minimal or no authentication

The United States had the largest share of exposed instances, with China accounting for approximately 30% of global deployments.

Misconfigured instances leaked API keys, OAuth tokens, and plaintext credentials. The Moltbook platform breach alone exposed 35,000 email addresses and 1.5 million agent API tokens.

OpenClaw integrations with corporate SaaS apps compound the risk. Agents with access to emails, Slack messages, calendar entries, and cloud documents become lateral movement vectors - compromise the agent, inherit access to everything it touches.

Why AI Agents Are Different

Traditional software supply chain attacks compromise developer tools or dependencies. The attacker gains access to build systems, source code, or production environments through the compromised package.

AI agent supply chain attacks are worse. The agent operates with delegated user authority, often across multiple services simultaneously. It can read emails, access files, make API calls, and execute code - all based on the user’s permissions.

When you install a compromised npm package, the damage is contained to what that package can access. When your AI agent installs a compromised skill, the damage extends to everything the agent can access on your behalf.

Snyk’s ToxicSkills research found 36% of AI agent skills contain security vulnerabilities, with 1,467 vulnerable skills and active malicious payloads targeting OpenClaw, Claude Code, and Cursor users. The attack surface isn’t limited to one framework - it spans the emerging agentic AI ecosystem.

What Enterprises Should Do Now

The OpenClaw crisis offers a preview of enterprise AI security challenges. Organizations deploying AI agents should:

Audit agent permissions ruthlessly. AI agents should follow least-privilege principles. An agent that helps with calendar scheduling doesn’t need access to your financial systems.

Treat skill registries like untrusted package repositories. ClawHub had no meaningful vetting process. Until agent marketplaces implement security audits comparable to major app stores (imperfect as those are), assume any third-party skill could be malicious.

Monitor agent behavior continuously. Traditional endpoint detection doesn’t track AI agent actions. New tooling is needed to identify when agents access unexpected resources or execute unusual commands.

Isolate agents from sensitive systems. Don’t let AI agents operate within the same trust boundary as critical infrastructure. Sandbox agent execution, limit network access, and maintain clear boundaries between agent capabilities and production systems.

Assume compromise when evaluating agents. Before deploying any AI agent, ask: if this agent were compromised, what could an attacker access? The answer should guide your deployment architecture.

The Bottom Line

The OpenClaw supply chain attack represents a new category of security threat. AI agents that users trust to act on their behalf become force multipliers for attackers - compromise once, inherit access to everything.

One in five skills in ClawHub’s registry contained malicious code. Browser-based attacks hijacked local agents through WebSocket connections. Zero-click vulnerabilities in agentic browsers allowed silent credential theft.

The agentic AI market is projected to reach $28 billion by 2027. If security doesn’t catch up to capability, the OpenClaw crisis will look like a trial run. The question isn’t whether the next major AI agent compromise will happen - it’s which agents will be next, and how many enterprises will learn the hard way that autonomous AI requires autonomous security.