341 Malicious AI Agent Plugins Were Hiding in Plain Sight on ClawHub

OpenClaw's skills marketplace was weaponized to steal passwords and crypto wallets. A single attacker published 314 fake tools. This is what happens when AI agents get app stores.

OpenClaw has 149,000 GitHub stars, runs on millions of machines, and can execute terminal commands, manage files, and control APIs autonomously. It’s exactly the kind of tool attackers dream about compromising.

They did.

Security researchers at Koi Security and VirusTotal audited all 2,857 skills available on ClawHub - OpenClaw’s public marketplace for third-party plugins - and found 341 that were intentionally malicious. A single attacker account, “hightower6eu,” had published 314 of them. The campaign has been dubbed ClawHavoc.

The incident received an 8.4 out of 10 severity rating, with supply chain risk scored at 9.0 - the highest component.

How ClawHavoc Works

The attack exploits something fundamental about how AI agent plugins function: skills aren’t just configuration files. They’re third-party code that runs in an environment with real system access.

The malicious skills were disguised as legitimate tools - finance trackers, crypto analytics dashboards, social media managers. They looked useful. The setup steps asked users to execute downloaded binaries or scripts, which is normal for many legitimate skills too.

On macOS, 335 of the malicious skills deployed Atomic Stealer (AMOS), a well-known infostealer designed to harvest passwords, browser cookies, and cryptocurrency wallet data. On Windows, the payloads were packed trojans detected by multiple security vendors.

The attack chain typically ran through obfuscated scripts that pulled remote payloads, executed them with the permissions OpenClaw already had, and sent stolen credentials to attacker-controlled servers. Because OpenClaw routinely operates with terminal access and file system permissions, the malware blended right in with normal agent behavior.
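The download-then-execute pattern at the heart of that chain is detectable. Below is a minimal sketch of a setup-script scanner; the indicator patterns are illustrative red flags of my own choosing, not ClawHavoc's actual signatures or VirusTotal's detection logic.

```python
import re

# Illustrative indicators of a download-and-execute setup step.
# These are hypothetical examples, not the campaign's real signatures.
RISKY_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(?:ba)?sh"),    # pipe a download straight into a shell
    re.compile(r"base64\s+(?:-d|--decode)"),        # decode an obfuscated payload
    re.compile(r"chmod\s+\+x\s+"),                  # mark a downloaded file executable
    re.compile(r"(?:os\.system|subprocess)\s*\("),  # script-level command execution
]

def flag_risky_setup(script_text: str) -> list[str]:
    """Return the indicator patterns that match a skill's setup script."""
    return [p.pattern for p in RISKY_PATTERNS if p.search(script_text)]
```

A scanner like this produces noise (plenty of legitimate skills decode data or run subprocesses), which is exactly why the malicious skills blended in: the behaviors themselves are normal.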

The One-Click RCE Vulnerability

ClawHavoc wasn’t the only problem. On January 30, researchers disclosed CVE-2026-25253 - a separate, high-severity vulnerability in OpenClaw itself (CVSS 8.8) that allowed one-click remote code execution.

The flaw worked through cross-site WebSocket hijacking. OpenClaw’s server didn’t validate WebSocket origin headers, meaning any website could connect to a user’s local instance. A malicious webpage could retrieve the authentication token, establish a WebSocket connection, and use the stolen credentials to gain operator-level access.
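The class of fix for this flaw is well understood: reject any WebSocket handshake whose Origin header is missing or not on an explicit allowlist, since browsers always attach the true Origin on cross-site handshakes and a malicious page cannot forge it. A minimal sketch of that check in Python; the port number and allowlist values are hypothetical, not OpenClaw's actual patch.

```python
from urllib.parse import urlsplit

# Hypothetical allowlist for a local instance; real values would come from config.
ALLOWED_ORIGINS = {"http://localhost:18789", "http://127.0.0.1:18789"}

def origin_allowed(origin_header):
    """Reject handshakes whose Origin is absent or not allowlisted.

    The browser sets Origin on cross-site WebSocket handshakes, so an
    arbitrary webpage cannot present a value on this list.
    """
    if origin_header is None:
        return False
    parts = urlsplit(origin_header)
    normalized = f"{parts.scheme}://{parts.netloc}"
    return normalized in ALLOWED_ORIGINS
```

Note that binding to localhost alone does not substitute for this check, for exactly the reason the researchers highlighted: the victim's own browser makes the connection from inside the network boundary.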

Security researcher Mav Levin of depthfirst, who discovered the vulnerability, described it as “a one-click RCE exploit chain that takes only milliseconds” after a victim visits a single malicious page. Critically, the attack worked even on instances configured to listen only on localhost - the victim’s own browser made the outbound connection.

Once in, attackers could disable user confirmation prompts and escape container isolation by modifying sandbox configurations. OpenClaw patched the flaw in version 2026.1.29, released on January 30.

Why AI Agent Marketplaces Are Uniquely Dangerous

Traditional software package managers like npm and PyPI have dealt with malicious packages for years. But AI agent skills create a qualitatively different risk.

Standard malicious packages need to trick a developer into importing them into code. AI agent skills run with direct access to local systems - terminal commands, file operations, network connections, API tokens. A compromised agent doesn’t just steal a password. It can autonomously conduct reconnaissance, move laterally across infrastructure, and exfiltrate data, all using the legitimate access it was granted by design.

CrowdStrike’s analysis put it directly: successful attacks can “hijack the agent’s reachable tools and data stores,” and indirect prompt injection means “untrusted data can reshape intent, redirect tool usage, and trigger unauthorized actions without tripping traditional input validation.”

There’s also the fake VS Code extension angle. Attackers distributed a counterfeit “Moltbot” extension through the Visual Studio Code Marketplace - targeting developers who might be familiar with OpenClaw’s earlier name - that delivered malware payloads to developer workstations.

The Bigger Pattern

Palo Alto Networks’ Chief Security Intel Officer Wendi Whitmore called AI agents “2026’s biggest insider threat” in January. Nearly half of surveyed security professionals believe agentic AI will be the top attack vector for cybercriminals by the end of the year.

The core problem: organizations are handing AI agents wide access without accounting for them as identities that need governance. One security researcher compared it to “letting thousands of interns run around in our production environment, and then giving them the keys to the kingdom.”

Gartner estimates that 40 percent of all enterprise applications will integrate with AI agents by the end of 2026, up from less than 5 percent in 2025. That’s a massive expansion of attack surface happening faster than security frameworks can adapt.

What You Can Do

If you’re running OpenClaw:

  • Update immediately. Version 2026.1.29 patches CVE-2026-25253. Anything older is vulnerable to one-click RCE.
  • Audit your installed skills. Check them against published lists of known-malicious skills. VirusTotal has added OpenClaw skill scanning to its Code Insight tool.
  • Don’t install skills that require binary downloads. If a setup step asks you to run a downloaded executable, treat it as a red flag.
  • Restrict agent permissions. OpenClaw doesn’t need root access. Run it in a sandboxed environment with minimal file system and network permissions.
  • Check your credentials. If you installed any skills from ClawHub before the audit, rotate your passwords and API keys, and check your crypto wallets for unauthorized transactions.
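For the first item on that list, a quick way to confirm whether an installed version predates the patched release, assuming OpenClaw's dotted numeric version format; the helper names here are mine, not a real OpenClaw CLI or API.

```python
PATCHED = (2026, 1, 29)  # first version carrying the CVE-2026-25253 fix

def parse_version(v):
    """Parse a dotted version string like '2026.1.29' into comparable ints."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed):
    """True if the installed version predates the patched release."""
    return parse_version(installed) < PATCHED
```

Tuple comparison handles the ordering correctly across year, month, and day components, so `2026.2.1` correctly reads as newer than `2026.1.29`.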

For organizations using any AI agent platform: treat agent identities like employee identities. They need access controls, audit logs, and permission boundaries. The era of giving autonomous code unrestricted system access because “it’s just a tool” is over.
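That governance principle can be made concrete at the tool-call layer: every command an agent attempts passes through an allowlist check, and every decision lands in an audit log tied to the agent's identity. A minimal sketch; all names are hypothetical and not any specific platform's API.

```python
import shlex

# Hypothetical per-agent policy: the only executables this identity may invoke.
ALLOWED_COMMANDS = {"ls", "cat", "git"}

def gate_command(agent_id, command_line, audit_log):
    """Allow a tool call only if its executable is on the agent's allowlist.

    Every decision, allow or deny, is appended to an audit log keyed by
    agent identity, mirroring how employee access is governed.
    """
    argv = shlex.split(command_line)
    executable = argv[0] if argv else ""
    allowed = executable in ALLOWED_COMMANDS
    audit_log.append(f"{agent_id}: {'ALLOW' if allowed else 'DENY'} {command_line}")
    return allowed
```

A real deployment would also constrain arguments and paths (an allowlisted `git` can still push data out), but even this coarse boundary plus a log is more governance than most agent deployments have today.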

The Bottom Line

ClawHavoc is the first major supply chain attack targeting an AI agent marketplace, but it won’t be the last. The economics are too attractive - one attacker, one account, 314 malicious plugins, access to an unknown number of machines running an agent with system-level permissions. As AI agents proliferate into enterprise environments, every skills marketplace, plugin store, and extension registry becomes a high-value target. The question for every AI agent platform is no longer whether their marketplace will be compromised, but whether they’ll catch it when it happens.