OpenClaw's Security Nightmare: 341 Malicious Skills, RCE Vulnerabilities, and the GlassWorm Campaign

The open-source AI agent with 135,000+ GitHub stars has become the center of 2026's first major AI security crisis


OpenClaw, the open-source AI agent that rocketed to 135,000+ GitHub stars, has become ground zero for what security researchers are calling the first major AI agent security crisis of 2026. The situation involves multiple attack vectors: a critical one-click remote code execution vulnerability, hundreds of malicious skills in the official marketplace, fake npm packages deploying info-stealers, and connections to the broader GlassWorm supply chain campaign.

If you’re running OpenClaw, you need to read this.

The One-Click RCE Vulnerability

CVE-2026-25253 scored 8.8 on the CVSS scale. The vulnerability allowed attackers to achieve full remote code execution with a single malicious link.

The attack worked through cross-site WebSocket hijacking. OpenClaw’s Control UI accepted a gatewayUrl parameter from the URL query string without validation. On page load, it automatically connected to whatever URL was specified and transmitted the user’s authentication token. An attacker could stand up a malicious WebSocket server, send victims a link, and capture their credentials.
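The missing check is conceptually simple: never open a WebSocket to a caller-supplied gateway URL without validating it first. A minimal sketch of such a check, using illustrative names rather than OpenClaw’s actual code:

```python
from urllib.parse import urlparse

# Hypothetical mitigation sketch: validate a user-supplied gateway URL
# before connecting to it. Host allowlist and function names are
# illustrative, not OpenClaw's real implementation.
ALLOWED_GATEWAY_HOSTS = {"localhost", "127.0.0.1"}

def is_safe_gateway_url(gateway_url: str) -> bool:
    """Reject any gatewayUrl that points outside the local gateway."""
    parsed = urlparse(gateway_url)
    if parsed.scheme not in ("ws", "wss"):
        return False                      # only WebSocket schemes expected
    return parsed.hostname in ALLOWED_GATEWAY_HOSTS

# An attacker-controlled URL from a crafted link would be rejected:
# is_safe_gateway_url("wss://evil.example/ws")  -> False
# is_safe_gateway_url("ws://127.0.0.1:8080/ws") -> True
```

An allowlist is deliberately stricter than a denylist here: the Control UI only ever needs to talk to its own gateway, so anything else can be refused outright.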

Once armed with the token, attackers could:

  • Disable user confirmation prompts by setting exec.approvals.set to “off”
  • Escape the container sandbox by setting tools.exec.host to “gateway”
  • Execute arbitrary commands directly on the host machine

The irony isn’t lost on anyone: an AI agent designed to run code on your behalf became a backdoor to run anyone’s code.

OpenClaw released version 2026.1.29 on January 30, patching the vulnerability before public disclosure. But that’s only part of the story.

21,000+ Instances Running Exposed

According to Censys research, over 21,000 OpenClaw instances are publicly accessible on the internet. Some researchers have reported even higher numbers, with estimates exceeding 220,000 exposed instances globally.

Many of these instances are running without authentication, exposing personal configuration data, API keys, and connected services. Users spinning up OpenClaw with default configurations often don’t realize their instance is accessible from anywhere.

341 Malicious Skills in the Official Marketplace

ClawHub, OpenClaw’s official skill marketplace, has been infiltrated. Koi Security’s audit of 2,857 skills found 341 were malicious—about 12% of the marketplace.

The attack campaign, dubbed “ClawHavoc,” used social engineering rather than technical exploits. Malicious skills like “solana-wallet-tracker” and “crypto-portfolio-manager” included fake prerequisite instructions. When users followed the setup steps:

On Windows: Instructions directed users to download a password-protected ZIP file from GitHub and run the executable inside. That executable was a keylogger.

On macOS: Instructions told users to run a code snippet hosted on glot.io. The base64-encoded script fetched a second-stage payload that installed Atomic Stealer (AMOS), a malware-as-a-service capable of:

  • Stealing Keychain credentials
  • Harvesting browser data and cryptocurrency wallets
  • Exfiltrating Telegram sessions and chat logs
  • Grabbing SSH keys
  • Copying files from Documents and Desktop folders

Koi Security has released Clawdex, a skill that scans other skills before installation, checking them against a database of known malicious packages. It’s a band-aid on a bullet wound, but it’s something.
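A scanner of this kind boils down to comparing what you’re about to install against known-bad fingerprints. A toy version of that idea, with made-up digests (not Clawdex’s actual database or matching logic):

```python
import hashlib

# Sketch of a pre-install check: hash the skill payload and compare it
# against a denylist of known-malicious digests. The seeded entry below
# is fabricated for illustration.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious skill body").hexdigest(),
}

def skill_is_known_bad(skill_bytes: bytes) -> bool:
    return hashlib.sha256(skill_bytes).hexdigest() in KNOWN_BAD_SHA256
```

Hash matching only catches exact known samples, which is why it’s a band-aid: trivially repacked malware gets a new digest and sails through.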

The Fake npm Package Attack

Separately, a package named @openclaw-ai/openclawai appeared on npm on March 3. It wasn’t from the OpenClaw team.

According to JFrog’s analysis, the package deployed a remote access trojan with capabilities including:

  • SOCKS5 proxy functionality
  • Live browser session cloning
  • System credential theft
  • Persistent backdoor access

The package has since been removed from npm, but it was live for a week.
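The lookalike-name trick is an old npm pattern: pick a name one or two edits away from the real project and wait for fat-fingered installs. A small sketch of a pre-install sanity check, with an illustrative list of official names:

```python
# Sketch: flag package names that sit within a couple of edits of a
# known official name -- the @openclaw-ai/openclawai lookalike pattern.
# The "official" set here is illustrative.
OFFICIAL = {"openclaw"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(name: str) -> bool:
    base = name.split("/")[-1]            # strip any npm scope prefix
    return any(0 < edit_distance(base, official) <= 2
               for official in OFFICIAL)

# looks_like_typosquat("@openclaw-ai/openclawai") -> True
# looks_like_typosquat("openclaw")                -> False (exact match)
```

Note that scoped names cut both ways: a scope like @openclaw-ai looks authoritative but is no proof of origin, since anyone can register an unused scope.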

Connection to the GlassWorm Campaign

The timing of these attacks coincides with the GlassWorm campaign, which has compromised over 433 open-source components since March 3. GlassWorm attackers use stolen GitHub tokens to force-push malicious code into legitimate repositories, keeping the original commit messages and timestamps intact.

The campaign specifically targets Python projects—Django apps, ML research code, Streamlit dashboards, and PyPI packages. The malware payload uses a novel technique: it queries the transaction memo field of a Solana wallet address to extract the command-and-control server URL, making the C2 infrastructure harder to take down.
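The blockchain dead drop is the interesting part: the implant ships with no domain or IP to seize, only a wallet address, and recovers its C2 endpoint from public chain data at runtime. A conceptual sketch of that decode step (the memo format here is invented for illustration; the real encoding may differ):

```python
import base64

# Conceptual sketch of a blockchain dead drop: the C2 address is hidden
# in a transaction memo rather than hard-coded, so defenders have no
# domain to sinkhole. Memo format is invented for illustration.
def c2_from_memo(memo_text: str) -> str:
    """Recover a C2 URL from a base64-encoded memo string."""
    return base64.b64decode(memo_text).decode("utf-8")

# What an implant would see after fetching the memo from the chain:
memo = base64.b64encode(b"https://c2.example.invalid/beacon").decode()
# c2_from_memo(memo) recovers the URL the implant would contact
```

Because the memo lives on a public ledger, the attackers can also rotate C2 servers by simply posting a new transaction—no update to the malware required.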

While direct links between GlassWorm and the OpenClaw attacks haven’t been confirmed, security researchers note similarities in targeting (AI/ML developers) and timing. The AI developer ecosystem appears to be a primary target.

What This Means

The OpenClaw situation demonstrates a broader problem: the AI agent ecosystem is rushing to production without security fundamentals. When your AI agent can execute arbitrary code, approve its own actions, and connect to external services, every misconfiguration becomes a critical vulnerability.

The marketplace model compounds risks. ClawHub, like package managers before it, prioritizes discoverability over security. One in eight skills being malicious is an unacceptable ratio for software that runs with elevated privileges.

What You Can Do

If you’re running OpenClaw:

  1. Update immediately to version 2026.1.29 or later
  2. Check your exposure—is your instance accessible from the internet? It shouldn’t be
  3. Audit your installed skills against the Clawdex database
  4. Review connected services for unauthorized access
  5. Rotate any API keys that OpenClaw had access to
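For step 5, the hard part is knowing which keys the agent could see. A quick inventory pass over config files helps; here is a sketch with a few well-known credential prefixes (the pattern list is illustrative, not exhaustive):

```python
import re

# Sketch: inventory credential-like values in an agent's config so you
# know what to rotate. Prefixes shown (sk-, ghp_, AKIA) are real
# conventions, but this list is illustrative, not exhaustive.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "github": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "aws":    re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_credentials(config_text: str) -> dict[str, list[str]]:
    """Map provider name -> matched secrets found in the text."""
    hits = {name: pat.findall(config_text)
            for name, pat in KEY_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}
```

Treat every hit as burned: rotate it at the provider, then confirm the old key is actually revoked rather than merely replaced in the config file.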

For AI agent users generally:

  • Treat AI agents like any privileged application—assume they can be compromised
  • Run agents in isolated environments when possible
  • Never follow setup instructions that ask you to run arbitrary code
  • Be skeptical of skills/plugins from unknown authors, even in “official” marketplaces
  • Monitor for unusual network activity from agent processes

The convenience of AI agents comes with real security trade-offs. Until the ecosystem matures, treat every installation like you’re handing your credentials to a stranger—because you might be.