OpenClaw's Month From Hell: 40,000 Exposed Instances, Poisoned Marketplace, and Corporate Bans

The viral AI agent went from 180K GitHub stars to enterprise blacklists in three weeks. Here's what went wrong and why it matters for every AI agent.

In less than a month, OpenClaw went from the most exciting open-source project on GitHub to the poster child for why autonomous AI agents scare the hell out of security teams. The viral AI agent framework - which lets chatbots like ChatGPT and Claude directly control your computer, send emails, browse the web, and manage your calendar - has racked up over 180,000 GitHub stars and a security rap sheet that reads like a horror novel.

As of this week, Meta has banned OpenClaw from corporate machines. Other companies have followed. Over 40,000 instances sit exposed on the public internet, most of them vulnerable. A supply-chain attack poisoned roughly 20% of the project’s plugin marketplace. And infostealers are now specifically targeting OpenClaw’s configuration files to hijack users’ AI agents remotely.

This is the first major AI agent security crisis of 2026. It probably won’t be the last.

What Is OpenClaw?

OpenClaw started as a side project by Austrian developer Peter Steinberger, originally called Clawdbot. It’s a free, open-source framework that turns LLMs into autonomous agents - software that doesn’t just answer questions but takes action. Connect it to your messaging apps (WhatsApp, Slack, Telegram, Discord, iMessage), and it can execute shell commands, read and write files, browse the web, send emails, manage your calendar, and chain together complex multi-step tasks with minimal human oversight.

The appeal is obvious. Instead of copying text between apps, you tell your AI agent what you want done and it handles the plumbing. OpenClaw went viral in late January 2026, crossing 180,000 GitHub stars and becoming one of the fastest-growing open-source projects in history.

On February 14, Steinberger announced he was joining OpenAI to lead next-generation personal agents. Sam Altman called him “a genius.” The project transitioned to an independent foundation, with OpenAI’s backing.

But by then, the security situation had already spiraled.

The Vulnerability: One Click to Full Takeover

On January 30, security researcher Mav Levin disclosed CVE-2026-25253, a remote code execution vulnerability rated CVSS 8.8. OpenClaw had patched it the day before in version 2026.1.29, but the disclosure revealed just how fragile the architecture was.

The attack chain worked in three stages:

  1. Token theft: A malicious link manipulated OpenClaw’s gatewayUrl parameter, redirecting authentication tokens to an attacker’s server
  2. WebSocket hijacking: Because OpenClaw didn’t validate Origin headers on WebSocket connections, attacker-controlled JavaScript could connect to a victim’s localhost instance through their own browser
  3. Full takeover: With stolen tokens, attackers gained operator-level access - meaning arbitrary command execution on the victim’s machine

The “localhost-only” binding that was supposed to protect users was meaningless. The attack pivoted through the victim’s browser, bypassing the restriction entirely.

OpenClaw patched this specific flaw. Then researchers from Endor Labs found six more vulnerabilities, including high-severity server-side request forgery bugs (among them CVE-2026-26322 and CVE-2026-26319) and a path traversal flaw in the browser upload feature. Version 2026.2.12 fixed over 40 security issues. The latest release, v2026.2.17, continues patching while adding new model support.

The pattern is familiar: a project designed for usability, not security, scrambling to retrofit defenses after going viral.

The Poisoned Marketplace

While the core vulnerabilities were bad, the supply-chain attack was worse.

OpenClaw uses a plugin system called “skills,” distributed through ClawHub, its public marketplace. Starting around January 27, attackers launched what researchers dubbed the ClawHavoc campaign. They uploaded hundreds of malicious skills disguised as legitimate tools - cryptocurrency wallet trackers, YouTube utilities, Google Workspace integrations, finance tools.

The numbers kept climbing. Initial reports found 341 poisoned skills. By February 16, the count exceeded 800 - roughly 20% of everything in the registry at that point. Bitdefender estimated approximately 900 malicious packages total.

The attack used a well-known pattern called ClickFix. Malicious skills included professional-looking documentation with fake “Prerequisites” sections that directed users to run shell commands. Those commands installed Atomic macOS Stealer (AMOS), a commercial malware-as-a-service product that costs attackers $500 to $1,000 per month and harvests iCloud Keychain passwords, browser credentials, cryptocurrency wallets (60+ types), SSH keys, and Telegram sessions.

The marketplace’s barriers to entry were minimal: you needed a GitHub account older than one week. That’s it.
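ClickFix-style lures are cheap to flag at upload time, which makes the marketplace's lack of screening all the more striking. A rough heuristic scan for shell-install lures in a skill's documentation (the pattern list is illustrative and nowhere near exhaustive - real screening would combine this with sandboxed execution and reputation signals):

```python
import re

# Patterns typical of ClickFix "Prerequisites" lures: piping a remote
# script straight into a shell, or base64-decoding into bash.
SUSPICIOUS = [
    re.compile(r"curl\s+[^\n|]*\|\s*(?:sudo\s+)?(?:ba)?sh", re.IGNORECASE),
    re.compile(r"wget\s+[^\n|]*\|\s*(?:sudo\s+)?(?:ba)?sh", re.IGNORECASE),
    re.compile(r"base64\s+(?:-d|--decode)[^\n|]*\|\s*(?:ba)?sh", re.IGNORECASE),
]

def flag_clickfix(readme_text: str) -> list[str]:
    """Return documentation lines that tell the user to pipe the web into a shell."""
    hits = []
    for line in readme_text.splitlines():
        for pat in SUSPICIOUS:
            if pat.search(line):
                hits.append(line.strip())
                break
    return hits
```

A check like this would have caught the fake "Prerequisites" sections described above; the campaign succeeded not because the lure was sophisticated, but because nothing was looking.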

40,000 Instances Exposed to the Internet

The vulnerability and marketplace problems would have been manageable if most OpenClaw instances were properly locked down. They weren’t.

SecurityScorecard identified 40,214 publicly accessible instances across 28,663 unique IP addresses, spread across 52 countries. Independent researcher Maor Dayan found 42,665 instances, with 5,194 confirmed vulnerable.

The headline stat: 63% of observed deployments were vulnerable, and 12,812 exposed instances were directly exploitable via remote code execution.

Even more damning: 93.4% of verified instances were susceptible to authentication bypass. The majority ran on cloud infrastructure - DigitalOcean, Alibaba Cloud, Tencent - with roughly 30% of Chinese instances hosted on Alibaba Cloud.

A separate breach hit Moltbook, a social network built for OpenClaw agents. An unsecured database exposed 35,000 email addresses and 1.5 million agent API tokens.

Infostealers Are Now Hunting AI Agents

Perhaps the most forward-looking threat came from Hudson Rock, which discovered that an existing commodity infostealer - likely a Vidar variant - had begun specifically targeting OpenClaw’s configuration files.

The malware’s file-grabbing routine searches for and exfiltrates three files:

  • openclaw.json: Gateway authentication tokens and workspace paths
  • device.json: Cryptographic keys used for secure pairing and signing
  • soul.md: The agent’s operational principles, behavioral guidelines, and ethical boundaries

All stored in plaintext in ~/.openclaw/ directories.

Stealing the gateway token lets attackers connect remotely to a victim’s local OpenClaw instance (if the port is exposed) or impersonate the client in authenticated AI gateway requests. Stealing the “soul” file means attackers can understand exactly how the agent is configured to behave - and potentially craft attacks that exploit its specific permissions and trust relationships.
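If you run OpenClaw, the first question is whether those files are readable by anything other than your own user account. A stdlib-only audit sketch (the three filenames come from the Hudson Rock report above; the function itself is illustrative, not an OpenClaw tool):

```python
import stat
from pathlib import Path

# Filenames per the Hudson Rock findings; the audit logic is a sketch.
SENSITIVE = ("openclaw.json", "device.json", "soul.md")

def audit_config_dir(base: Path) -> dict[str, str]:
    """Report each sensitive file as 'missing', 'ok' (owner-only), or exposed."""
    report = {}
    for name in SENSITIVE:
        path = base / name
        if not path.exists():
            report[name] = "missing"
            continue
        mode = stat.S_IMODE(path.stat().st_mode)
        # Any group/other bits set means the file is readable beyond the owner.
        report[name] = "ok" if mode & 0o077 == 0 else "group/world readable"
    return report
```

Run it as `audit_config_dir(Path.home() / ".openclaw")`. Tight permissions won't stop malware running as your own user - the infostealer reads the files with exactly your privileges - but they do block exposure through backups, shared machines, and misconfigured sync tools.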

Hudson Rock described this as “a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the ‘souls’ and identities of personal AI agents.”

The Corporate Response: Bans and Firings

By mid-February, enterprise security teams had seen enough.

A Meta executive told his team to keep OpenClaw off their work laptops or risk losing their jobs. Jason Grad, CEO of Massive, sent a company-wide Slack message on January 26: “Please keep Clawdbot off all company hardware and away from work-linked accounts,” calling the tool “unvetted and high-risk.” Guy Pistone, CEO of Valere, implemented a strict ban after warning that if the tool accessed a developer’s machine, it could compromise cloud services and client data, including credit card information and GitHub codebases.

The core problem isn’t unique to OpenClaw. It’s the “shadow AI” pattern: employees install tools that promise productivity gains without security team review. By the time anyone notices, the agent has OAuth tokens for Slack, email, cloud storage, and a dozen other services. As Zafran Security CTO Ben Seri told Fortune: “The only rule is that it has no rules. That’s part of the game.”

Traditional security tooling doesn’t help much. Endpoint security sees processes running but doesn’t understand agent behavior. Identity systems see OAuth grants but don’t flag AI agent connections as unusual. Network monitoring sees traffic to legitimate APIs. The agent operates in a blind spot that most security stacks weren’t designed to cover.

Community Response: SecureClaw

Not everyone’s response has been to ban and retreat. On February 18, an open-source tool called SecureClaw launched on GitHub, providing 55 automated audit and hardening checks for OpenClaw deployments. The tool maps protections to the OWASP Agentic Security Initiative top 10 categories, MITRE ATLAS, and CoSAI Agentic AI Security guidance - the kind of compliance documentation that enterprise security teams need before they’ll approve anything.

It’s a start, but it’s also a band-aid on an architectural problem. OpenClaw’s security model assumed a single user running the agent locally. The viral adoption pattern - tens of thousands of cloud-hosted instances, many with default configurations - broke that assumption completely.

What This Means

OpenClaw is a preview of a much bigger problem. As AI agents gain the ability to act autonomously - executing code, accessing APIs, managing credentials - the security surface area expands in ways that traditional tools can’t track.

The issues are structural, not incidental:

AI agents don’t fit existing security categories. They’re not traditional apps, they’re not users, and they’re not services. They’re autonomous actors with inherited permissions that can chain together unpredictable sequences of actions. Your security stack has no good way to model that.

Open marketplaces are supply-chain attack goldmines. The same dynamics that made npm and PyPI attractive targets for attackers apply to AI skill registries - except AI skills can request far broader permissions than a typical software package.

Credential storage is still an afterthought. OpenClaw stored API keys, OAuth tokens, and configuration in plaintext files. This isn’t unusual for developer tools, but when those tools have autonomous access to your entire digital life, the consequences of credential theft change dramatically.

Speed of adoption outpaces security review. OpenClaw went from obscure side project to 180,000 stars in weeks. By the time security researchers caught up, tens of thousands of instances were already exposed.

Colin Shea-Blymyer of Georgetown’s Center for Security and Emerging Technology summed up the fundamental tension: increased autonomy makes AI agents more capable but simultaneously more dangerous. “AI systems can fail in ways we can’t even imagine,” he told Fortune.

What You Can Do

If you’re running OpenClaw:

  • Update immediately to version 2026.2.17 or later. Every release since 2026.1.29 has shipped fixes for critical vulnerabilities
  • Run SecureClaw to audit your deployment against known threat classes
  • Rotate all credentials connected to your OpenClaw instance - API keys, OAuth tokens, everything in ~/.openclaw/
  • Run in an isolated container or VM, not directly on your main machine
  • Audit installed skills and remove anything you didn’t explicitly choose. Check ClawHub’s security advisories for known malicious packages
  • Don’t expose port 18789 to the internet. If you need remote access, use a VPN or SSH tunnel
  • Review OAuth scopes: Does your agent really need full access to your email, calendar, and file storage?

If you’re managing a team or organization:

  • Conduct an inventory of AI agent tools in use. Assume shadow installations exist
  • Establish a review process for AI tools before they touch corporate data
  • Monitor for OpenClaw’s known indicators: process names containing “openclaw” or “clawdbot,” directory paths like ~/.openclaw/, TCP port 18789, and user-agent strings containing “openclaw”

The broader lesson: giving an AI agent access to your digital life is a decision with the same weight as giving a new employee access to your systems. The difference is that employees go through background checks and onboarding. Right now, most people are handing the keys to software they downloaded twenty minutes ago.