DockerDash: How Poisoned Container Metadata Turned Docker's AI Assistant Into an Attack Vector

Two independent security firms found that Docker's Ask Gordon AI could be hijacked through image metadata, enabling remote code execution and data theft across millions of developer machines.

Two independent security research teams found separate ways to weaponize Docker’s built-in AI assistant, Ask Gordon, by hiding malicious instructions in ordinary container metadata. One path led to remote code execution. The other to silent data exfiltration. Both exploited the same fundamental weakness: an AI that trusts everything it reads.

Docker patched the vulnerabilities in Desktop version 4.50.0 back in November 2025, but the technical details only became public in February 2026. The findings matter well beyond Docker - they demonstrate how every AI assistant bolted onto a developer tool creates a new, exploitable trust boundary.

What Ask Gordon Does

Ask Gordon is Docker’s AI assistant, integrated into Docker Desktop and the Docker CLI. It helps developers troubleshoot containers, understand images, and manage their Docker environments. To do this, it reads container metadata, connects to Docker Hub, and executes actions through a Model Context Protocol (MCP) Gateway - a system that translates the AI’s requests into actual Docker commands.

That MCP Gateway is the critical piece. It gives Gordon the ability to act, not just talk. And both research teams found ways to make it act on an attacker’s behalf.

Attack Path One: Meta-Context Injection (Noma Security)

Noma Labs researcher Sasi Levi discovered that Docker image LABEL fields - standard metadata that describes an image’s purpose, version, and maintainer - could carry weaponized instructions that Gordon would execute without question.

The attack chain works in three stages:

Stage 1: Poison the image. An attacker creates a Docker image with malicious instructions embedded in its LABEL metadata. These labels are standard practice - every well-maintained Docker image has them. The malicious payload looks no different from a legitimate description.

Stage 2: Gordon reads the metadata. When a developer asks Gordon about the image, the AI reads all metadata fields. It can’t distinguish between a benign label like [email protected] and an embedded instruction telling it to run commands. Both are just text in the same metadata field.

Stage 3: MCP Gateway executes. Gordon forwards the interpreted instructions to the MCP Gateway, which executes them with the user’s Docker privileges. No validation at any stage. No confirmation prompt. No sandbox.

“Every stage happens with zero validation,” Levi explained, “taking advantage of current agents and MCP Gateway architecture.”

The impact depends on the environment. In cloud and CLI deployments, this means full remote code execution - arbitrary Docker commands running with the developer’s privileges. In Docker Desktop, where Ask Gordon operates with read-only permissions, the same technique enables large-scale data exfiltration: container configurations, environment variables, network topology, installed tools, and MCP tool inventories.

Levi characterized the severity bluntly: “It has not been this easy to compromise an IT environment since the early days.”

Attack Path Two: Repository Description Poisoning (Pillar Security)

Separately, Pillar Security found an even simpler entry point: Docker Hub repository descriptions.

Their attack needed nothing more than a short instruction planted in a repository’s description field - something like a pointer to fetch additional instructions from an external URL. When a developer asked Gordon to describe the repository, the AI:

  1. Fetched the repository metadata from Docker Hub
  2. Ingested the malicious instruction as trusted context
  3. Automatically called the fetch tool to retrieve the attacker’s external payload
  4. Executed internal tools (list_builds, build_logs) as directed by the payload
  5. Exfiltrated the results - full chat logs, build metadata, build IDs - via a GET request to an attacker-controlled endpoint

No user consent at any step. The AI classified the external content as a legitimate task because it arrived through the same channel as all other Docker Hub metadata.

Pillar Security categorized this as CWE-1427 - Improper Neutralization of Input Used for LLM Prompting - and identified what they called the “lethal trifecta”: a system with access to private data, exposure to untrusted content, and the ability to communicate externally. Ask Gordon had all three.

Why This Is a Supply Chain Problem

Docker Hub hosts millions of images. According to Docker’s own index, 13 billion container image downloads run monthly. Docker Desktop hit 92% adoption among IT professionals in 2025. Any one of those images could have carried a DockerDash payload before the patch.

This isn’t a traditional software vulnerability where bad code produces incorrect behavior. It’s an AI trust boundary failure - the kind that emerges whenever an AI system is given tools and access but no way to evaluate the trustworthiness of its inputs. The image metadata that feeds Ask Gordon comes from the same Docker Hub that anyone can push to. The AI treats it all as authoritative.

A Futurum Group survey found that 60% of developers already use AI coding tools, with 40% planning increased investment. Every one of those tools that can read external content and take actions is a potential DockerDash waiting to happen.

Docker’s Fix

Docker addressed both attack paths in Desktop version 4.50.0, released November 6, 2025:

  • Ask Gordon no longer renders user-provided image URLs, blocking one exfiltration channel
  • All MCP tool invocations now require explicit user confirmation - a human-in-the-loop safeguard that breaks the automated execution chain

The timeline: Noma Labs reported the issue on September 17, 2025. Docker confirmed on October 13, 2025. The patch shipped 24 days later. Public disclosure came on February 3, 2026. Docker’s response was fast by industry standards.

But the fix is a band-aid on a structural problem. Adding confirmation prompts works until users start clicking “Allow” reflexively - the same UI fatigue that undermines browser permission dialogs and mobile app permission screens.

The Pattern

DockerDash arrived alongside three RCE vulnerabilities in GitHub Copilot disclosed in the same month. Those were command injection through unsanitized inputs. Before that, Pillar Security documented a “Rules File Backdoor” where invisible Unicode characters in AI configuration files could hijack coding assistants.

Every AI developer tool that has been seriously audited has been found vulnerable. The attack surfaces differ - container metadata, repository descriptions, file contents, configuration files - but the root cause is consistent: AI systems that execute actions based on untrusted inputs without validation.

Noma Security’s recommendation is to implement “Zero-Trust for AI Context” - treating all context provided to AI agents as potentially malicious, deploying deep-content inspection for instruction patterns, and enforcing human-in-the-loop controls for anything with elevated privileges.

What You Should Do

Update Docker Desktop to version 4.50.0 or later if you haven’t already. The patch has been out since November 2025.

Audit your AI tool chain. Ask Gordon isn’t the only AI assistant with MCP or tool-calling capabilities. Any AI tool that reads external content and can execute commands is a potential vector. List your AI integrations and evaluate what each one can access and do.

Treat container images like untrusted code. You wouldn’t run a random binary from the internet. Apply the same skepticism to Docker images from unknown publishers - especially if your AI tools automatically inspect them.

Watch the confirmation prompts. Docker’s human-in-the-loop fix only works if you actually read the prompts. Don’t train yourself to auto-approve.

Restrict MCP tool permissions. If your development environment uses MCP-based tools, enforce least-privilege principles. Not every AI assistant needs the ability to execute commands, access the network, or read your environment variables.

The AI tools developers rely on are being integrated faster than they’re being secured. DockerDash is a case study in what happens when execution capability outpaces trust verification. It won’t be the last.