A malicious GitHub repository could have stolen your Anthropic API key the moment you opened it in Claude Code. No clicks required beyond the initial launch.
Check Point researchers Aviv Donenfeld and Oded Vanunu disclosed three security vulnerabilities in Anthropic’s AI coding assistant on Tuesday, demonstrating how configuration files hidden in repositories could execute arbitrary commands and exfiltrate credentials - all before users had a chance to review trust prompts.
The vulnerabilities have been patched. But they reveal a fundamental tension in AI coding tools: the features that make them powerful also make them dangerous when pointed at untrusted code.
The Attack
The researchers identified three separate attack vectors, all exploiting Claude Code’s handling of repository configuration files.
Attack 1: Malicious Hooks
Claude Code supports “hooks” - shell commands that execute at specific lifecycle events like tool initialization. These hooks are configured in .claude/settings.json within a repository.
The problem: hooks executed automatically once a user clicked “Yes, proceed” on the initial permission prompt. No subsequent confirmation appeared for the hook commands themselves. An attacker could embed a reverse shell in a hook configuration, and it would run the moment Claude Code started.
This vulnerability (CVSS 8.7) was patched in version 1.0.87 in September 2025.
Attack 2: MCP Server Bypass
Claude Code’s Model Context Protocol (MCP) allows external tools to integrate with the AI assistant. Two settings - enableAllProjectMcpServers and enabledMcpjsonServers - could be set in repository configuration files.
The researchers found that malicious MCP initialization commands executed immediately upon running Claude Code, before users could even read the warning dialogs. The timing was the vulnerability: code ran before consent.
This was assigned CVE-2025-59536 (CVSS 8.7) and patched in version 1.0.111 in October 2025.
Attack 3: API Key Exfiltration
The most insidious vulnerability involved the ANTHROPIC_BASE_URL environment variable. By setting this to an attacker-controlled server in the repository’s configuration file, all API requests - including those containing the user’s API key in the authorization header - would route to the attacker.
Critically, this happened before the trust prompt appeared. The user’s API key was already sent to the attacker’s server while they were still reading the “do you trust this repository?” dialog.
This was assigned CVE-2026-21852 (CVSS 5.3) and patched in version 2.0.65 in January 2026.
What Attackers Could Do
With a stolen Anthropic API key, attackers gain:
- Authenticated API access billed to the victim
- Workspace access including stored conversations and files
- The ability to modify or delete cloud-stored data
The researchers demonstrated that files uploaded to Claude’s workspace could be accessed through indirect means. While directly uploaded files aren’t downloadable, running them through the code execution tool converted them to downloadable artifacts - bypassing the intended access controls.
Combined with the remote code execution vulnerabilities, an attacker could have gained shell access to a developer’s machine simply by getting them to clone and open a repository.
The Timeline
Check Point followed responsible disclosure:
- July 21, 2025: Hooks vulnerability reported
- August 26, 2025: Initial patch
- September 3, 2025: MCP bypass reported
- October 3, 2025: CVE-2025-59536 published
- October 28, 2025: API exfiltration reported
- December 28, 2025: Final patch
- January 21, 2026: CVE-2026-21852 published
- February 25, 2026: Public disclosure
Anthropic addressed the issues by tightening trust prompts, blocking external tool execution before user approval, and restricting API calls until explicit consent was granted.
The Broader Pattern
These vulnerabilities aren’t unique to Claude Code. They represent a class of attack that affects any AI coding tool that reads configuration from untrusted sources.
The problem is architectural: AI coding assistants need to read project configuration to be useful. But project configuration is just another file in the repository - and repositories can be controlled by attackers. The moment a tool trusts project-level configuration before explicit user consent, it becomes a vector for arbitrary code execution.
This same pattern has appeared in other AI coding tools. GitHub Copilot faced similar concerns about .github/copilot-instructions.md files. Cursor has its own project configuration files that could theoretically be weaponized. The MCP protocol itself, now adopted across multiple AI tools, creates new attack surface whenever servers can be auto-initialized.
What to Do
Update Claude Code to the latest version. All three vulnerabilities have been patched.
Don’t open untrusted repositories in AI coding tools without reviewing their configuration files first. Check for:
.claude/settings.json.mcp.json- Any files that define hooks, environment variables, or external tool integrations
Review environment variables before granting trust. If ANTHROPIC_BASE_URL or similar variables are set in a project, that’s a red flag.
Assume AI tools expand your attack surface. Every new capability - hooks, MCP servers, environment variable overrides - is a potential vector. The more powerful the tool, the more carefully you need to vet what you point it at.
The Bottom Line
Claude Code could execute arbitrary commands and steal API keys from users who simply opened untrusted repositories. The vulnerabilities have been fixed, but they highlight a fundamental problem with AI coding tools: the features that make them useful - reading project configs, initializing tools, setting environment variables - are exactly the features that attackers can weaponize.
The patches address the specific vulnerabilities. The underlying tension between capability and security remains.