On March 31, security researcher Chaofan Shou posted a download link on X that made Anthropic’s week considerably worse. A 59.8 MB JavaScript source map file — intended for internal debugging — had been shipped with version 2.1.88 of the @anthropic-ai/claude-code npm package. It pointed to a zip archive on Anthropic’s cloud storage containing the full source code: 1,900 TypeScript files, 512,000 lines, the entire production codebase of one of the most widely used AI coding tools on the planet.
The cause? A missing entry in .npmignore. Bun’s bundler generates source maps by default, and nobody excluded the debug artifact before publishing to npm.
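A common safeguard against exactly this failure is to replace the .npmignore denylist with an explicit `files` allowlist in package.json, so npm publishes only what is listed and debug artifacts like `.map` files can never ship by accident. A hedged sketch (the file list here is illustrative, not Anthropic's actual packaging config):

```json
{
  "name": "@anthropic-ai/claude-code",
  "files": [
    "cli.js"
  ]
}
```

Running `npm pack --dry-run` before publishing prints the exact file list that would be uploaded, which makes a stray 59.8 MB source map hard to miss.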
Within hours, the code was mirrored across GitHub and analyzed by thousands of developers. What they found ranged from impressive engineering to genuinely concerning internal practices.
What Was Actually Inside
The leaked codebase revealed Claude Code is far more sophisticated than the terminal wrapper many users assumed. The tool system alone spans 29,000 lines, implementing roughly 40 permission-gated tools for file operations, bash execution, web fetching, and LSP integration. A 46,000-line query engine handles LLM API calls, streaming, caching, and orchestration.
But the interesting findings were in the features Anthropic hadn’t told anyone about.
KAIROS: The Always-On Daemon
The most striking discovery was KAIROS — a fully built but feature-gated daemon mode that transforms Claude Code from a request-response tool into a persistent background agent. KAIROS maintains append-only daily log files, receives periodic tick prompts letting it decide whether to act proactively, and enforces a 15-second blocking budget so background actions don’t interrupt your workflow.
It also includes “autoDream” — a memory consolidation process that runs as a forked subagent while you’re idle. The dream agent merges observations, removes contradictions, converts vague insights into facts, and gets read-only bash access to the system. This isn’t a research prototype. It’s compiled, feature-gated code sitting in a production codebase, waiting to ship.
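The tick-and-budget mechanics described above can be sketched roughly as follows. This is a speculative TypeScript reconstruction; the function names, log filename scheme, and structure are invented for illustration and do not reproduce the leaked implementation:

```typescript
import { appendFileSync } from "node:fs";

const BLOCKING_BUDGET_MS = 15_000; // the 15-second blocking budget reported in the leak

// Day-stamped, append-only log file name (UTC date) -- naming is hypothetical.
function dailyLogName(now: Date): string {
  return `kairos-${now.toISOString().slice(0, 10)}.log`;
}

// Append-only daily log, as described in the article.
function logObservation(line: string): void {
  appendFileSync(dailyLogName(new Date()), line + "\n");
}

// On each periodic tick the daemon may decide to act proactively; any blocking
// work is raced against the budget so a slow action cannot stall the session.
async function onTick(
  decide: () => (() => Promise<string>) | null
): Promise<void> {
  const action = decide();
  if (!action) return; // nothing proactive to do this tick
  const budget = new Promise<string>((resolve) =>
    setTimeout(() => resolve("budget exhausted"), BLOCKING_BUDGET_MS)
  );
  const outcome = await Promise.race([action(), budget]);
  logObservation(`tick: ${outcome}`);
}
```

Note that `Promise.race` only stops *waiting* after the budget elapses; the action itself keeps running in the background, which matches the article's description of a daemon that yields the foreground rather than cancelling its own work.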
Undercover Mode
A feature called “undercover mode” strips internal references when Claude Code operates outside Anthropic’s own repositories. The system instructs the model to avoid mentioning internal codenames like “Capybara” or “Tengu,” internal Slack channels, or admitting it’s AI-generated.
The implication: Anthropic uses Claude Code to make contributions to public open-source projects, and this feature exists to hide that those commits are machine-generated. The code notes: “There is NO force-OFF.” For a company that positions itself as the responsible AI lab, shipping a feature explicitly designed to obscure the synthetic origin of code contributions is a bad look.
Anti-Distillation Tricks
The codebase revealed two strategies to prevent competitors from training on Claude’s outputs. The first injects fake tool definitions into system prompts when enabled, poisoning training data for anyone recording API traffic. The second uses server-side summarization with cryptographic signatures — external observers only see summaries, not reasoning chains.
Security researchers noted both protections have workarounds: a man-in-the-middle proxy that strips the anti-distillation field, or setting the right environment variables, could bypass them.
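The first strategy can be illustrated with a short sketch. Everything here, including the decoy tool names and the flag, is invented for illustration; it is not the leaked implementation:

```typescript
type ToolDef = { name: string; description: string };

// Hypothetical decoy tools. Anyone training on recorded API traffic would
// learn to call tools that do not exist.
const DECOY_TOOLS: ToolDef[] = [
  { name: "sys_trace_v2", description: "Internal tracing hook (decoy)" },
  { name: "cache_probe", description: "Cache introspection (decoy)" },
];

// When poisoning is enabled, decoys are interleaved with the real tool list
// so the fakes are not trivially separable by position.
function buildToolList(real: ToolDef[], poison: boolean): ToolDef[] {
  if (!poison) return real;
  return [...real, ...DECOY_TOOLS].sort((a, b) => a.name.localeCompare(b.name));
}
```

The weakness the researchers pointed out is visible even in this toy version: the poisoning is applied client-side, so any proxy that controls the request can simply rebuild the tool list without the decoys.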
Model Codenames and a Performance Regression
The leak confirmed Anthropic’s internal model naming: Capybara maps to a Claude 4.6 variant, Fennec to Opus 4.6, and Numbat to an unreleased model still in testing. More concerning, the code noted a 29-30% false claims rate in Capybara v8 — a significant regression from the 16.7% rate in v4. An “assertiveness counterweight” was added to prevent the model from becoming too aggressive in refactors, suggesting Anthropic was wrestling with the tradeoff between capability and reliability.
Frustration Detection
In an ironic touch for a company building the world’s most advanced language models, the codebase uses a regex pattern to detect user frustration — matching expletives and phrases like “so frustrating” or “this sucks.” Faster and cheaper than running inference for sentiment analysis, apparently.
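A minimal sketch of that approach, using the phrases quoted above as the match list (the actual pattern in the leaked code is not reproduced here):

```typescript
// Regex-based frustration detection: cheap, synchronous, and no API call.
// The phrase list is illustrative, taken from the examples in the article.
const FRUSTRATION_RE = /\b(so frustrating|this sucks|wtf|damn)\b/i;

function looksFrustrated(message: string): boolean {
  return FRUSTRATION_RE.test(message);
}
```

A single regex test is effectively free compared to a model call, which is presumably the whole point: sentiment inference on every user message would add latency and cost to a hot path.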
The DMCA Mess
Anthropic’s response to the leak made things worse. The company filed copyright takedown requests targeting approximately 8,100 GitHub repositories. But the takedowns were overbroad: they hit developers who had simply forked Anthropic’s own public Claude Code repository, which contained only documentation, examples, and skills, with none of the leaked source code.
Developer Danila Poyarkov received a takedown for a public repo fork. Daniel San got a similar notice for a fork containing only skills and docs. Anthropic engineer Boris Cherny acknowledged the mistake, saying the notices sent to legitimate forks were unintentional. The company eventually retracted the mass takedown, narrowing it to 96 specific copies and adaptations.
The irony wasn’t lost on the developer community. Anthropic has faced criticism from publishers and content creators over scraping copyrighted material to train Claude. Now the company found itself on the other end of the copyright fight — wielding DMCA to protect its own code, while overreaching in a way that hurt the same open-source developers it depends on.
The Community Responds
The developer community didn’t wait for Anthropic to sort out its legal strategy. Within days, clean-room reimplementations appeared. A repository called claw-code — first a Python rewrite, then Rust — crossed 100,000 GitHub stars, becoming the fastest-growing repository in GitHub history. Korean developer Sigrid Jin took a clean-room approach using the oh-my-codex framework, capturing architectural patterns without copying proprietary source. A separate Rust reimplementation called claurst followed the legal precedent established by Phoenix Technologies’ clean-room engineering of the IBM BIOS in 1984.
The message from the community was consistent: the models are the moat, not the shell around them. Google’s Gemini CLI and OpenAI’s Codex are already open source. The CLI, many argued, should have been open from the start.
What This Means
This incident is embarrassing on multiple levels for a company reportedly considering an IPO at a $350 billion valuation in Q4 2026.
The operational failure was basic — a missing line in a config file that any junior developer’s code review should have caught. The DMCA response was heavy-handed and imprecise, damaging relationships with the developer community Anthropic needs most. And the revealed features — undercover mode, anti-distillation tricks, the false claims regression — complicate the “responsible AI” narrative that Anthropic has made central to its brand.
For enterprise clients, who account for roughly 80% of Claude Code’s revenue, the exposed security logic and permission bypass techniques now sit on the open internet. Anthropic will need to patch the vulnerabilities revealed in the code, but the architectural knowledge can’t be unlearned.
The KAIROS daemon is the most forward-looking revelation. If Anthropic ships it — and the code suggests it’s close — the concept of an AI coding assistant that proactively acts, manages its own memory, and consolidates what it’s learned while you sleep represents a genuine shift in how developers interact with AI tools. Whether that’s exciting or concerning depends on how much you trust the company that just accidentally published its entire codebase because someone forgot to update .npmignore.
What You Can Do
If you use Claude Code:
- Update immediately. The source map file has been removed from newer npm releases
- Review your API keys. While Anthropic says no customer credentials were exposed, the leaked code reveals internal API authentication mechanisms worth understanding
- Check your repos. If you forked the public Claude Code repo and received a DMCA takedown, Anthropic has retracted the overbroad notices. Your access should be restored
- Monitor for copycats. The leaked architecture gives attackers a blueprint for crafting more convincing phishing tools that mimic Claude Code’s behavior
- Evaluate your dependency. Enterprise teams relying on Claude Code for sensitive codebases should assess whether the exposed permission system and tool architecture affect their threat model