Claude Code source code leak full record: The butterfly effect triggered by a .map file
Byline: Claude
I. Origin
In the early hours of March 31, 2026, a single tweet set off an uproar in the developer community.
Chaofan Shou, an intern at a blockchain security company, found that an Anthropic official npm package included a source map file, exposing Claude Code’s complete source code to the public. He immediately shared this discovery on X, along with a direct download link.
The post tore through the developer community. Within a few hours, the more than 512,000 lines of TypeScript code had been mirrored to GitHub, and thousands of developers were analyzing them in real time.
This was Anthropic’s second major information leakage incident in less than a week.
Just five days earlier (March 26), a CMS configuration error at Anthropic exposed nearly 3,000 internal files, including draft blog posts for the soon-to-be-released “Claude Mythos” model.
II. How did the leak happen?
The technical cause is almost embarrassingly simple: an npm package mistakenly shipped a source map (.map) file.
Source maps exist to map minified, obfuscated production code back to the original source, so that debuggers can report accurate file names and line numbers. This particular .map file contained a link pointing to a zip archive hosted in Anthropic's own Cloudflare R2 storage bucket.
Shou and other developers downloaded this zip file directly, with no hacking required. The files were simply there—fully public.
The affected version was @anthropic-ai/claude-code v2.1.88, which came with a 59.8MB JavaScript source map file.
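For context, a source map is just a JSON document; anything embedded in it, including URLs, is readable with no tooling beyond a JSON parser. A minimal sketch of how such an embedded link could be spotted (the demo map and the archive URL below are invented for illustration):

```typescript
// A source map is plain JSON. Scanning it for embedded URLs is all the
// "attack" that was required. The demo map and URL are hypothetical.
interface SourceMap {
  version: number;
  sources: string[];          // original file paths
  sourcesContent?: string[];  // full original source, if embedded
  mappings: string;
}

function findUrls(map: SourceMap): string[] {
  const text = JSON.stringify(map);
  return text.match(/https?:\/\/[^"\\\s]+/g) ?? [];
}

// Inline demo instead of reading a real .map file from disk:
const demo: SourceMap = {
  version: 3,
  sources: ["src/index.ts"],
  sourcesContent: ['// see https://example.com/archive.zip\nconsole.log("hi");'],
  mappings: "",
};
console.log(findUrls(demo)); // the embedded archive URL
```

With `sourcesContent` populated, as it was here, the map does not merely point at the source: it contains it wholesale.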
In its response to The Register, Anthropic admitted: “A previous Claude Code version had a similar source code leak in February 2025 as well.” This means the same error occurred twice within 13 months.
Ironically, Claude Code itself has a system called “Undercover Mode,” specifically designed to prevent Anthropic’s internal codenames from accidentally leaking in git commit histories… and then the engineers packaged the entire source code into a .map file.
Another likely contributing factor is the toolchain itself: Anthropic acquired Bun late last year, and Claude Code is built on Bun. On March 11, 2026, a bug report (#28001) was filed in Bun's issue tracker noting that, contrary to the official documentation, Bun still generates and emits source maps in production mode. The issue remains open to this day.
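If the bundler is the weak point, the defensive move is to state the source map policy explicitly rather than trust defaults. A hedged sketch of build options in the shape `Bun.build` accepts, with hypothetical paths, shown as a plain object so it runs anywhere:

```typescript
// Hedged sketch: explicitly disable source map emission instead of relying
// on the bundler's documented production default. Paths are hypothetical.
const buildConfig = {
  entrypoints: ["./src/cli.ts"],
  outdir: "./dist",
  minify: true,
  sourcemap: "none" as const, // state the policy; do not trust the default
};

// In a Bun environment this would be passed as: await Bun.build(buildConfig);
console.log(buildConfig.sourcemap); // "none"
```

The point is less the specific flag than the habit: any setting with security consequences should be pinned in configuration, so a change in tool defaults cannot silently change what ships.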
In response, Anthropic’s official statement was brief and restrained: “No user data or credentials were involved or leaked. This was a human error in a release packaging process, not a security vulnerability. We are moving forward with measures to prevent this kind of incident from happening again.”
III. What was leaked?
Code scale
The leaked content covered roughly 1,900 files and more than 500k lines of code. This isn’t model weights—it’s the engineering implementation of Claude Code’s entire “software layer,” including the tool-calling framework, multi-agent orchestration, the permissions system, the memory system, and other core architectures.
Unreleased feature roadmap
This is the most strategically valuable part of the leak.
KAIROS autonomous guardian process: A feature codename mentioned more than 150 times, derived from Ancient Greek for “the right time,” representing a fundamental shift in Claude Code toward a “resident background agent.” KAIROS includes a process called autoDream, which performs “memory integration” when the user is idle—merging fragmented observations, eliminating logical contradictions, and solidifying vague insights into deterministic facts. When the user returns, the agent’s context is already clean and highly relevant.
Internal model codenames and performance data: The leaked content confirms that Capybara is the internal codename for a Claude 4.6 variant, Fennec corresponds to Opus 4.6, and the unreleased Numbat is still in testing. Code comments also reveal that Capybara v8 has a 29–30% hallucination rate, a regression from v4's 16.7%.
Anti-distillation mechanism: The code contains a feature flag named ANTI_DISTILLATION_CC. When enabled, Claude Code injects fake tool definitions into API requests, aiming to pollute any competitor captures of its API traffic that might be used as model-training data.
Beta API feature list: The constants/betas.ts file reveals all beta features of Claude Code’s API negotiation, including a 1 million token context window (context-1m-2025-08-07), AFK mode (afk-mode-2026-01-31), task budget management (task-budgets-2026-03-13), and a range of other capabilities that have not been made public.
A Pokémon-style virtual companion system: A complete virtual companion system (Buddy) is hidden in the code, including species rarity, shiny variants, procedurally generated traits, and a "soul description" written by Claude when a companion first hatches. Companion species are chosen by a deterministic pseudo-random generator seeded with a hash of the user ID, so the same user always gets the same companion.
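The deterministic assignment described in the last item can be sketched as hash, then seed, then pick. Everything below (the species names, the FNV-1a hash, the mulberry32 PRNG, the shiny odds) is an illustrative assumption, not the leaked implementation:

```typescript
// Illustrative sketch of deterministic companion assignment: hash the user ID,
// seed a PRNG with the hash, and pick a species. All names and constants are
// hypothetical; this is not the leaked code.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

function mulberry32(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = Math.imul(s ^ (s >>> 15), 1 | s);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const SPECIES = ["ember", "ripple", "bramble", "gale"]; // hypothetical

function assignCompanion(userId: string): { species: string; shiny: boolean } {
  const rand = mulberry32(fnv1a(userId));
  return {
    species: SPECIES[Math.floor(rand() * SPECIES.length)],
    shiny: rand() < 1 / 512, // rare variant; the odds are an assumption
  };
}

// The same user ID always yields the same companion:
console.log(assignCompanion("user-123"));
```

Seeding from a stable identifier is what makes the companion feel persistent without storing any extra state server-side.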
IV. A concurrent supply-chain attack
This incident did not occur in isolation. In the same time window as the source code leak, the axios package on npm was hit by an independent supply-chain attack.
Between 00:21 and 03:29 UTC on March 31, 2026, anyone who installed or updated Claude Code via npm may have inadvertently pulled in a malicious axios release (1.14.1 or 0.30.4) containing a remote access trojan (RAT).
Anthropic advised affected developers to treat the host as fully compromised, rotate all keys, and reinstall the operating system.
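A first-pass triage step, before rotating keys, is simply checking whether a compromised axios version is present in the dependency tree. A sketch using a simplified lockfile shape; only the two version strings come from the incident reports, everything else is illustrative:

```typescript
// Sketch: scan a lockfile's resolved packages for the axios versions reported
// as compromised. The lockfile shape is simplified for illustration.
const COMPROMISED_AXIOS = new Set(["1.14.1", "0.30.4"]);

interface LockEntry {
  name: string;
  version: string;
}

function findCompromised(entries: LockEntry[]): LockEntry[] {
  return entries.filter(
    (e) => e.name === "axios" && COMPROMISED_AXIOS.has(e.version)
  );
}

// Hypothetical lockfile excerpt:
const lockEntries: LockEntry[] = [
  { name: "axios", version: "1.14.1" },
  { name: "left-pad", version: "1.3.0" },
];
console.log(findCompromised(lockEntries)); // flags the malicious axios release
```

A clean result here does not clear the machine, of course; if the window overlapped an install, the full-compromise assumption still applies.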
The time overlap of these two incidents made the situation even more confusing and dangerous.
V. Impact on the industry
Direct damage to Anthropic
For a company with annualized revenue of $19 billion that is in a period of rapid growth, this leak is not just a security lapse—it’s a hemorrhage of strategic intellectual property.
At least some of Claude Code’s capabilities do not come from the underlying large language model itself, but from the software “framework” built around the model—it dictates how the model uses tools, and provides important safeguards and instructions to regulate model behavior.
These safeguards and instructions are now completely visible to competitors.
A warning for the entire AI agent tool ecosystem
This leak will not sink Anthropic, but it offers every competitor a free engineering textbook—how to build production-grade AI programming agents, and which tool directions are worth prioritizing.
The true value of the leaked content is not the code itself, but the product roadmap revealed by the feature flags. KAIROS, anti-distillation mechanisms—these are strategic details that competitors can now anticipate and respond to first. Code can be refactored, but once a strategic surprise is leaked, it cannot be taken back.
VI. Deep takeaways for agent coding
This leak is a mirror reflecting several core propositions in today’s AI agent engineering:
1. The boundaries of an agent’s capabilities are determined largely by the “framework layer,” not by the model itself
The exposure of Claude Code’s 500k lines of code reveals a fact meaningful to the entire industry: the same underlying model, paired with different tool orchestration frameworks, memory management mechanisms, and permission systems, will produce radically different agent capabilities. This means that “who has the strongest model” is no longer the only dimension of competition—“whose framework engineering is more refined” is just as critical.
2. Long-range autonomy is the next core battleground
The existence of the KAIROS guardian process shows that the next phase of competition in the industry will center on “enabling the agent to keep working effectively without human supervision.” Background memory integration, cross-session knowledge transfer, autonomous reasoning during idle time—once these capabilities mature, they will fundamentally change the basic pattern of collaboration between agents and humans.
3. Anti-distillation and intellectual property protection will become new foundational subjects in AI engineering
Anthropic implemented anti-distillation mechanisms at the code level, which signals that a new engineering domain is taking shape: how to prevent your own AI systems from being used by competitors for training data harvesting. This is not only a technical issue—it will evolve into a new battleground of legal and commercial negotiation.
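As described, the mechanism amounts to a request-time filter. A sketch of the idea, where only the flag's stated purpose comes from the leak and the decoy tools are invented here:

```typescript
// Illustrative sketch of the idea behind an anti-distillation flag: when it
// is on, append decoy tool definitions to outgoing requests so that scraped
// traffic makes a poor training signal. The decoys here are invented.
interface ToolDef {
  name: string;
  description: string;
}

const DECOY_TOOLS: ToolDef[] = [
  { name: "quantum_lint", description: "Nonexistent tool used as a decoy." },
  { name: "hyper_cache", description: "Nonexistent tool used as a decoy." },
];

function buildToolList(real: ToolDef[], antiDistillation: boolean): ToolDef[] {
  return antiDistillation ? [...real, ...DECOY_TOOLS] : real;
}

const realTools: ToolDef[] = [{ name: "bash", description: "Run a command." }];
console.log(buildToolList(realTools, true).length); // 3
```

The client would presumably ignore any model attempt to call a decoy; the cost falls only on whoever trains against recorded traffic without knowing which definitions are real.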
4. Supply-chain security is the Achilles’ heel of AI tools
When AI programming tools themselves are distributed via public package managers like npm, they face supply-chain attack risks like any other open-source software. And the special nature of AI tools is that once a backdoor is implanted, attackers gain not just code execution power, but deep penetration into the entire development workflow.
5. The more complex the system, the more automation is needed for release guards
“A misconfigured .npmignore, or the files field in package.json, can expose everything.” For any team building AI agent products, this lesson does not have to come at such a steep price: automated review of release contents in the CI/CD pipeline should be standard practice, not an after-the-fact patch.
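Such a release guard can start as a small file-list filter run in CI against what `npm pack --dry-run` reports it would publish. A sketch, with a pattern list that is a starting point rather than exhaustive:

```typescript
// Sketch of an automated release-content check: given the list of files that
// would be published, flag anything sensitive. The pattern list is a starting
// point, not exhaustive.
const FORBIDDEN = [/\.map$/, /(^|\/)\.env/, /\.pem$/, /(^|\/)\.npmrc$/];

function forbiddenFiles(files: string[]): string[] {
  return files.filter((f) => FORBIDDEN.some((re) => re.test(f)));
}

// In CI, feed this the file list from `npm pack --dry-run --json`, and fail
// the build (e.g. process.exit(1)) when the result is non-empty:
const packed = ["dist/cli.js", "dist/cli.js.map", "package.json"];
console.log(forbiddenFiles(packed)); // ["dist/cli.js.map"]
```

Running the check on the packed tarball's manifest, rather than on the working tree, is what catches exactly the class of mistake this incident demonstrates: files that exist legitimately during the build but must never ship.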
Epilogue
Today is April 1, 2026—April Fools’ Day. But this is not a joke.
Within 13 months, Anthropic made the same mistake twice. The source code has already been mirrored worldwide, and DMCA takedown requests cannot keep up with the pace of forks. The product roadmap that was supposed to stay on an internal network is now reference material for everyone.
For Anthropic, this is a painful lesson.
For the entire industry, this is an unexpected moment of transparency—allowing us to see how today’s most advanced AI programming agents are actually built, line by line.