Anthropic Accidentally Leaks Entire Claude Code Source Code Online

Claude Code. © Anthropic


At the end of March 2026, Anthropic suffered an embarrassing mishap: the complete source code of its AI tool Claude Code accidentally ended up on the internet. The culprit was a single misconfigured file included when the program was published. Here we explain in plain terms exactly what happened, what the code reveals, and what it means.

What happened?

Anthropic publishes Claude Code as a package in the npm registry, a public directory for software packages. When a new version was uploaded, a so-called source map was accidentally included: a technical auxiliary file that is not normally intended for the public. This file was 59.8 megabytes in size and contained the complete, readable original code of the tool.
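Mistakes like this usually trace back to the publish configuration. npm's package.json supports a `files` allowlist that controls exactly which files end up in a published package, and patterns can be negated to exclude build artifacts. A minimal sketch (the package name and file layout are invented for illustration, not Anthropic's actual setup):

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

With a configuration along these lines, the `.map` files never leave the build machine, regardless of what the bundler emits into `dist/`.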

A security researcher named Chaofan Shou discovered the problem and made it public. Within a few hours, complete copies of the code were circulating on GitHub. Anthropic responded quickly, removed the affected version, and replaced it with a cleaned-up variant. By then, however, the damage could no longer be undone.

What are source maps anyway?

Professional software is usually heavily compressed and obfuscated before release so that it runs faster and is harder to read. Source maps are files that can reverse this process: they show what the original, easily readable code looked like. For developers they are useful during testing, but they have no place in finished products.
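The danger lies in a detail of the format: a version-3 source map can embed every original file verbatim in its `sourcesContent` field. A minimal sketch of why a leaked `.map` file is equivalent to leaking the source itself (the file name and contents here are invented for illustration):

```typescript
// A version-3 source map, as bundlers emit alongside minified output.
// The optional "sourcesContent" array embeds each original file verbatim,
// which is why accidentally publishing the .map file exposes the code.
interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: string[];
  mappings: string;
}

// Hypothetical map for a one-line original file.
const leakedMap: SourceMapV3 = {
  version: 3,
  sources: ["src/cli.ts"],
  sourcesContent: ['console.log("original, readable source");'],
  mappings: "AAAA",
};

// Once the map is public, recovering the readable originals is trivial.
function recoverSources(map: SourceMapV3): Record<string, string> {
  const out: Record<string, string> = {};
  map.sources.forEach((name, i) => {
    out[name] = map.sourcesContent?.[i] ?? "(not embedded)";
  });
  return out;
}

console.log(recoverSources(leakedMap));
```

Real-world extraction also uses the `mappings` field to line code back up, but the embedded `sourcesContent` alone is enough to read everything.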

What is in the code?

The leaked material comprises around 1,900 files with over 512,000 lines of code. A look inside reveals some interesting details.

Technology in the background

  • Bun instead of Node.js: Claude Code relies on a more modern runtime environment that starts faster and supports TypeScript natively.
  • React in the terminal: The terminal user interface is built with React, a technology otherwise more commonly associated with websites.
  • Three-layer memory: The tool manages information across three levels, coordinated via a central file called Memory.md.

Internal codenames and planned features

  • A feature called Kairos appears over 150 times. It describes a background process that summarizes information while the user is idle.
  • Internal codenames for models are visible: Capybara stands for Claude 4.6, Fennec for Opus 4.6, and a model called Numbat is still in the testing phase.
  • An internal roadmap hints at plans for longer autonomous tasks, improved memory, and collaboration between multiple AI agents.

The “Undercover Mode”: The most explosive discovery

A section of the code referred to as “Undercover Mode” has attracted particular attention. According to its description, it would allow Claude Code to contribute to public open-source projects without disclosing its AI origin.

This raises serious questions. Open-source communities are built on transparency and mutual trust. If AI contributions are not identifiable as such, this trust is undermined. At the same time, there is a risk that other companies will develop similar features while being less careful in doing so.

Opportunities and risks of the leak

   
| Legitimate use | Problematic use |
| --- | --- |
| Security researchers can examine the code for vulnerabilities | Attackers can specifically search for weaknesses |
| Developers can learn from the architecture | Third parties could rebuild the tool with malicious code |
| Academic research into AI agent systems | Protective mechanisms become easier to circumvent |
| Competitors gain insight into design decisions | Planned features such as the Undercover Mode could be misused |

What does this mean for users?

For people currently using Claude Code, there is no immediate risk based on what is known so far. According to Anthropic, no access credentials or personal data were compromised. The company described the incident as human error, not a hacker attack.

The medium-term risk lies elsewhere: anyone who knows the code can target future versions of the tool more precisely. Anthropic has so far not commented beyond an initial confirmation of the incident. The case illustrates strikingly how a single small mistake when publishing software can have far-reaching consequences.

