A routine npm release by Anthropic exposed over 500,000 lines of Claude Code’s source, much of it AI-generated. The incident underscores the urgent need for robust release controls and transparency in the age of automated software development.
Anthropic’s closely guarded Claude Code source was accidentally published during a routine update, spilling 512,000 lines and revealing proprietary technology, undisclosed projects, and unreleased AI models. The fallout from this unprecedented exposure could reshape the competitive landscape of artificial intelligence.
A packaging error at Anthropic exposed Claude Code’s AI internals via npm, opening the door to security vulnerabilities, supply-chain attacks, and a surge of hacker scrutiny.
A single accidental file release has exposed Anthropic’s flagship Claude Code to the world, revealing technical secrets and controversial practices while creating a new wave of security risks for the AI industry.