Rival developers just gained insight into how Anthropic built its popular AI-powered coding assistant tool, Claude Code.
The company accidentally leaked part of the internal source code for Claude Code during a release, a spokesperson confirmed to Business Insider on Tuesday.
“Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said in a statement. “This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.”
An X post with a link to, and a screenshot of, what appeared to be internal source code for Claude Code had racked up 26 million views as of Tuesday evening.
The exposed code was related to Claude Code itself, not the underlying AI models.
The leak could give a leg up to Anthropic’s rivals by offering an inside look into one of its most popular products. It also raises security questions about a company that has positioned itself as focused on AI safety.
The leak comes after a period of growth for Anthropic, fueled by a very public breakup with the Pentagon in February. After CEO Dario Amodei refused to back down in a dispute over how its AI could be used, the Defense Department instead struck a deal with OpenAI.
Last week, US District Judge Rita Lin granted a temporary injunction blocking the supply chain risk designation.
Following the dispute, Anthropic’s Claude chatbot saw a surge of downloads over the past month, briefly rising to No. 1 in the US Apple App Store.