Claude Code
Anthropic's agentic coding tool. Launched August 2025, hit $1B ARR by February 2026. 90% of its own codebase is self-written. As of February 2026, accounts for 4% of all public GitHub commits, with Anthropic projecting over 20% by end of 2026. The most widely deployed serious coding agent.
Claude Code is Anthropic’s agentic coding tool — and the product that proved the dark factory concept at commercial scale.
The Self-Writing Story
The headline fact: 90% of Claude Code’s own codebase was written by Claude Code.
Boris Cherny (project lead): “I haven’t personally written code in months.”
Anthropic leadership has estimated that 100% of the code produced at the company is now AI-generated.
This isn’t just noteworthy — it’s the recursion point. The tool used to build dark factories is itself built by a dark factory.
Market Position
- Launched: August 2025
- $1B ARR: Reached February 2026 (6 months post-launch)
- GitHub commits: 4% of all public GitHub commits as of February 2026
- Projection: Anthropic projects >20% by end of 2026
The Inflection Point
Claude 3.5 Sonnet was the model that made long-horizon agentic coding viable. Earlier models could handle discrete tasks but struggled to stay coherent across sessions. Sonnet introduced what practitioners call “sustained coherent work”: the ability to maintain context and make sensible decisions over hours of autonomous operation.
How to Use It
Claude Code runs as a CLI (`claude`) and takes instructions in natural language, with project-level context supplied via a `CLAUDE.md` file. It can:
- Read, write, and modify files
- Run tests and iterate on failures
- Search documentation
- Coordinate multiple tasks in a single session
- Spawn sub-agents for parallel work (agent swarm mode)
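The `CLAUDE.md` context file is plain markdown that Claude Code reads at the start of a session. A minimal sketch of what one might contain (the commands and conventions below are illustrative, not from the source):

```markdown
# Project context for Claude Code

## Commands
- Build: npm run build
- Test: npm test

## Conventions
- TypeScript strict mode; avoid `any`
- New code requires unit tests before merge
```

Keeping build/test commands and house conventions here means the agent does not have to rediscover them every session.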
At Level 3–4 (Dan Shapiro’s framework), Claude Code is the implementation engine. At Level 5, it is the implementation — running autonomously from spec to shipped code.
Limitations
- Context window limits mean very large codebases require careful context management
- Long-horizon tasks still benefit from explicit checkpointing
- Security review of AI-generated code at volume requires new tooling and approaches
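The explicit checkpointing mentioned above can be as simple as persisting task progress between sessions, so a fresh context can resume mid-plan. A hypothetical sketch in Python (the `save_checkpoint`/`load_checkpoint` helpers and file layout are illustrative, not part of Claude Code):

```python
import json
from pathlib import Path

CHECKPOINT = Path("agent_checkpoint.json")

def save_checkpoint(completed: list[str], remaining: list[str]) -> None:
    """Persist task progress so a fresh agent session can resume mid-plan."""
    CHECKPOINT.write_text(json.dumps({"completed": completed, "remaining": remaining}))

def load_checkpoint() -> dict:
    """Return prior progress, or an empty plan if no checkpoint exists."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"completed": [], "remaining": []}

# A long-horizon run records progress after each finished task:
save_checkpoint(["add failing test"], ["implement fix", "run full suite"])
state = load_checkpoint()
print(state["remaining"][0])  # the next task to hand to a new session
```

The point is not the file format but the discipline: every completed step is recorded externally, so a context-window reset loses at most one task, not the whole plan.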