
GitHub Copilot

The most widely deployed AI coding tool with 20 million users and 42% market share. Lab results show 55% faster code completion on isolated tasks. Production data shows larger PRs, higher review costs, and more security vulnerabilities. The canonical example of the J-curve problem.

GitHub Copilot is the most widely deployed AI coding tool in the world. It’s also the tool that most clearly illustrates the gap between AI coding’s marketing narrative and its operational reality.

The Numbers

  • Users: 20 million
  • Market share: 42% among AI coding tools
  • Lab performance: 55% faster code completion on isolated tasks
  • Production reality: Larger PRs, higher review costs, more security vulnerabilities

The J-Curve Problem

Copilot is the canonical example of the J-curve: adding a powerful tool to an existing workflow can make you slower, not faster, because workflow disruption outweighs generation speed.

The METR 2025 study captured this precisely: experienced developers using AI tools (similar to Copilot) took 19% longer to complete tasks, while believing they were 24% faster. Bolted onto their existing workflows, the tool made them slower, not faster.
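The size of that perception gap can be made concrete with a quick calculation (the baseline task time below is a hypothetical placeholder; only the 19% and 24% figures come from the study):

```python
baseline_hours = 10.0                    # hypothetical task time without AI
actual = baseline_hours * 1.19           # measured: 19% longer with AI
perceived = baseline_hours * (1 - 0.24)  # self-reported: felt 24% faster
gap = actual / perceived                 # how far off self-assessment was
print(f"actual: {actual:.1f}h, perceived: {perceived:.1f}h, gap: {gap:.2f}x")
```

On these numbers, developers' felt task time understates their real task time by more than half again.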

One senior engineer’s observation: “Copilot makes writing code cheaper but owning it more expensive.”

The Distinction That Matters

Copilot operates at Level 0–1 in Dan Shapiro's framework: suggesting lines and handling discrete, well-scoped tasks. It doesn't redesign the workflow; it bolts onto the existing one.

Teams seeing 25–30% productivity gains are doing something different: they’ve redesigned their entire workflow around AI, not just added a tool. That’s the distinction between Level 1 and Level 3+.

Workspace / Autonomous Mode

GitHub has been developing Copilot Workspace, which aims to handle full feature implementation from a task description. This pushes Copilot toward Level 3–4 functionality, though adoption remains limited as of early 2026.

Security Concerns

Production data showing more security vulnerabilities in Copilot-assisted code is one of the strongest arguments for StrongDM's external scenario testing approach: holding back tests the agent never sees during development, designed specifically to catch what the agent misses.
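As a minimal sketch of what such a holdout check might look like (an illustrative example, not StrongDM's actual tooling; the pattern list and function names are hypothetical), an external test could scan agent-generated code for a vulnerability class the agent was never optimized against, here SQL built by string interpolation:

```python
import re

# Hypothetical holdout checks kept outside the agent's development loop,
# targeting one vulnerability class common in AI-generated code: SQL queries
# built by string interpolation instead of parameterized placeholders.
PATTERNS = {
    "f-string SQL": re.compile(r'execute\(\s*f["\']'),
    "concatenated SQL": re.compile(r'execute\([^)]*["\']\s*\+'),
}

def scan_generated_code(source: str) -> list[str]:
    """Return one finding per line that matches a known-bad pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append(f"line {lineno}: {name}: {line.strip()}")
                break
    return findings

vulnerable = 'cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")'
safe = 'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))'
print(scan_generated_code(vulnerable))  # flags the interpolated query
print(scan_generated_code(safe))        # empty: parameterized query passes
```

The point is the separation, not the specific patterns: because the agent never sees these checks, it cannot learn to satisfy the test without actually avoiding the vulnerability.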