Anthropic
AI safety company and maker of Claude. It refused to lift all safety guardrails for Pentagon military use, was banned from all federal government contracts by President Trump, and became the first American company ever designated a “supply chain risk to national security.”
Anthropic was founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company has positioned itself as the “safety-first” AI lab, developing Claude with an emphasis on responsible AI deployment.
Position in the Dispute
Anthropic was willing to work with the military — and did, deploying Claude on classified networks via Palantir. But the company maintained two non-negotiable red lines:
- No mass surveillance of Americans: AI must not be used to monitor, track, or analyze Americans’ communications at scale
- No fully autonomous weapons: Weapons systems must not select and engage targets without human involvement
This made Anthropic the “primary holdout” among the four AI companies contracted by the Pentagon.
Financial Context
Anthropic’s annual revenue was approximately $14 billion, so the $200 million Pentagon contract represented less than 1.5% of revenue. But the threatened “supply chain risk” designation would have had far-reaching consequences beyond the contract itself, potentially forcing every military contractor to certify that it does not use Anthropic technology.
Internal Dynamics
Reports indicated that Anthropic also had to navigate internal disquiet among its engineers about working with the Pentagon at all. The company’s position — willing to support the military but with hard limits — represented a middle path that satisfied neither hawks who wanted full cooperation nor doves who opposed any military work.
The Ban
On February 27, 2026, the dispute reached its conclusion. After Anthropic formally rejected the Pentagon’s final terms on February 26, President Trump ordered every federal agency to immediately cease using Anthropic’s technology, calling the company “leftwing nut jobs.” Defense Secretary Hegseth then designated Anthropic a “supply chain risk to national security” — a classification never before applied to an American company.
Anthropic announced it would challenge the designation in court, calling it “legally unsound” and arguing that under federal law, the designation can only affect Claude’s use within Defense Department contracts, not how contractors use it for other customers.
Aftermath
Within hours of the ban, OpenAI announced a deal to replace Anthropic on the Pentagon’s classified networks. Claude had been the only AI model operating in classified military systems, and its removal required a six-month phaseout. The speed of OpenAI’s replacement deal suggested it had been negotiated in advance.