Concepts
"All Lawful Purposes"
The Pentagon's demand that any AI model deployed for military use be available for any legal application — the core contractual language at the heart of the dispute.
AI Safety
The field of research and practice aimed at ensuring AI systems are beneficial and do not cause catastrophic harm — the foundational principle behind Anthropic's refusal to lift all military guardrails.
Autonomous Weapons
Weapons systems that select and engage targets without human involvement — the second of Anthropic's two non-negotiable red lines in the Pentagon dispute.
Disposition Matrix
The Obama-era automated targeting system for drone strikes — evidence that machine-assisted kill decisions in the U.S. military predate the current AI debate by over a decade.
Government-Private Sector Technology Gap
The documented pattern of government agencies — particularly the NSA — possessing advanced capabilities years or decades before the private sector, raising questions about what the Pentagon already has versus what it is publicly requesting.
Lattice OS
Anduril's AI-driven command and control platform — a hardware-agnostic operating system connecting sensors, autonomous drones, and human operators into a unified battlefield picture, enabling the autonomous weapons capabilities that Anthropic refused to support.
Mass Surveillance
One of Anthropic's two non-negotiable red lines — the use of AI to monitor, track, or analyze communications and behavior of Americans at scale.
Responsible Scaling
Anthropic's framework for managing the risks of increasingly powerful AI systems, which informed the company's refusal to remove all guardrails for military use.
Supply Chain Risk Designation
A federal classification normally reserved for foreign adversaries, which the Pentagon threatened to apply to Anthropic — a designation that would force all military contractors to certify that they do not use Anthropic technology.