Analysis · Assessment
Mar 12, 2026 · 4 min read · Pentagon

The Governance Vacuum: Pentagon Is Setting AI Policy Through Procurement

No Congressional framework governs military AI deployment. The Pentagon is setting AI policy through procurement decisions and supply chain designations, filling a legislative vacuum with executive fiat.

governance · military-ai · procurement · regulation

The Structural Problem

The Pentagon’s supply chain risk designation of Anthropic and its simultaneous contract award to OpenAI occurred in a policy vacuum, and that vacuum is no accident. Congress has not passed comprehensive legislation governing the development, procurement, or deployment of artificial intelligence in military contexts. There is no statutory framework defining which AI capabilities require Congressional authorization, no mandatory testing regime for AI systems used in combat or nuclear operations, and no legislated process for determining when an AI company’s safety policies conflict with national security requirements and when they advance them.

Into this vacuum, the Department of Defense has stepped with procurement authority. The supply chain risk designation — a tool designed by Congress to address foreign adversary infiltration of technology supply chains — has been repurposed as a policy instrument for punishing a domestic company’s public safety commitments. The OTA contract mechanism — designed to accelerate engagement with nontraditional defense contractors — has been used to award a sole-source classified AI contract without competitive bidding or public transparency. In both cases, existing authorities designed for narrow purposes are being stretched to fill the space where legislation should exist.

The EFF and Brennan Center Analyses

The Electronic Frontier Foundation published an analysis on March 4, 2026, arguing that the Anthropic designation represents “the use of supply chain security authorities as a content-based restriction on corporate speech” and warning that it establishes a precedent under which “any technology company’s public communications about the limitations or appropriate use of its products can be used as the basis for government exclusion.” The EFF noted that FASARA was never intended to evaluate domestic companies’ policy positions and that its procedural protections — designed for cases involving classified intelligence about foreign threats — are wholly inadequate for adjudicating disputes about a US company’s published safety commitments.

The Brennan Center for Justice reached a complementary conclusion through a different analytical lens, arguing that the designation exposes a fundamental gap in the separation of powers: “When the executive branch can unilaterally define which AI safety positions are compatible with national security and which are not — without Congressional authorization, without judicial review, and without any adversarial process — it has effectively claimed the power to regulate AI company speech through procurement exclusion.” The analysis called for Congressional action to establish an independent review body for military AI procurement, analogous to the role CFIUS plays in reviewing foreign investment.

Policy by Procurement

The deeper pattern is not unique to AI. The federal government has a long history of using procurement authority as a policy lever — from environmental standards imposed through federal contracting requirements to diversity mandates enforced through supplier qualification. But those procurement conditions have historically been forward-looking requirements (“if you want to sell to the government, meet these standards”) rather than retroactive punishments (“because you said something we disagree with, you are now excluded”).

The Anthropic designation inverts the traditional procurement-as-policy model. Instead of the government setting standards that companies must meet, the government is punishing a company for setting its own standards. The message to the AI industry is not “here is what we require” but “do not tell us what you require.” This distinction matters because it eliminates the possibility of compliance. Anthropic cannot comply with the designation by changing its behavior — the designation is based on speech that has already occurred. The only way to avoid future designations is to avoid future public safety commitments, which is precisely the chilling effect that Microsoft’s amicus brief identified.

What Legislation Would Look Like

The absence of legislation is not because the issues are too complex or too new for Congressional action. The EU AI Act, the UK’s AI Safety Institute framework, and Canada’s Artificial Intelligence and Data Act all demonstrate that legislative frameworks for AI governance are achievable. What is missing in the US context is political will — partly because the AI industry has lobbied against prescriptive regulation, and partly because the current administration has demonstrated that it prefers executive authority over legislative frameworks that would constrain its discretion.

The result is a governance vacuum in which the most consequential AI policy decisions in American history — which AI systems are trusted with nuclear command and control, which companies are excluded from national defense, which safety commitments are rewarded and which are punished — are being made through procurement actions that receive less Congressional oversight than a highway construction contract.
