Red Cell: Who Actually Controls Military AI?
Nobody controls military AI. The Pentagon governs through procurement power. Companies govern through acceptable use policies. Congress has legislated nothing. Courts will rule on narrow questions. The EFF is right: civil liberties should not depend on contract negotiations between entities with spotty records on rights.
RED CELL ASSESSMENT — THE GOVERNANCE VACUUM
This assessment challenges all parties. Every actor in the Anthropic-Pentagon crisis claims to be acting responsibly. None of them has the authority, accountability, or mechanisms to deliver on that claim. The uncomfortable conclusion: the question is not who controls military AI. The question is whether anyone does.
Challenge: Anthropic
Confidence in critique: VERIFIED
Anthropic’s red lines are admirable and unprecedented among major AI companies. They are also self-imposed, self-enforced, and contingent on the values of current leadership. No external body audits Anthropic’s compliance with its own Acceptable Use Policy. No contractual mechanism survives a change in CEO, a hostile acquisition, or a board decision to relax restrictions under financial pressure. Dario Amodei’s personal commitment to safety is real, evidenced by years of consistent public statements and organizational decisions. But safety-by-CEO is not governance. It is a benevolent autocracy, and those have a historical shelf life.
What happens in 2030 if Anthropic is acquired by a company with different values? What happens if the board replaces leadership with someone who views military contracts as essential to survival? The red lines vanish because they were never structural — they were personal. Anthropic deserves credit for drawing lines nobody else would draw. It does not deserve credit for building a governance system, because it has not built one.
Challenge: The Pentagon
Confidence in critique: VERIFIED
The supply chain risk designation was designed by Congress to address foreign adversary infiltration of military technology supply chains. Huawei. Kaspersky. Companies with documented ties to hostile intelligence services. Using this tool against a domestic American company because it refused to relax safety restrictions on a policy question — not a security threat — is a category error at best and an abuse of authority at worst.
The precedent is the problem. If the Pentagon can designate any AI company as a supply chain risk for imposing safety restrictions the Pentagon finds inconvenient, then no AI company has meaningful independence. The designation becomes a compliance tool: agree to our terms or we destroy your government business and pressure your commercial customers to follow. This is governance by threat, not governance by law. The Pentagon has legitimate operational needs. Meeting those needs by wielding a foreign-adversary tool against domestic companies is not legitimate.
Challenge: OpenAI
Confidence in critique: VERIFIED
“You’re going to have to trust us.” This is not a policy. It is not governance. It is not enforceable, auditable, or accountable. OpenAI’s guardrails on military use are policy statements without enforcement mechanisms, transparency requirements, or external oversight. The Atlantic asked the right question: who is watching?
The answer is nobody. No independent board reviews OpenAI’s military AI deployments for compliance with stated red lines. No public reporting mechanism exists. No whistleblower protection framework specific to military AI use has been established. If a classified Pentagon program using GPT models crosses OpenAI’s stated red lines, the violation occurs in a classified environment where no external party can observe, document, or report it. The guardrails are not glass — they are vapor. They exist as long as OpenAI says they exist, and verification is structurally impossible.
Challenge: Congress
Confidence in critique: VERIFIED
The SF Examiner stated it plainly: “This week exposed a real governance vacuum.” Congress has failed to legislate on military AI governance despite years of escalating deployment. More than 120 nations support international regulation of autonomous weapons systems; the United States, Russia, and Israel are the primary holdouts. Congress has held hearings. Congress has issued statements. Congress has drafted bills that die in committee. Congress has not passed a single binding law governing AI use in military targeting, intelligence analysis, or autonomous weapons systems.
The Schiff and Cruz signals are early-stage at best. Neither has introduced legislation with bipartisan co-sponsors, committee markup schedules, or floor vote timelines. The defense lobby — Palantir, Anduril, Lockheed Martin, Northrop Grumman — actively opposes constraints. Congressional inaction is not an accident. It is the product of lobbying, institutional inertia, and political risk avoidance. Members of Congress who vote to constrain military AI face attack ads about “weakening national defense.” Members who do nothing face no political cost until something goes catastrophically wrong.
The Uncomfortable Answer
The real problem is not that the wrong entity controls military AI. The real problem is structural: no entity controls it. The Pentagon sets policy through procurement decisions that are not subject to public review. Companies set policy through acceptable use terms that are not externally enforced. Congress sets nothing. Courts will eventually rule on narrow legal questions — was this particular designation lawful — not on broad governance frameworks.
The EFF’s framing is the sharpest analysis available: “Privacy protections shouldn’t depend on contract negotiations between tech companies and the government — two entities with spotty track records for caring about your civil liberties.” This is correct. The current system places the most consequential technology governance decisions of the century in the hands of procurement officials and corporate counsel, mediated by courts ruling on administrative law questions. None of these actors were designed for, trained for, or accountable for the decisions they are making.
Assessment: The governance vacuum is the crisis. Everything else — Anthropic’s designation, OpenAI’s contracts, the court proceedings — is a symptom. Until Congress legislates or an international framework emerges, military AI governance will remain an improvised negotiation between actors with misaligned incentives and no external accountability. Confidence: VERIFIED. This is not a prediction. It is a description of current conditions.