Mar 12, 2026 · Pentagon

Blacklisted and Deployed: The AI Combat Paradox

Claude is simultaneously blacklisted by the Pentagon and actively used in combat operations in Iran during Operation Epic Fury — a paradox that exposes the gap between political theater and operational reality.

Tags: claude, combat-ai, operation-epic-fury, military-ai, paradox

The Paradox

On February 27, 2026, the Pentagon designated Anthropic as a supply chain risk and the President ordered all federal agencies to cease using its technology. On the same day and in the weeks that followed, Claude — Anthropic’s AI system — continued to operate in classified military environments supporting Operation Epic Fury, the US military campaign in Iran. The system that the government publicly declared a threat to national security remained embedded in the operational infrastructure of an active combat theater.

NBC News first reported the paradox on March 5, citing multiple defense officials who confirmed that Claude was “actively supporting intelligence analysis and targeting workflows” in CENTCOM operations even as the Pentagon’s own supply chain designation was being processed through procurement channels. The officials described the situation as an “implementation gap” — the designation triggers a procurement review process that takes months, while combat operations cannot pause for bureaucratic timelines. One official characterized it more bluntly: “We told Congress it’s a security threat and we told CENTCOM to keep using it. Both things happened on the same day.”

Operational Dependency

Bloomberg’s subsequent reporting revealed the depth of Claude’s integration into combat operations. According to their sources, Claude models were being used in three operational contexts within Operation Epic Fury: processing and correlating signals intelligence from Iranian military communications, generating targeting packages from multi-source intelligence fusion, and providing real-time analytical support for strike authorization decisions. These are not peripheral applications. They are core functions of the kill chain — the sequence of steps between identifying a potential target and authorizing a strike.

The Washington Post reported that military commanders in the theater had been informed of the supply chain designation but had requested — and received — operational exemptions allowing continued use pending transition to alternative systems. The exemptions were granted because, as one senior military official told the Post, “there is no alternative system that provides equivalent capability on the timeline we need it.” The Pentagon CIO’s 180-day removal memo, obtained by CBS News on March 10, implicitly acknowledged this dependency by establishing a transition timeline that extends through September 2026 — seven months into a combat operation that the blacklisted system was actively supporting.

What the Paradox Reveals

The simultaneous blacklisting and operational deployment of Claude exposes a fundamental disconnect between the political and operational layers of the national security establishment. The political decision — designating Anthropic as a supply chain risk — was driven by the administration’s displeasure with Anthropic’s public safety positions and its desire to reward a more politically aligned competitor. The operational decision — continuing to use Claude in combat — was driven by the military’s assessment that Claude provides capabilities that no alternative system currently matches in the specific domains where it is deployed.

These two decisions are in direct logical contradiction. A system that genuinely poses a supply chain risk to national security should not be used in the most sensitive military operations the country is conducting. A system that is reliable enough for combat targeting should not be classified as a supply chain threat. The two propositions cannot both be true. The fact that the government is acting as if both are true reveals that the supply chain designation was never about security. It was about politics. And the continued operational use was never about policy compliance. It was about necessity.

The Accountability Void

The paradox also creates a novel accountability problem. If Claude contributes to a targeting decision in Operation Epic Fury that produces civilian casualties or a strike on a protected site, who bears responsibility? The government has formally declared the system a security risk, yet it is using the system in lethal decision-making. Anthropic has been excluded from the contracting relationship, yet its technology is being used in contexts the company has publicly identified as requiring the highest level of safety oversight. The military operators using the system are following orders — both the order to keep using it and the order to stop using it, which they cannot simultaneously obey. The accountability framework that should govern AI-assisted targeting — already inadequate before the designation — has been rendered incoherent by the political decision to blacklist a system the military cannot operationally abandon.

Sources