
"We Will Not Be Divided" — Open Letter from Google and OpenAI Employees

An open letter signed by 573 Google employees and 93 OpenAI employees expressing solidarity with Anthropic and opposing the weaponization of government procurement against AI safety commitments.

The Letter

On March 8, 2026, a group of 573 Google employees and 93 OpenAI employees published an open letter titled “We Will Not Be Divided” in response to the Pentagon’s supply chain risk designation of Anthropic and the simultaneous award of a major military AI contract to OpenAI. The letter was published on a dedicated website and submitted the same day to the Senate Armed Services Committee and the House Science Committee.

The letter states: “We are AI researchers, engineers, and safety professionals at Google and OpenAI. We compete with Anthropic in the marketplace. We are writing because the government’s attempt to punish a competitor for publicly articulating safety commitments threatens every person who works on AI safety at every company — including our own.” The signatories explicitly reject the framing that Anthropic’s blacklisting benefits their employers, arguing that “a government that punishes one company for saying ‘these are the lines we won’t cross’ is telling every company: never draw lines.”

The OpenAI Signatories

The 93 OpenAI signatures are particularly significant because OpenAI is the direct commercial beneficiary of Anthropic’s exclusion. The letter acknowledges this tension directly: “Some of us work at the company that was awarded the contract Anthropic lost. We are not confused about the implications. If our employer’s government access depends on a competitor being punished for transparency, then our employer’s government access is built on a foundation that will eventually be used against us too.”

Multiple signatories are senior technical staff, including members of OpenAI’s own safety teams. Several told reporters they signed despite concerns about internal repercussions because “the precedent is more dangerous than the discomfort.” OpenAI’s official response was measured: the company stated it “respects the right of its employees to express their views” but declined to comment on the letter’s substance.

Industry Impact

The letter catalyzed a broader industry response. Within 48 hours, similar statements emerged from AI researchers at Meta, DeepMind, and several academic institutions. The ACM issued a statement expressing concern about “the use of procurement authority to penalize responsible disclosure practices in the technology sector.” The letter’s framing — that the government was attempting to divide the AI industry against itself by rewarding silence and punishing transparency — became the dominant narrative in technology media coverage of the crisis.

The “We Will Not Be Divided” framing was deliberately chosen to counter what signatories described as a “divide and conquer” strategy: blacklist the company that speaks publicly about safety limits, reward the company that negotiates limits privately, and thereby teach the entire industry that public safety commitments are commercially dangerous. The letter argues this produces worse outcomes for national security, not better, because it eliminates the public accountability mechanisms that prevent AI companies from quietly abandoning safety commitments when commercially convenient.
