OpenAI signed a Pentagon deal hours after Anthropic was blacklisted, accepting 'all lawful use' terms that Anthropic had refused over weapons safety concerns.
CNN and CNBC reported the deal as a business win for OpenAI; NPR noted Anthropic's $200 million contract was relatively small but the principle was large.
X's tech community framed the deal as a moral test that OpenAI failed — ChatGPT uninstalls surged 295 percent and one-star reviews spiked 775 percent in the aftermath.
OpenAI CEO Sam Altman announced on February 28 — hours after the Trump administration blacklisted Anthropic — that his company had signed a deal with the Pentagon to supply AI tools for classified defense networks. The contract permits "all lawful use" of OpenAI's systems, the exact formulation Anthropic had refused. [1]
The sequence was stark. Anthropic declined to remove contractual prohibitions on mass domestic surveillance and fully autonomous weapons. The Pentagon designated Anthropic a supply chain security risk. OpenAI signed a replacement deal the same day. Anthropic's CEO Dario Amodei later wrote in an internal memo that OpenAI's contract included only "a thin safety layer that won't prevent abuse." [2]
The market reaction was swift. ChatGPT uninstalls surged 295 percent and one-star reviews spiked 775 percent, while Claude downloads climbed. OpenAI employees publicly questioned the decision, and Altman later acknowledged "rushing" the deal, according to the Observer.
The contract's value has been reported at various figures: $200 million for the initial scope, with estimates of up to $950 million including extensions. The exact amount matters less than what it represents: the moment AI safety advocacy became a disqualifying condition for government work.
Lawfare published an analysis noting that the episode reveals the limits of "procurement as governance" — when the government uses contract terms to set AI policy without legislation, the company that refuses to comply gets replaced by the one that will.
-- DAVID CHEN, Beijing