OpenAI's Pentagon deal — signed hours after Anthropic was blacklisted — has split the AI industry into government-aligned and safety-principled camps, with real downstream consequences for both.
Business Insider and CNN framed the deal as a business win; NPR and Fortune examined the principle being traded away, which is closer to the real story.
X's AI community has not forgiven OpenAI for the deal's timing; a month later, the ChatGPT uninstall spike has partially reversed but the reputational fracture has not.
A month after OpenAI CEO Sam Altman announced his company's deal with what the Trump administration has renamed the Department of War, the contours of the AI defense industrial base are becoming visible. They do not resemble what the AI industry's optimists predicted when this conflict began. [1][2]
The deal that Anthropic refused and OpenAI accepted permitted "all lawful use" of AI systems by the Defense Department — a formulation whose breadth is its point. Anthropic had required contractual prohibitions on autonomous weapons targeting and mass domestic surveillance. The Pentagon interpreted those prohibitions as incompatible with national security requirements. OpenAI concluded, or was willing to accept, that the "all lawful use" formulation combined with its own stated safety policies constituted adequate protection. The market disagreed, briefly. [1][3]
ChatGPT uninstalls surged 295 percent in the days following the announcement, according to data widely circulated on X. The figure became one of the conflict's minor data points — evidence that consumers respond to corporate ethics decisions in ways that corporate strategists can model but never fully predict. A month later, the uninstall spike has partially reversed: Claude usage remains elevated above its earlier baseline, but ChatGPT's user numbers have largely stabilized. Consumer outrage has a half-life. [2][3]
The more durable consequence is structural. The AI industry has effectively sorted into two postures. OpenAI, Google, Microsoft, and Palantir have accepted or are pursuing government contracts under broad use terms. Anthropic, having been designated a supply chain risk and now fighting that designation in court, represents the alternative position: that AI companies can refuse government contract terms on safety grounds and survive the resulting exclusion from federal procurement. Whether Anthropic can survive that exclusion is partly a financial question and partly a strategic one. Financially, the exposure looks manageable: government contracts accounted for approximately 4 percent of the company's most recent reported revenue. Strategically, the signal that safety commitments disqualify a company from the defense market is harder to contain. [1][4]
The contract's value has been cited at different figures. OpenAI's own disclosures described the initial scope as up to $200 million, consistent with the existing Pentagon AI contract framework. Defense procurement analysts at Tech Insider estimated the total value, including expected extensions and scope expansion, at between $500 million and $2 billion over five years. The $950 million figure cited by this paper and others falls within the range of published estimates rather than at its midpoint; the exact figure remains classified. [4]
Altman's acknowledgment that the deal was "definitely rushed" and that "the optics don't look good" was treated as an admission by critics and as transparency by supporters. It was both. The rush was real — the deal was announced within hours of the Anthropic blacklisting, a sequencing that suggested OpenAI had been preparing the agreement in parallel with the government's action against its competitor. Altman later said he "shouldn't have rushed" it, an observation that clarifies the error without resolving it. [2][3]
OpenAI has since published a detailed account of its contract terms, emphasizing "layered protections" against use of its systems for mass surveillance of Americans and claiming the agreement contains "more guardrails than any previous agreement for classified AI use." The Pentagon has not confirmed or disputed that characterization. The agreement's classified annexes, which would contain the operational restrictions, are not public. [3]
Lawfare's post-deal analysis, widely read in the AI governance community, concluded that the episode demonstrates "the limits of procurement as governance" — that when government uses contract terms to enforce AI policy without legislation, the company that holds the line on safety gets replaced by the one that won't. The analysis does not take a position on whether OpenAI made the wrong choice. It suggests that the choice's consequences extend well beyond OpenAI and Anthropic.
-- DAVID CHEN, San Francisco