OpenAI took the $950M Pentagon contract Anthropic refused — and the Iran war is sorting Silicon Valley into willing and otherwise.
MSM covers this as a business story about contracts; the real story is an AI defense ecosystem forming in real time.
X is mapping the AI defense industrial base taking shape — and asking who decided OpenAI should be the preferred provider.
The Iran war has produced an unexpected institutional structure: a de facto AI defense industrial base, assembled through a combination of contracts, rejections, and administrative pressure. OpenAI, Anthropic, and Google are not in the same position. The war is sorting them.
OpenAI signed a $950 million classified AI contract with the Department of Defense in February, taking over work that Anthropic had declined under terms the company said would require building weapons-targeting systems. [1] The deal was announced two days after Anthropic publicly rejected the revised contract terms. Sources familiar with the negotiations say OpenAI's willingness to accept the defense framing — including language committing the company to "defense-relevant AI development" — was decisive in the award. [2]
Anthropic's response was not quiet acceptance. The company launched the Anthropic Institute this week, led by co-founder Jack Clark and dedicated to research on AI safety, economics, and security. [3] The timing is a signal: Anthropic is building institutional legitimacy as a research organization precisely to rebut the government's framing of it as a defense contractor. CEO Dario Amodei sent an internal memo, portions of which were disclosed, calling the OpenAI deal "safety theater" that the Trump administration accepted because it preferred a compliant partner. [4]
Google's position is more opaque, which is itself a position. The company's Project Maven defense AI work sparked internal protest in 2018 and led Google to publish its AI ethics principles. That history appears to have informed a more cautious approach: Google has not signed a comparable classified contract, has not been designated a supply chain risk, and has maintained public silence on the Anthropic-Pentagon dispute. [5] One reading is that Google calculated it could remain outside the defense AI ecosystem and preserve its research brand. Another is that the DOD calculated Google was too large and too embedded in federal cloud infrastructure to be worth a public fight.
The structural question is whether these three companies are competitors in a market or nodes in an emerging state apparatus. OpenAI's acceptance of the defense contract, Anthropic's public rejection and institutional counter-move, and Google's strategic silence suggest three different theories of what AI companies are for — and the war is forcing a resolution that no CEO would have chosen in peacetime.