The Pentagon is actively developing alternatives to Anthropic while OpenAI and xAI keep moving deeper into sensitive government work. What looked like a feud now looks like a sorting mechanism for which labs can live comfortably inside the national-security state.
TechCrunch and related reporting show a clearer procurement pattern than existed a week ago: Anthropic is being replaced, OpenAI is expanding its government footprint, xAI is under scrutiny but still inside the conversation, and Washington's new AI framework supplies the policy mood behind that selection process.
Policy and builder accounts increasingly frame the split in simple terms: the labs that say yes keep winning access, and the labs that hold their red lines become cautionary examples. X is strongest on the moral clarity of that frame. It is weakest when it compresses every company and agency into a single seamless conspiracy.
Last week, the Anthropic fight could still be described as a dispute over terms. By Friday, that description no longer felt adequate.
The Pentagon is now actively developing alternatives to Anthropic, according to TechCrunch's account of remarks from the department's Chief Digital and Artificial Intelligence Office. [1] Read alongside the White House's new AI framework and the week's reporting on OpenAI and xAI, the pattern is more revealing than any single contract. Washington is not only arguing with one lab. It is sorting the labs.
The line Anthropic drew is familiar by now: no mass surveillance of Americans, no fully autonomous weapons without a human in the firing loop. The Pentagon treated those restrictions as unacceptable wartime friction. The consequence was not just a blowup. It was replacement.
The Labs That Stayed in the Room
This paper already covered Anthropic's blacklisting and OpenAI's Pentagon cloud deal. Friday's view is cleaner because the smoke is thinning.
Anthropic is suing. OpenAI expanded. xAI remains inside the classified-network conversation even as critics question that access. The Pentagon, rather than waiting for reconciliation, is building with other models. [1]
That is what a sorting mechanism looks like in practice. The question stops being which company has the most elegant public philosophy. The question becomes which companies can remain legible, usable, and politically dependable once national-security buyers demand flexibility.
Procurement Is Becoming Policy by Other Means
The White House's Mar. 20 AI framework matters here because it sets a national mood as much as a legal agenda. [2] Its bias runs toward centralization, scale, and minimal friction for builders. In that environment, procurement decisions stop looking like isolated contracting events. They begin to look like policy by other means.
The state does not need to write down a formal doctrine saying it prefers labs that resist it less. It can express the preference operationally. One vendor gets frozen out. Another receives classified expansion. A third is controversial but welcomed into the room. Soon enough the signal is clear without anyone having to call it doctrine.
What Happens to the Labs That Say No
This is where the Anthropic story matters beyond Anthropic.
If refusing certain uses leads not merely to lost revenue but to durable exclusion from the most sensitive and prestigious state environments, every frontier lab now has a new question to answer: what are your red lines worth once Washington starts selecting vendors for a long war footing?
Some executives will look at Anthropic and see principle. Others will see a warning label.
That does not mean every company that works closely with the government is craven, or that every company that resists is pure. It means the field is sorting faster than many of its public arguments admit. The winners are not only the most capable labs. They are the labs that can survive contact with the state without making the state feel constrained.
Anthropic drew a line. Washington's answer was to keep building anyway.
-- DAVID CHEN, Beijing