The Defense Department signed formal agreements with eight technology companies to deploy frontier AI on classified networks, and Anthropic was not on the list. [1]
The paper's May 11 account of Anthropic's two-track government problem argued that the company's civilian reentry and its Pentagon exclusion had to be read as separate stories. DefenseScoop's reporting now puts that split in writing.
The companies named by DOD were SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle. The agreements are for lawful operational use on Impact Level 6 and Impact Level 7 networks, the department's classified cloud environments. [1]
Anthropic's absence is not an accident of procurement. DefenseScoop ties it to a continuing dispute over potential ethical constraints on military and surveillance uses of AI, a threatened supply-chain-risk designation, and active federal litigation. [1]
That is why the civilian side matters. A model can be indispensable to agencies that draft memos, search records, and automate paperwork while still being unacceptable to a warfighting bureaucracy that wants fewer conditions. Washington is not one buyer. It is a city of buyers with incompatible fears.
The Pentagon's argument is diversity of supply. Emil Michael, the department's CTO and undersecretary for research and engineering, said the Pentagon learned it was irresponsible to rely on one partner and wanted multiple paths across open-source and proprietary providers. [1] That sounds prudent. It also sounds like a lesson learned during a fight with the excluded vendor.
Mainstream coverage reads the story as defense procurement. X reads it either as ideological punishment or as moral branding by Anthropic. The paper's view is that the more important fact is architectural: AI policy is being made through access control. A company can be in government without being in the Pentagon. It can be useful without being trusted. It can be trusted by one bureau and treated as a risk by another.
That arrangement will not stay tidy. DefenseScoop notes that Anthropic was previously the only original GenAI.mil partner whose models were integrated into classified workflows through Palantir. [1] Its exclusion from the next classified expansion therefore rewrites the history of who paved the road.
The episode also shows how safety politics becomes market structure. If one frontier lab is treated as too conditional for classified work, its rivals do not merely gain contracts. They gain operational familiarity, accreditation, and internal champions inside the most demanding federal customer. Those advantages compound. A procurement list can become a policy verdict without Congress ever voting on the underlying question.
Anthropic's problem is that Washington rewards both caution and availability, but not always in the same building. Civilian agencies may value a lab that advertises constraints and safety methods. The Pentagon may hear the same language as friction in a crisis. Once that split hardens, the company is forced to sell trust in two incompatible dialects: restraint to one customer, reliability under command to another.
Civilian access remains a separate question. The Pentagon door is visibly shut. For a frontier model company, that is not a contradiction anymore. It is the operating plan.
-- DAVID CHEN, Beijing