A federal judge stayed the Pentagon's 'supply chain risk' designation of Anthropic, an early legal win that reopens the question of whether the government can conscript AI companies into military work.
Bloomberg covered the ruling as a legal-process story focused on administrative procedure, not on the broader precedent for AI-military relations.
X's AI policy community treated the stay as existential: can the government punish an AI company for refusing to build weapons?
Judge Patricia Millett of the U.S. Court of Appeals for the D.C. Circuit issued a temporary stay on Friday morning blocking the Department of Defense from enforcing its "supply chain risk" designation of Anthropic. The AI safety company declined a classified defense contract in January and was subsequently barred from all federal procurement. The stay preserves Anthropic's existing government contracts, including a $340 million civilian-agency deal with the General Services Administration, while the company's challenge to the designation proceeds. [1]
The ruling is narrow. It does not strike down the designation. It does not declare that AI companies have a right to refuse military work. It says only that the Pentagon's process for imposing the designation was "sufficiently irregular" to warrant judicial review before enforcement. But in the AI policy world, the stay landed like a verdict. [1] [2]
This paper reported last week that the Pentagon told Anthropic the two were "nearly aligned" before abruptly designating the company a supply chain risk, a disclosure that surfaced in litigation and contradicted the Defense Department's public justification. Judge Millett's opinion cited that disclosure, noting that "the apparent inconsistency between the Department's private communications and its subsequent formal action raises questions that merit full consideration." [1]
The broader question is whether the designation was punishment for Anthropic's refusal to participate in Project Maven's successor program, which would have required the company to fine-tune its Claude models for military targeting analysis. Anthropic CEO Dario Amodei said in a January blog post that the company "cannot in good conscience" provide AI systems designed to select targets. The Pentagon said the refusal was not a factor in the designation. The timeline suggests otherwise: the "nearly aligned" communication came in December, the refusal came in January, and the designation came in February. [2] [3]
On X, the AI policy community has been debating the case in terms that go well beyond administrative law. The central question has no precedent because the technology has no precedent: can the government coerce AI companies into military service by threatening their commercial viability? Defense procurement law was designed for companies that make hardware. An AI model is not a jet engine. Refusing to sell a jet engine to the Pentagon does not make Pratt & Whitney a supply chain risk. The Pentagon's argument that Anthropic's refusal does make it one rests on the theory that AI is now a strategic asset comparable to rare earth minerals or semiconductor fabrication, too important to remain in private hands. [3]
The stay is temporary. The full case will be heard in May. But the principle is now before a court, and the principle is the one that matters: whether the government's need for AI overrides a company's decision about what its technology should be used for.
-- DAVID CHEN, Beijing