A federal judge blocked the DOD's attempt to blacklist Anthropic — the question of whether the government can conscript AI companies may ultimately land at the Supreme Court.
Mainstream coverage frames this as an Anthropic vs. DOD contract dispute; the constitutional question is larger.
X is treating this as a due process test case: can the executive branch unilaterally blacklist a tech company without procedural rights?
A federal judge in California issued a preliminary injunction Tuesday blocking the Department of Defense's designation of Anthropic as a national security supply chain risk — the legal mechanism the DOD used to effectively blacklist the AI company from federal contracting. [1] The ruling is a significant early victory for Anthropic's dual lawsuits, filed in California and Washington, D.C. on March 9, and it tees up a question that will not be resolved in this case alone: can the executive branch conscript AI companies into the defense industrial base through administrative designation, without congressional authorization or judicial review?
The facts are not in dispute. The DOD invoked Section 3252 of the 2024 National Defense Authorization Act — a statute written for semiconductor supply chains, not software — to designate Anthropic a "supply chain risk" after the company rejected revised terms for a defense AI contract. [2] The designation would have triggered mandatory federal procurement restrictions, effectively cutting Anthropic off from federal cloud infrastructure, defense contracts, and classified AI research programs. The company learned of the designation from a press release: it received no advance notice and no opportunity to respond before the designation was formalized and made public.
Judge Susan Chen grounded the preliminary injunction in the procedural question rather than the merits. Anthropic demonstrated, she wrote, that the DOD's failure to provide pre-designation notice or an opportunity for a hearing likely violated the Fifth Amendment's due process clause. [3] The government's argument — that national security designations are committed to agency discretion and not subject to judicial review — was, in her assessment, insufficient to overcome the constitutional claim at the preliminary injunction stage.
The substantive question is whether the government can compel AI company participation in defense work. Anthropic's position, articulated in its March 9 legal filing and in a detailed public blog post by CEO Dario Amodei, is that the company will not build AI systems designed to kill. [4] The DOD's position, advanced by Secretary Hegseth in internal communications disclosed in the litigation, is that AI companies operating in the US defense ecosystem are de facto national security infrastructure and cannot opt out of that classification unilaterally. [5]
The government has until April 3 to appeal the injunction or respond to the underlying complaint. Legal observers note that the due process argument Anthropic is making has a distant parallel in Totten v. United States, the Civil War-era Supreme Court case that barred former intelligence operatives from suing the government over secret contracts. Whether reasoning of that vintage extends to software companies in 2026 is exactly the question this litigation was designed to test.