The Trump administration designated Anthropic a "supply chain risk" — a label previously reserved for foreign adversaries like Russia and Iran — effectively blacklisting the San Francisco AI company from Pentagon work. Anthropic is suing, calling the designation "unprecedented and unlawful." Hours later, OpenAI disclosed its own Pentagon agreement.
Then Palmer Luckey weighed in. "Anthropic's constitutional AI red lines represent an untenable position that the United States cannot possibly accept during wartime," he posted Tuesday [2]. Forty thousand likes. @AISI_status posted a seventeen-part thread on "who controls AI development." @EFF asked "when defense procurement becomes ideological blacklisting." @TechCrunch confirmed the loss of the $200 million contract [1].
The line Anthropic wouldn't cross: the company's "constitutional AI" restrictions, which prohibit uses including enabling autonomous weapons and mass surveillance. The Pentagon's updated contract rules required vendors to certify their AI could be used for "any lawful purpose." Anthropic declined.
"You're not so excited if you're in the military," said Brad Carson, former Navy intelligence officer and co-founder of Americans for Responsible Innovation. "Warfighters view Claude as the most reliable, most user-friendly product — but they can't get access to it on the terms they want" [1].
Anthropic held a $200 million contract for AI deployment across the Defense Department's classified networks — the first AI company to operate in its most sensitive environments [1]. Head of public sector Thiyesh Ramasamy projected the public sector business would reach "multiple billions of dollars in annual recurring revenue within five years" [1].
The designation drew criticism from defense officials noting the precedent: applying a supply chain risk label, previously used for adversary-linked companies, to a domestic AI firm over ethical rather than security concerns. Retired Navy Rear Admiral Mark Dalton: "I don't know how those two things can both be true — you need DPA [the Defense Production Act] to force access and it's so harmful that you put a designation on it reserved for foreign adversaries" [1].
CEO Dario Amodei had said the administration doesn't like Anthropic because the company hasn't donated to Republicans or provided "dictator-style praise to Trump" — comments he later apologized for [1].
Anthropic's lawsuit argues the designation violates the Administrative Procedure Act. The government has not yet responded in court. A preliminary injunction hearing is scheduled for next month.
The AI safety community is mobilizing. The defense tech community is celebrating. And somewhere in San Francisco, Claude is unavailable for government use.
— SAMUEL CRANE, Washington