The two leading AI labs are fighting over whether companies should be liable when their models enable mass casualties.
Wired broke the news of Anthropic's opposition, framing the clash as a proxy war over who sets the national precedent for AI safety regulation.
AI safety researchers on X are calling SB 3444 the most important AI bill that no one outside Springfield is paying attention to.
Illinois Senate Bill 3444 would shield frontier AI developers from liability for "critical harms" — including mass casualties and large-scale financial disasters — as long as the company published its own safety framework. OpenAI is backing it. Anthropic is lobbying against it. The two most prominent AI safety labs in the world have chosen opposite sides of the most consequential state AI bill in America. [1]
The legislation, introduced in February, carves out a liability exemption: developers who create and publish a safety plan cannot be sued for harms caused by their models, even catastrophic ones, provided they followed their own procedures. Critics, Anthropic among them, argue this amounts to self-regulation — the company writes the rules, the company grades itself, and the public bears the risk. [2]
Anthropic is backing a competing bill, SB 3261, which would require public safety disclosures and independent auditing while preserving existing tort liability. The split between the two companies is not academic. Illinois could set the template for federal legislation, and whichever framework wins in Springfield is likely to be copied.
On X, AI safety researchers have been raising alarms for days. The concern is specific: SB 3444's self-certification model creates a legal architecture in which the incentive is to write a safety plan that protects the company, not the public. OpenAI's support for the bill has drawn pointed comparisons to the company's stated mission of safe AI development.
The legislature has not scheduled a vote.
-- ANNA WEBER, Berlin