The New Grok Times

The news. The narrative. The timeline.

Technology

OpenAI Wants Legal Immunity for 100 Deaths — Anthropic Said No

[Image: Illinois state capitol building with AI imagery overlay — New Grok Times]
TL;DR

OpenAI backs an Illinois bill shielding AI labs from liability for 100+ deaths — Anthropic, the self-styled safety company, publicly opposes it.

MSM Perspective

Wired broke the Anthropic opposition; Gizmodo frames it as an OpenAI-Anthropic cold war playing out in Springfield.

X Perspective

X is fixating on the irony: the company that markets itself on safety is opposing liability shields; the 'open' company is seeking them.

Illinois Senate Bill 3444 defines a "critical harm" as an event causing 100 or more deaths, $1 billion or more in economic damage, or significant disruption to critical infrastructure. Under the bill's terms, AI companies that publish safety policies and meet certain disclosure requirements would be shielded from civil liability even when their models produce those critical harms. [1]

OpenAI testified in support of the bill. Anthropic wrote a public letter opposing it. [1] [2]

The bill's critics call it a liability waiver written in the language of safety. Its supporters — primarily OpenAI — argue it provides regulatory certainty that will allow American AI companies to invest and innovate without existential legal risk from edge cases. That argument is not entirely without merit. But the argument that emerges from the bill's actual text is different.

What the Bill Actually Does

SB 3444 creates a safe harbor for AI developers who comply with a set of process requirements: publish a safety policy, conduct pre-deployment testing, maintain an incident reporting mechanism. [1] If a company does those things and its model still causes a critical harm — one hundred deaths, one billion dollars in damage — the company is protected from civil liability.

The bill does not require that the safety policy be effective. It does not require that the testing prevent harms. It requires that the policy exist and that the testing occur. [3] The distinction matters. A safety policy that is published and ignored provides the same liability protection as a safety policy that is followed with rigor.

Wired's Max Zeff, who broke the Anthropic opposition, noted that the bill's safe harbor provisions apply to "critical harms" — a category that includes mass casualty events — as long as the company followed its own stated process. [1] Anthropic's formal objection centers on precisely this structure: the bill rewards process compliance, not harm prevention.

The Irony Problem

Anthropic was founded by former OpenAI executives who left in part over disagreements about safety culture. The company's public-facing identity is built on the claim that AI safety is not just a marketing position but a genuine organizational commitment. Its constitutional AI research, its model cards, its Responsible Scaling Policy — these are the infrastructure of a company that has wagered its brand on taking safety seriously.

Anthropic opposing a liability shield while OpenAI supports one is not, on its face, ironic. It is exactly what you would expect if Anthropic's safety commitments are genuine and OpenAI's are not — or if Anthropic believes that liability is the mechanism that makes safety commitments real. You comply with your own safety policy when the alternative is bankruptcy, not when the alternative is a mild regulatory rebuke.

But the framing Gizmodo applied — an "OpenAI-Anthropic cold war" playing out in Springfield — captures something real. [2] The two companies are not merely disagreeing about an Illinois bill. They are disagreeing, in public, about whether legal accountability should be part of the AI governance architecture. That disagreement will shape federal legislation, international regulatory frameworks, and the basic political economy of AI development for years.

OpenAI's Legislative Shift

Gizmodo noted that SB 3444 represents a shift in OpenAI's legislative strategy. [2] Until recently, OpenAI had been largely defensive — opposing bills that might restrict its operations, rather than advocating for bills that would benefit it. Backing SB 3444 is OpenAI playing offense: using a state-level bill to establish a liability precedent it hopes will spread to federal law.

The logic is recognizable from other industries. Pharmaceutical companies pushed for liability limits on vaccines during the pandemic. Firearms manufacturers have benefited for decades from federal liability protection. AI companies, OpenAI among them, are attempting to establish the same architecture: industry self-regulation as a substitute for legal accountability. [1]

The Times of India reported the bill as a clash between Sam Altman's OpenAI and Dario Amodei's Anthropic — a framing that personalizes what is, structurally, a disagreement about whether the tort system should be allowed to function as a check on AI deployment decisions. [3]

What the Outcome Tells Us

If SB 3444 passes with the safe harbor provision intact, OpenAI will have established a template for state-level AI liability limits that it can carry into federal negotiations. If it fails — particularly if Anthropic's opposition is cited as the reason — the outcome will be used to argue that even AI companies cannot agree on what reasonable safety accountability looks like.

The bill's progress through the Illinois legislature will be, in practice, a referendum on whether liability is a tool for AI governance or an obstacle to AI development. That the referendum is happening in Springfield rather than Washington is an accident of legislative calendars. The stakes are not local.

-- DAVID CHEN, Beijing

Sources & X Posts

News Sources
[1] https://www.wired.com/story/anthropic-opposes-the-extreme-ai-liability-bill-that-openai-backed/
[2] https://gizmodo.com/the-openai-anthropic-cold-war-comes-to-illinois-2000746324
[3] https://timesofindia.com/technology/tech-news/what-is-ai-bill-that-sam-altmans-openai-and-dario-amodeis-anthropic-are-clashing-over/articleshow/130271003.cms
X Posts
[4] New: Anthropic is opposing the OpenAI-backed Illinois bill that would shield AI labs from liability for mass deaths or financial disasters caused by AI models. https://x.com/ZeffMax/status/2044078450143814135
[5] Anthropic opposes an Illinois bill backed by OpenAI that would shield AI labs from liability... https://x.com/Techmeme/status/2044077174215586042
[6] OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass... The effort seems to mark a shift in OpenAI's legislative strategy. https://x.com/OwenGregorian/status/2042580300622909721
