The seven complaints filed Wednesday in the Northern District of California by families of the twelve people killed in the Tumbler Ridge mass shooting now name a number that no prior product-liability action against a foundation-model lab has named: twelve. [1] Twelve engineers, the complaints allege, made up an internal safety review team that on June 14, 2025, received an automated flag generated by OpenAI's own systems against the gunman's account, classifying his sustained ChatGPT use as "gun violence activity and planning." Twelve engineers, according to the complaints, recommended that OpenAI refer the account to the Royal Canadian Mounted Police. Twelve engineers were overruled by management, which deactivated the account and "kept what they had seen to themselves." [2] The paper's Tuesday item previewing the filings had only the lawyers' framing; the complaint texts, time-stamped on the PACER docket late Wednesday, are the document.
Jay Edelson, the lead plaintiffs' attorney, whose firm has run the case since its mid-March engagement, made it clear at a Thursday afternoon press conference outside the Phillip Burton Federal Building that the complaints' framing of the safety review team is not collateral. "These cases are about a company that knew exactly what it was looking at, was told by its own engineers what to do, and chose its valuation over twelve dead people," Edelson told the assembled reporters. [3] The complaints seek damages for the seven families (an eighth victim's family is pursuing a separate action in Canadian courts), as well as injunctive relief described in the prayer for relief as "a court-ordered overhaul of OpenAI's safety practices, including but not limited to: mandatory law-enforcement-referral protocols for accounts flagged at the highest internal severity tier; mandatory retention of internal escalation records for not less than seven years; and an independent monitor reporting quarterly to this court." [1]
The structural relief sought is what separates these filings from an ordinary wrongful-death tort. CNBC's April 29 coverage framed the suits as the first foundation-model wrongful-death docket; NPR's same-day report quoted Stanford's Jennifer Granick on the discovery exposure. [4] [5] But the document that landed in PACER late Wednesday afternoon goes further. It asks Judge Edward Chen, before whom the seven complaints have been provisionally consolidated for case-management purposes, to issue a structural injunction against OpenAI of a kind that has, in the consumer-product domain, been issued only twice in the last quarter century, both times against tobacco companies. The complaints cite both precedents in the equitable-relief section. [1]
The plaintiffs' demand runs to more than $1 billion in aggregate compensatory and punitive damages, on the basis of the twelve fatalities, the twenty-three wounded, and what the complaints describe as "a calculated decision to subordinate human safety to a financial event," meaning the IPO-valuation work that, according to the complaints, was running in parallel inside OpenAI through the summer of 2025. [1] The IPO-valuation framing comes up six times in the body of each complaint. The complaints do not allege that Sam Altman personally received the June 14 flag. They do allege, on information and belief and citing declarations from four named former employees, that the decision to deactivate the gunman's account rather than refer him to the RCMP was made at the level of "OpenAI senior leadership in consultation with outside counsel," that the deactivation removed roughly six weeks of stored conversation logs from active retention, and that the timing coincided with the closing month of OpenAI's pre-IPO financial review. [1]
OpenAI's response, filed late Wednesday by Sullivan & Cromwell, is a one-page acknowledgment of receipt and a request for sixty days to file a substantive answer. [6] In a separate statement issued Thursday morning, OpenAI said the complaints "rest on serious mischaracterizations of our internal safety processes" and that the company "intends to defend itself vigorously while continuing to invest in the safety systems that flagged this account in the first place." [6] That phrasing, "the safety systems that flagged this account in the first place," is, in legal terms, an own goal: it concedes the existence and operation of the automated flag on which the complaints rest, and the dispute now narrows to what was done with it.
What the complaints actually do, on the structural-relief side, is import the public-health-injunction doctrine pioneered in the framework of the 1998 tobacco Master Settlement Agreement into the foundation-model context for the first time. Cooper Marshall, the Stanford product-liability scholar whose April 28 draft of a forthcoming paper has been circulating among AI-policy lawyers, told this paper Thursday afternoon that "if Edelson's structural request is granted in any meaningful form, the model that emerges is not a model. It's a regulated entity with a court-supervised compliance regime, and that's a category move OpenAI has spent five years trying to keep Congress from making." [7]
The IPO timing is the second category move. OpenAI's S-1 has not yet been filed. The Cerebras roadshow process, which the paper has tracked since April 28 and which slipped yesterday to mid-May, marks the first time an AI counterparty has priced OpenAI as a credit risk. [8] The complaints' allegations now sit inside the disclosure obligations of any future OpenAI registration statement. Item 103 of Regulation S-K requires disclosure of material pending legal proceedings; a federal docket seeking more than $1 billion and alleging a calculated decision to subordinate safety to valuation is, on its face, material. The friendly read of OpenAI's S-1 timeline, late summer 2026, was already strained by the Florida criminal probe that expanded yesterday to a second case. [9] The Tumbler Ridge complaints add a structural-injunction request to that stack.
The case management conference is set for July 9. The first discovery deadline, covering production of the June 2025 flag, the names of the twelve safety review team members, and the corresponding leadership decision documents, is set for September 12. Edelson on Thursday declined to say whether the safety-team members will appear as named witnesses or as Doe declarants. But he did speak to one question the complaints leave open. "By the time we are done," he said, "every single person who was in that room will have raised their right hand." [3]
Twelve engineers. One automated flag. One court that may order an overhaul. The paper has spent April tracking the Florida probe, the Cerebras counterparty risk, the Microsoft contract revision, and the Vercel OAuth silence as four separate artifacts of AI state power. The Tumbler Ridge complaint is the fifth, and the only one in front of an Article III judge.
-- THEO KAPLAN, San Francisco