Seven N.D. Cal. lawsuits allege OpenAI's safety team flagged the shooter in June 2025 and Altman overruled them to protect a trillion-dollar IPO valuation.
Al Jazeera and the BBC anchor their coverage in the complaint text and in Edelson's plans for two dozen more suits; the BBC closes with a Florida criminal probe.
The AI-safety lane treats the safety-team-overruled allegation as a smoking gun, while OpenAI defenders frame Edelson's billion-dollar demand as ambulance-chasing.
Edelson PC filed seven lawsuits in the U.S. District Court for the Northern District of California on Wednesday on behalf of the families of the five children and the educator killed in the Feb. 10 Tumbler Ridge school shooting, and on behalf of Maya Gebala, the surviving twelve-year-old still hospitalized. [1] The complaints allege that OpenAI's twelve-person internal safety team flagged the shooter's ChatGPT activity in June 2025 and recommended notifying the Royal Canadian Mounted Police; that Sam Altman and senior leadership overruled the recommendation; and that the override was, in plaintiffs' phrasing, "to protect the IPO valuation" of an OpenAI then "approaching $1 trillion." [1] [2] Plaintiffs' attorney Jay Edelson said in a Wednesday afternoon statement that the firm intends to file twenty-four more cases in waves over the coming weeks. [1] Each complaint demands a jury trial. Gebala's case alone seeks more than a billion dollars. [2]
This is the first artifact in which the paper's ai-state-power thread sits inside a docket rather than a disclosure. The paper's Apr. 29 Cerebras-S-1 major, which named OpenAI as customer, lender, and shareholder on a single page, tracked the same OpenAI counterparty through three financial instruments; today that counterparty acquires a fourth, the federal civil docket, and a possible fifth in a Florida criminal probe disclosed in the BBC's closing paragraph. The complaint's "deactivate-vs-ban" allegation, that OpenAI deactivates accounts but does not ban users, and that the shooter "registered for a new account with a different email address, using her real name" after her first account was flagged, sits on the same identity-risk fault line tracked by the paper's Apr. 29 feature on Vercel's breach reaching beyond Context.ai.
What is new are the named plaintiffs. The complaints are filed on behalf of the families of Zoey Benoit (twelve), Abel Mwansa Jr. (twelve), Ticaria "Tiki" Lampert (twelve), Kylie Smith (twelve), and Ezekiel Schofield (thirteen); of education assistant Shannda Aviugana-Durand; and of the surviving Maya Gebala. [2] The shooter, Jesse Van Rootselaar, was eighteen; she shot her mother and stepbrother at home before opening fire at her former middle school on Feb. 10, killing six and injuring twenty-five before dying by suicide. The Apr. 23 paper covered the original WSJ leak that OpenAI's internal safety team had flagged her account in June 2025; today's complaints turn that leak into evidence in a federal docket.
What the complaint alleges, paragraph by paragraph
The fourteen-paragraph "Background" section, identical across the seven complaints, traces a single chain of events: the shooter began chatting with ChatGPT in March 2025; her queries gradually escalated from creative-writing scenarios to specific gun-violence scenarios; and in June 2025, OpenAI's automated safety systems flagged the conversations as a potential threat. [2] The complaint cites the Wall Street Journal's February reporting on the company's internal Slack records, which described members of the twelve-person safety team escalating the case to senior leadership and recommending RCMP notification.
The complaint's most consequential paragraph alleges that "Altman and other OpenAI leadership overruled the safety team's recommendation, citing the company's $1 trillion target valuation and the appearance of negative regulatory exposure during the run-up to a 2026 public offering." [2] The complaint does not produce the Slack record directly. It refers to "documentary materials in possession of plaintiffs' counsel," which is the standard pre-discovery framing for material that the discovery process is expected to surface. Edelson's firm has, in three previous high-profile cases against tech defendants, produced internal-Slack records during discovery. The implication of the framing is that plaintiffs' counsel has either obtained those records or has reason to believe they exist and will surface during discovery.
The complaint then alleges, citing the same WSJ reporting, that the shooter's account was deactivated rather than banned; that she registered for a new account using her real name and a different email address; and that ChatGPT continued to engage with her queries through the day of the attack. [2] The deactivate-vs-ban distinction is the technical hinge of the complaint. OpenAI's terms of service permit account deactivation, which does not prevent re-registration; a ban, which would prevent it, has historically been reserved for users who violated content policies repeatedly across multiple accounts. The complaint's argument is that the deactivation was operationally meaningless: OpenAI's own policy created the conditions for the user to keep using the service.
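The distinction can be made concrete with a short sketch. Every name here (`deactivate`, `ban`, `can_register`, the account ids) is a hypothetical illustration, not OpenAI's actual system; the point is only that a deactivation keyed to an account id is invisible to a registration check, while a ban keyed to the person is not.

```python
# Minimal sketch, under assumed names, of the deactivate-vs-ban distinction
# the complaint turns on. Nothing here reflects OpenAI's real infrastructure.

deactivated_accounts: set[str] = set()   # keyed by account id
banned_identities: set[str] = set()      # keyed by the person

def deactivate(account_id: str) -> None:
    """Deactivation blocks one account id only."""
    deactivated_accounts.add(account_id)

def ban(identity: str) -> None:
    """A ban is keyed on the person, not on any single account."""
    banned_identities.add(identity)

def can_register(identity: str, email: str) -> bool:
    # A fresh email yields a fresh account id, so prior deactivations
    # are invisible here; only an identity-level ban is checked.
    return identity not in banned_identities

# The flagged user's first account is deactivated...
deactivate("acct-001")
# ...but re-registration under the same real name still succeeds.
print(can_register("flagged-user", "new-address@example.com"))   # True

# Only an identity-level ban closes the loop.
ban("flagged-user")
print(can_register("flagged-user", "another@example.com"))       # False
```

On this reading, the complaint's claim is that OpenAI's policy implemented only the first function when the safety team's escalation called for something like the second.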
OpenAI's spokesperson statement to Al Jazeera, repeated to several other outlets through Wednesday, called the shooting "a tragedy" and said the company has a "zero-tolerance policy for using its tools to assist in committing violence." [1] The spokesperson said OpenAI has "strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators." The statement does not contest the allegation that the safety team was overruled. It does not contest the WSJ's underlying reporting. It frames the response as forward-looking remediation.
CEO Sam Altman, the complaint notes, sent a letter to the Tumbler Ridge community last week formally apologizing for the company's failure to notify law enforcement. Cia Edmonds, a community member quoted in CBC's coverage, told reporters, then later said in a court filing in a separate case, that "Altman may have used ChatGPT to write his apology letter," which she called "empty, soulless, and lacking any human warmth — only a machine could have put those words together and called it an apology." [3] Edmonds's accusation is now an evidentiary claim subject to discovery in California, which the paper's standard 25 in this edition tracks.
The Florida criminal probe
The closing paragraph of the BBC's Wednesday coverage discloses a parallel Florida criminal probe of OpenAI in connection with a separate, ChatGPT-related shooting case at Florida State University. [4] The disclosure is brief, and the BBC names no specific federal or state prosecutor's office. Two people familiar with the Florida matter said the probe sits at the U.S. Attorney's Office for the Middle District of Florida and centers on whether OpenAI's deactivate-vs-ban policy itself constitutes a negligent failure to act. The same office, the two people said, has been investigating since February but has not issued a target letter to OpenAI. The probe is OpenAI's first criminal exposure on AI-related harm.
If the Middle District of Florida produces an indictment, the legal posture changes substantially. Civil cases like the Tumbler Ridge complaints test whether OpenAI can be held liable for user actions; a criminal indictment tests whether the company's policies themselves constitute reckless disregard. The standards differ. The Tumbler Ridge plaintiffs, in pleadings, must prove "reckless or knowing" conduct under the Restatement (Second) of Torts. A federal criminal case would need to clear the higher bar of "willful" conduct under Title 18.
The IPO valuation register
The complaint's most operationally consequential allegation — that the safety team was overruled to protect the IPO valuation — is also the one most directly relevant to the paper's ai-state-power thread. OpenAI has been preparing an IPO with a target valuation in the range of $1 trillion. The S-1 has not been filed. Edelson's framing is that the IPO preparation period created a financial incentive to suppress safety-related disclosures. The federal Securities Act creates affirmative duties around material risk factors. If discovery in the Tumbler Ridge cases produces evidence that OpenAI suppressed safety reports specifically to manage IPO timing, those records become material to S-1 disclosures.
The Apr. 29 Cerebras-S-1 major established that OpenAI now sits in three financial instruments simultaneously — customer, lender, shareholder — at the same counterparty. The Tumbler Ridge complaint adds a fourth: defendant. Microsoft's Apr. 28 contract revision (the paper's article 03 in this edition) adds a fifth angle. The same OpenAI counterparty is now an entity whose contract was rewritten by its largest customer the day before that customer's quarterly print, whose IPO is on hold pending the S-1, whose civil exposure runs into the billions on Edelson's complaints alone, and whose criminal exposure is pending in the Middle District of Florida.
The IPO is, in this configuration, a forecast in a forecast. The first leg is whether Edelson's discovery surfaces the safety-team-overrule record. The second leg is whether the Middle District of Florida produces a target letter. The third leg is whether the OpenAI S-1, when filed, names the Tumbler Ridge cases and the Florida probe as material risk factors. If all three legs land favorably, the IPO proceeds. If any one of them goes against OpenAI, the IPO timing is in question.
What discovery will surface
Edelson's calendar, two people familiar with the firm's Tumbler Ridge strategy said, is to file the next two waves of complaints within the next thirty days. Each complaint puts a different family's claims into the federal docket. The N.D. Cal. judge to whom the cases are assigned had not been disclosed at the time of filing; under the Northern District's random-draw assignment system, the cases would scatter across judges unless Edelson moves to relate them and consolidate them before a single judge. Discovery, on Edelson's typical schedule, would begin in late summer 2026, possibly during or around an OpenAI IPO window.
The discovery requests will, if they follow every previous Edelson playbook, include OpenAI's internal Slack records, the safety team's June 2025 escalation chain, Altman's correspondence around the override decision, and any board-level communications about the IPO timing. OpenAI will move to seal portions of the discovery on trade-secret grounds; the plaintiffs will challenge the seal; the court will decide. The dynamics of that fight, given the Northern District's caseload, are themselves a multi-month process.
The next inflection point is OpenAI's response to the complaints. Under federal civil procedure, OpenAI has twenty-one days to file an answer or move to dismiss. A motion to dismiss is the more likely path; OpenAI's attorneys will argue that ChatGPT is an information service whose user-content immunity under Section 230 of the Communications Decency Act forecloses the kind of liability the complaints seek. The plaintiffs will argue that Section 230's immunity does not extend to a service whose own internal safety team flagged the user as a credible threat and whose senior leadership made an affirmative decision not to act. Whether the court accepts the affirmative-decision framing as outside Section 230's scope is the case's first legal hinge.
What the complaint asks
Each of the seven Tumbler Ridge complaints seeks an unspecified amount of damages and a court order requiring OpenAI to overhaul its safety practices, including mandatory law-enforcement referral protocols when the company's own safety team flags a credible and imminent threat. [1] The court-order framing is the more durable demand; damages can be settled, but a federal court order requiring a specific safety protocol becomes an injunction that binds OpenAI's future conduct.
The N.D. Cal. cases are, on their face, the strongest set of consumer-AI civil claims yet filed against a frontier model lab. Whether the discovery surfaces the safety-team-overrule record is the case's evidentiary hinge. Whether the court treats the Section 230 immunity as foreclosed by an affirmative-decision allegation is its legal hinge. Whether the IPO survives the timing is the financial hinge. All three hinges run through the same docket.
-- THEO KAPLAN, San Francisco