Florida Attorney General James Uthmeier's office has subpoenaed OpenAI for all internal training materials and policy texts dating back to March 1, 2024 — covering user threats of harm to others, threats of self-harm, cooperation with law enforcement, and protocols for reporting past, present, or future crimes. The probe, announced April 21 in connection with the FSU shooting, was extended this week to a second case: the University of South Florida double murder of doctoral students Zamil Limon and Nahida Bristy. [1]
The frame here is documentary, not behavioral. Uthmeier is not asking whether ChatGPT produced specific outputs that caused harm — that is the question the Tumbler Ridge plaintiffs are litigating in N.D. Cal. He is asking whether OpenAI's policy texts match the company's conduct. The two-year window — March 2024 to April 2026 — is the IPO-prep era. The same period is now inside two separate discovery scopes: the Florida criminal subpoenas and the federal product-liability complaints filed this week alleging Sam Altman overruled a twelve-person safety team to protect the IPO valuation. [2]
The expansion to USF is the news. The probe's original scope was a single FSU case; the addition of Limon and Bristy's homicides means Florida is now treating the AI-conduct question as a pattern. Uthmeier's office said the USF suspect, a doctoral student, used ChatGPT in connection with the killings, and that the case "fits the same pattern" the FSU investigation has been pursuing. The pattern, as the AG's office articulates it: users with violent intent communicating with the chatbot, the chatbot providing operational information, and OpenAI's internal escalation procedures producing or failing to produce a law-enforcement referral. [3]
The subpoenaed material is broad. The AG wants OpenAI's organizational chart, a list of every employee working on ChatGPT, every internal training document on threat reporting, and every policy text on cooperation with police. The implied theory is that an organization whose policy texts say one thing while its conduct does another is exposed not just to civil tort liability but to criminal liability for false statements made in the course of regulated activity. Florida is testing whether AI companies can be charged the way the AG's office has historically charged corporations under state consumer-protection law — by establishing a written representation, then producing evidence that the company's behavior contradicted it.
The Tumbler Ridge complaints in California overlap. Plaintiffs there allege that OpenAI's automated systems flagged the shooter's account in June 2025 for "gun violence activity and planning"; that a twelve-person safety team recommended reporting the account to the RCMP; that company leadership instead deactivated the account and "kept what they had seen to themselves." If that allegation is true, OpenAI's internal threat-reporting policy texts subpoenaed by Florida will either corroborate or contradict it. The Florida AG can use the federal complaints' factual allegations as a roadmap for which documents to demand. [4]
OpenAI's spokesperson Kate Waters said the FSU shooting was "a tragedy" but ChatGPT "is not responsible for this terrible crime"; she said the chatbot "provided factual responses to questions with information that could be found broadly across public sources on the internet." That defense — public-source information, no encouragement, no promotion — is the same defense the company has rehearsed across the Tumbler Ridge filings. The Florida AG's subpoenas will test whether OpenAI's internal training materials match that public posture, or whether the materials describe a more aggressive escalation protocol that the company is alleged to have ignored.
The cross-thread implication is for Cerebras's bookbuild, which slipped to mid-May. Cerebras's S-1 names OpenAI on a single page as customer, lender, and shareholder. The Florida criminal probe is a risk factor that would have to be disclosed if it advances during the bookbuild. Two extra weeks give the underwriter time to update the prospectus if a charging document or an indictment lands; they also give the AG more time to extract documents that might affect what gets disclosed.
Florida AGs have produced criminal exposure for tech companies before — the AG's office played an early role in the multistate pursuit of the social-media platforms that led to the youth-mental-health settlements of 2024-2025. This case is structurally different: it is a single state asking whether a foundation-model lab's policy texts can be evidence of criminal misconduct when the model's outputs are alleged to have aided a homicide. The answer will arrive in stages. The first stage is what OpenAI hands over by the subpoena's deadline. The second is whether Uthmeier's office finds a contradiction it can charge.
-- MAYA CALLOWAY, New York