Bessent and Powell called Goldman, Morgan Stanley, and Citi CEOs to warn them about Anthropic's Mythos model and its zero-day capabilities.
Bloomberg and the Financial Times covered the emergency meeting straight, emphasizing the model's zero-day discovery capabilities.
X is torn between treating Mythos as proof AI safety fears are real and accusing Anthropic of manufacturing a crisis for regulatory advantage.
On Friday afternoon, after markets closed, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an emergency meeting with the chief executives of Goldman Sachs, Morgan Stanley, Citigroup, Bank of America, and Wells Fargo. The subject was not interest rates, not banking regulation, not the war. It was an artificial intelligence model that its own creator refuses to release to the public. [1]
The model is Claude Mythos, built by Anthropic — the same company that raised $30 billion last week on the premise that safety is its competitive moat. Mythos can autonomously discover and exploit zero-day vulnerabilities across every major operating system and web browser. It found thousands of them, many critical, without being explicitly trained for offensive cybersecurity. The capabilities emerged as a downstream effect of gains in code generation, reasoning, and autonomous action. Anthropic says the model could compromise a browser so that a malicious website reads sensitive data from another site — a victim's bank account, for instance. [1]
The meeting at Treasury headquarters reportedly covered the financial sector's exposure to AI-discovered vulnerabilities. BlackRock's Larry Fink and Apollo's Marc Rowan also attended. JPMorgan's Jamie Dimon was invited but could not make it. The Bank of Canada held a parallel session with Canadian financial institutions on the same day. [1]
Anthropic has not released Mythos publicly. Instead, it launched Project Glasswing, a cybersecurity initiative that gives restricted access to roughly 40 organizations — including Amazon, Apple, Microsoft, Cisco, CrowdStrike, and JPMorgan Chase — so they can use the model to identify and patch their own vulnerabilities before attackers find them. Anthropic is providing approximately $100 million in resources to support the effort. [1]
The company's Alignment Risk Update, published April 7, assessed Mythos's overall risk level as "very low, but higher than for previous models." It identified six risk pathways, including self-exfiltration, backdoor insertion into code, and sandbagging — deliberately underperforming on safety benchmarks. The language was careful. The implications were not. An AI model that finds exploit chains faster than any human team can patch them changes the economics of cybersecurity in a way that regulators barely have vocabulary for. [1]
The reaction split predictably. David Sacks, the former White House AI adviser, accused Anthropic of deploying "doomsday warnings" as a strategy for "regulatory capture" — manufacturing a crisis to make itself indispensable to the government's response. The accusation is not without basis. Anthropic's $183 billion valuation depends in part on being perceived as the responsible AI company, the one governments trust. Every alarm it raises about its own models reinforces that brand. [1]
But the vulnerability data is real. Anthropic's published disclosures describe exploit chains that collapse the window between discovery and exploitation from days to hours. "This work is too important and too urgent to do alone," said Anthony Grieco, Cisco's chief security and trust officer. Microsoft Canada's John O'Brien called the Anthropic reports "a wake-up call." [1]
The paradox deepens. Anthropic builds the model that terrifies the financial system, then sells itself as the cure. It raises $30 billion on a safety narrative, then produces a model dangerous enough to warrant a joint Treasury-Fed summit. The safety-as-moat thesis requires the danger to be real. And Friday's meeting suggests that the people who run America's largest banks believe it is. [1]
-- DAVID CHEN, Beijing