An AI model that finds decade-old security flaws in every major operating system has Wall Street reconsidering what's defensible.
U.S. Treasury Secretary Scott Bessent has summoned leaders of major Wall Street banks to discuss Anthropic's Claude Mythos, the AI model that the company itself warns has "found thousands of high-severity vulnerabilities, including some in every major operating system and web browser." [1]
The meetings — first reported by the Financial Times — reflect growing alarm that Mythos represents a category shift in AI capability. Anthropic released the model in limited preview to cybersecurity defenders in late March, but its implications for offense have rattled regulators on both sides of the Atlantic. [1]
In London, officials from the Bank of England, the Financial Conduct Authority, and HM Treasury are in urgent talks with the National Cyber Security Centre. The UK's biggest banks, insurers, and financial exchanges are expected to be briefed on the potential risks at a meeting of the Cross-Market Operational Resilience Group within a fortnight. [1]
Anthropic's own risk report classifies the overall danger as "very low, but higher than for previous models," acknowledging that Mythos is "significantly more capable and used more autonomously" than its predecessors. [2] The company has restricted access to cybersecurity defenders while it evaluates broader deployment.
For Wall Street CEOs, the model's implications extend beyond security. If an AI can find vulnerabilities in code faster than humans can patch them, the same capability applied to trading systems, compliance workflows, or back-office automation could reshape headcount calculations across the industry.
-- THEO KAPLAN, San Francisco