Claude Mythos Preview found thousands of zero-day vulnerabilities and Anthropic is keeping it locked.
Fortune and TechCrunch covered Project Glasswing as a defensive cybersecurity initiative.
X is split between applauding Anthropic's restraint and warning it creates a two-tier AI world.
The most capable artificial intelligence model ever built exists, and you cannot use it. This is not a marketing tease or a waitlist gambit. It is a deliberate decision by Anthropic, the San Francisco company that made it, to keep Claude Mythos Preview locked behind a consortium of twelve corporate partners and use it for a single purpose: finding holes in the world's software before someone else does.
On April 7, Anthropic announced Project Glasswing [1], and in the three days since, it has become one of the more consequential and quietly strange developments in the brief history of artificial intelligence. As yesterday's piece on AI consolidation explored, the major AI companies have been circling one another warily, guarding capabilities while jostling for position. Glasswing is something different: a company voluntarily withholding its most advanced product — not because regulators told it to, but because it decided the product was too dangerous to sell.
The twelve launch partners read like a roster of the institutions that run the digital world, among them AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks [1]. More than forty additional organizations have been granted access under non-disclosure agreements. Together, they will use Claude Mythos Preview to audit the codebases that underpin everything from cloud infrastructure to financial networks to military communications.
The model has already delivered results that are difficult to dismiss. According to Anthropic's announcement, Mythos Preview has discovered thousands of zero-day vulnerabilities — previously unknown security flaws — across major open-source and proprietary software projects [2]. The most striking finds include a twenty-seven-year-old vulnerability in OpenBSD, an operating system that has long been considered one of the most secure in existence, and a sixteen-year-old bug in FFmpeg, the multimedia framework that processes video across billions of devices.
A twenty-seven-year-old vulnerability. That is a hole that has existed since 1999, through every security audit, every penetration test, every code review conducted by some of the best programmers alive. Claude Mythos Preview found it. The implications are both reassuring and unnerving: reassuring because the vulnerability is now being patched, unnerving because it raises the question of what else is hiding in the codebases we trust.
Fortune reported that Anthropic has committed one hundred million dollars in usage credits to the initiative, along with four million dollars specifically earmarked for open-source security work [2]. The financial commitment is notable because Anthropic is not a wealthy company by Big Tech standards. It has raised approximately seven billion dollars in venture capital, but it burns through compute at a prodigious rate. Dedicating this much capacity to a non-revenue project is a strategic choice that says something about how Anthropic views its competitive position.
The company appears to be making a bet that responsible deployment — the visible, documented refusal to release its most powerful model — will generate more long-term value than a product launch. It is, in essence, using restraint as a brand strategy. Whether this is principled caution or shrewd positioning depends on how cynical you feel on any given morning.
There is a tension at the center of this story that neither Anthropic nor its partners have fully addressed. The Pentagon has previously labeled Anthropic a potential supply-chain risk, flagging concerns about the concentration of advanced AI capabilities in a single private company [3]. Glasswing does not resolve that concern. If anything, it sharpens it. Twelve companies and forty-plus organizations now depend on a model they did not build, cannot replicate, and to which they have no contractual guarantee of continued access. Anthropic could change the terms, raise the price, or restrict access at any time.
The counter-argument — and it is the one Anthropic's leadership has made in private briefings with partners — is that the alternative is worse. If Mythos Preview were released publicly, or even sold commercially, the same capabilities that find zero-day vulnerabilities could be used to exploit them. The model does not distinguish between offensive and defensive security research. The knowledge is symmetric. The access is not [4].
This asymmetry is the core innovation of Glasswing, and it is also its core vulnerability. The consortium model assumes that the twelve launch partners are trustworthy custodians of an extraordinarily powerful tool. It assumes that the forty-plus additional organizations with NDA access will not leak capabilities. It assumes that Anthropic itself will not face the kind of financial pressure that turns principles into luxuries.
The broader AI industry is watching with a mixture of admiration and suspicion. OpenAI has not commented. Google, which is both a Glasswing partner and an Anthropic competitor, finds itself in the unusual position of acknowledging that a rival has built something more capable than anything in its own portfolio — at least for this specific task. The Silicon Report noted that the arrangement effectively creates a two-tier AI world: organizations inside the consortium have access to security capabilities that everyone else lacks [3].
Four million dollars for open-source security. One hundred million in credits for corporations. Thousands of vulnerabilities found. One model that nobody outside a locked room will ever use.
It is, depending on your perspective, either the most responsible thing an AI company has ever done or the most effective monopoly play in the history of technology. Possibly both.