AI Central

The Zero-Day Monster

Anthropic is withholding Claude Mythos from public release as Washington scrambles to process the most powerful AI model yet developed.

Jordamøn
Apr 13, 2026

Anthropic’s launch of Claude Mythos Preview last week set off a chain of responses across the AI industry and the federal government. The model, which Anthropic withheld from general release because of its cybersecurity capabilities, prompted rival labs to form an intelligence-sharing coalition and senior government officials to convene emergency meetings with both tech and banking executives.

Decades-old vulnerabilities

The existence of Claude Mythos first surfaced in late March, when a misconfiguration in Anthropic’s content management system exposed internal documents describing the model as a step change in performance. Anthropic confirmed the model and last week released it as a limited preview through Project Glasswing.

In testing, the company placed Mythos in isolated containers with real open-source codebases and prompted it to find security vulnerabilities. The model discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD. In one instance, Mythos wrote a browser exploit that chained four separate vulnerabilities to escape both renderer and operating system sandboxes. Anthropic’s internal evaluations showed that Claude Opus 4.6 had a near-zero success rate at autonomous exploit development on the same tasks where Mythos succeeded consistently. The company stated that these cyber capabilities emerged from general improvements in coding and reasoning, without any security-specific training.

Anthropic chose to withhold Mythos from general release, distributing access instead to twelve partner organizations and roughly forty additional groups responsible for critical software infrastructure. Partners include AWS, Apple, Microsoft, Google, CrowdStrike, and Palo Alto Networks. Anthropic committed up to $100 million in usage credits to support the effort. Shares of CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne, and Tenable fell between 5 and 11 percent after the announcement.

Circling the wagons

The day before Mythos launched, Bloomberg reported that OpenAI, Anthropic, and Google had begun sharing threat intelligence to detect adversarial distillation by Chinese AI labs. The three companies are channeling data through the Frontier Model Forum, a nonprofit they co-founded with Microsoft in 2023 that had previously served as a venue for safety research and policy commitments. The activation marks the Forum’s first operational use and the first coordinated defense effort among the three rival labs.

The distillation concern traces to early 2025, when DeepSeek released its R1 reasoning model and prompted OpenAI and Microsoft to investigate whether the Chinese startup had improperly exfiltrated data from their systems. Anthropic documented the scale in February 2026, publishing a report that identified DeepSeek, Moonshot AI, and MiniMax as having generated 16 million unauthorized exchanges through roughly 24,000 fake accounts. OpenAI separately accused DeepSeek of attempting to free-ride on US frontier labs’ capabilities in a formal memo to the House Select Committee on China. Google reported a significant rise in distillation attempts targeting its models in the last quarter of 2025. All three labs have emphasized that distilled models typically lose the safety guardrails built into the originals, framing the practice as a national security issue.

Washington reacts

In the week before Mythos launched, Vice President JD Vance and Treasury Secretary Scott Bessent convened a phone call with the CEOs of Anthropic, xAI, Google, OpenAI, and Microsoft, along with the heads of CrowdStrike and Palo Alto Networks. Participants discussed the security posture of large language models and how to respond if advances in model capability tilt the balance toward attackers, according to people familiar with the call.

© 2026 Infogalactic AG