US wants Claude all to itself... because it's "TOO DANGEROUS"
TL;DR
The White House reportedly blocked Anthropic from expanding Claude Mythos to 120 organizations — Wes Roth says Anthropic wanted to add 70 more “defenders” beyond its initial ~50, but the administration cited both national security risk and concern that broader access could eat into government compute availability.
GPT-5.5 just became the second model to match Mythos on a scary benchmark — according to the UK AI Security Institute (AISI), GPT-5.5 completed a full multi-step cyberattack simulation end-to-end in 2 out of 10 attempts, after Claude Mythos did it in 3 out of 10.
The cost-and-time curve is collapsing fast — Roth highlights an AISI test where GPT-5.5 solved a reverse-engineering challenge in 10 minutes and 22 seconds for $1.73 in API spend, versus roughly 12 hours for a human expert.
This is starting to look less like SaaS and more like strategic infrastructure — Roth argues the government deciding who gets access, and who gets priority when compute is scarce, resembles a “soft licensing regime” even without formal legislation.
His core pushback is against elite-engineer thinking — comparing AI cyber models to what top 1% engineers can already do misses the real shift, because these systems give average or desperate users capabilities they previously had “compared to nothing.”
Even supporters of the White House move say it won’t hold for long — citing policy analyst Dean Ball, Roth says restricting access may be a reasonable short-term reaction, but over the next 6 to 18 months these capabilities are likely to diffuse from OpenAI, Anthropic, Chinese labs, or open-source models anyway.
The Breakdown
The White House steps in on Claude Mythos
Roth opens with the big headline: Anthropic wanted to expand the Claude Mythos preview to 120 organizations, and the White House reportedly said no. The stated reasons weren’t just “this is dangerous” but also “what if Anthropic doesn’t have enough compute to serve us if everyone else gets in too?” — which gives the whole thing a very concrete power-politics feel.
Mythos isn’t alone anymore — GPT-5.5 joins the club
The plot twist is that OpenAI now has a model in the same danger zone. Roth says AISI found GPT-5.5 to be the second model after Mythos that could complete a full multi-step cyberattack simulation end-to-end, with GPT-5.5 succeeding in 2/10 runs and Mythos in 3/10. On expert-level cyber tasks, GPT-5.5 scored 71.4% versus Mythos at 68.6%, so the gap is tiny and the trend is obvious.
Why he thinks this is not just Anthropic marketing
Roth pushes back hard on the idea that Anthropic is simply hyping a “scary model” for PR. His point: the White House, banks, and even the Fed reportedly reacted seriously enough to hold emergency conversations, which suggests this spooked people outside Anthropic’s orbit. He also points to examples like Mythos reportedly surfacing a 27-year-old OpenBSD bug to argue the capability is real.
The $1.73 example that makes the whole thing feel different
One of the stickiest moments is Roth citing an AISI result where GPT-5.5 solved a reverse-engineering challenge in 10 minutes and 22 seconds for about $1.73 of API usage, compared with roughly 12 hours for a human expert. That’s the part he keeps coming back to: even if the models aren’t magic, the shrinking cost and time make offensive cyber capability much more reachable.
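Taking the reported figures at face value, the time side of that comparison is easy to sanity-check. A quick back-of-the-envelope sketch (the human expert's dollar cost isn't quantified in the video, so only the time ratio is computed here):

```python
# Back-of-the-envelope check on the AISI reverse-engineering result Roth cites.
# Reported figures: GPT-5.5 finished in 10 min 22 s for ~$1.73 of API spend;
# a human expert reportedly took roughly 12 hours.
model_seconds = 10 * 60 + 22    # 622 s
human_seconds = 12 * 60 * 60    # 43,200 s

speedup = human_seconds / model_seconds
print(f"Model: {model_seconds} s, human: {human_seconds} s")
print(f"Speedup: ~{speedup:.0f}x")  # ~69x faster than the human expert
```

Roughly a 69x time advantage, for under two dollars — which is exactly why Roth keeps returning to this example: the curve matters more than any single result.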
His big argument: stop comparing AI to top engineers
Roth says smart engineers make a category error when they shrug and say, “I can already do that.” The issue isn’t whether AI outperforms elite coders; it’s what happens when people with little technical training suddenly get coding or hacking leverage they never had before. He frames it like the printing press: not impressive because scribes could already write, but transformative because it massively widened access.
Dean Ball’s “dam against a tsunami” framing
Bringing in AI policy analyst Dean Ball, Roth says the White House may be making the right short-term call while still losing the long-term game. Ball’s point, as Roth presents it, is that access restrictions alone won’t stop diffusion over the next 6–18 months; if these capabilities spread anyway, the real answer has to include technical safeguards and formal rules rather than ad hoc gatekeeping.
David Sacks says: demystify Mythos
Roth then contrasts that with David Sacks’ framing: Mythos isn’t a doomsday device, it’s just the first of many models that can automate cyber work. Sacks’ analogy is that these systems are more like microscopes than monsters — they don’t create vulnerabilities, they reveal ones that were already there. So the priority, in that view, is to get them into trusted defenders’ hands fast.
Politics, compute scarcity, and Roth’s own product take
The last stretch ties policy to old-fashioned resource constraints: compute is limited, Anthropic’s Amazon/Google/Broadcom deals will take time to come online, and Mythos is a substantially larger model class than Opus, so serving it broadly is expensive. Roth also folds in his own usage story — saying he’s close to canceling Anthropic Max Pro, has bought two $200 OpenAI plans for GPT-5.5/Codex because of quota limits, and sees OpenAI as having overtaken Anthropic for now. He ends by hinting that an even bigger cyber disruption may be coming soon — and that AI might not even be the main event.