OpenAI's GPT-5.5 is wild...
TL;DR
GPT-5.5 may be arriving by April 23, and early signs point to a big leap in UI generation — Wes says OpenAI’s updated GPT Pro is “slaughtering” Claude Opus 4.7 on frontend work, especially image-to-code and near-perfect website replication.
Anthropic’s coding lead is the domino that sets off nearly every story in the video — Wes frames OpenAI, xAI, Google, and even parts of the U.S. government as reacting to one thing: Claude’s reputation as the best coding model and the compounding “flywheel” that reputation creates.
Google has gone into code-red mode on coding models, with Sergey Brin personally leading the push — According to reporting Wes cites, Brin and DeepMind CTO Koray Kavukcuoglu are driving a strike team aimed not at better chat UX, but at automating research and engineering faster.
Google’s internal AI-adoption drama matters because it reveals urgency, not just gossip — After Steve Yegge claimed Google looked “as average as John Deere” on AI adoption, Demis Hassabis fired back publicly, while separate reporting described mandatory AI training and even language saying Gemini engineers must be “forced” to use internal agents.
xAI is expected to launch Grok Build and Grok Computer together very soon — Wes says leaked details suggest local and remote versions of Grok Build, likely bundled into a desktop-style product for both Mac and Windows as xAI races for the top coding spot.
Wes argues Mythos is clearly being taken seriously at the highest levels, whatever critics on X say — His evidence is practical, not theoretical: the NSA is reportedly using it, JPMorgan CEO Jamie Dimon sees it as a threat, and top government and finance officials are discussing its cyber implications.
The Breakdown
GPT-5.5’s UI jump is the opening shot
Wes starts with the headline grabber: GPT-5.5 is supposedly in testing, and the standout improvement looks to be UI layout and frontend design. He points to OpenAI’s shadow-dropped GPT Pro update, saying it’s crushing Claude Opus 4.7 on frontend coding, especially when you feed it an image and ask for a faithful website replica. He also notes prediction-market chatter that this could be the long-rumored “Spud” model, with April 23 floating around as the likely release date.
xAI and OpenAI are both reacting to Claude’s strength
From there, Wes connects the dots: OpenAI’s design push follows Anthropic’s Claude Design release, and xAI is reportedly preparing Grok Build plus Grok Computer at nearly the same time. He describes Grok Build as having local and remote modes, with the local version hinting at a desktop app and a simultaneous Mac/Windows launch. His bigger point is that xAI, like OpenAI, seems to be chasing the same prize: beating Anthropic in coding.
Even the government can’t ignore Anthropic
Wes then pivots to the Pentagon drama and says the “supply chain risk” label on Anthropic looks increasingly hollow because agencies still want Claude. The big example is the NSA reportedly using Anthropic’s Mythos anyway, which he treats as proof that the models are too useful to blacklist in practice. He sums it up with the Steve Martin line: be so good they can’t ignore you.
The real race is the coding flywheel
This is the core thesis of the video: coding models matter because they can accelerate AI research itself. Wes calls it the recursive flywheel — better coding agents help researchers and engineers move faster, which helps build even better agents, and so on. In his telling, Anthropic got the early lead here, and now everyone else is waking up to the fact that you can’t just run the same race harder once that curve starts bending upward.
Sergey Brin’s strike team is about much more than autocomplete
That’s why Wes sees Google’s new strike team as the biggest story. Citing reporting, he says Sergey Brin and DeepMind CTO Koray Kavukcuoglu are directly involved, making this a top-level strategic priority rather than a side project. The goal isn’t just a nicer coding assistant for users — it’s turning coding models into engines for research automation and internal engineering leverage.
Google’s AI adoption fight spills into public drama
Wes has fun with the online spat after Steve Yegge claimed Google’s internal AI adoption looked roughly like John Deere’s: 20% power users, 20% refusers, 60% casual tool users. Demis Hassabis publicly dismissed it as “absolute nonsense,” but Wes notes that separate reports about mandatory AI training and Brin saying Gemini engineers must be “forced” to use internal agents suggest there really is internal pressure to raise adoption. His read is that whether Yegge overstated it or not, Google itself clearly believes there’s a gap worth fixing.
Why Anthropic’s lead is making everyone nervous
Wes closes by zooming out: Anthropic has under 5,000 employees, while Google has closer to 200,000, plus TPUs, giant codebases, and massive budgets. That contrast is what makes this moment so interesting to him — if Google can focus all that scale, it could become unstoppable, but if Anthropic keeps winning anyway, maybe the first mover in coding really does get an exponential advantage. He ends by mocking the idea that Mythos is just PR, asking whether critics really believe they’ve outsmarted the NSA, JPMorgan’s Jamie Dimon, the Fed, and Sergey Brin all at once.