Microsoft and OpenAI break up (Amazon is pumped)
TL;DR
The Microsoft–OpenAI partnership has effectively turned from marriage to managed separation — Theo traces the shift from Microsoft’s 2019 $1 billion investment and exclusive Azure deal to a 2026 amendment where Microsoft is only OpenAI’s “primary” cloud and its IP license is now non-exclusive through 2032.
The real fracture point was OpenAI’s o1 reasoning breakthrough in 2024 — after o1 posted huge gains over GPT-4o, Microsoft reportedly pushed hard for chain-of-thought details and Sam Schillace allegedly raised his voice because OpenAI wasn’t sharing the secret sauce fast enough.
Anthropic is the hidden reason this breakup happened — Theo argues Anthropic’s enterprise growth, especially via AWS Bedrock, scared OpenAI into escaping Azure exclusivity so it could sell models where enterprises already are instead of forcing them through Azure or OpenAI’s direct API.
Cloud distribution mattered more than model quality in enterprise adoption — Theo says many companies chose Claude through Bedrock not because Anthropic models were universally better, but because enterprises want one cloud stack and OpenAI being Azure-locked made it harder to buy.
Theo’s own Azure experience became a live case study in why OpenAI wanted optionality — he shows charts where GPT-5 and o3-mini on Azure sometimes dropped from 70–120+ tokens/sec to 2–20 tokens/sec, then says Microsoft only fixed the issue after he published an “Azure bench” and publicly embarrassed them.
Amazon is the biggest winner right now — OpenAI’s new AWS deal includes Bedrock distribution, a stateful runtime environment, 2 gigawatts of Trainium capacity, and a reported $50 billion investment, giving AWS a direct answer to Anthropic’s Bedrock advantage.
The Breakdown
Theo opens with the thesis: this is a breakup, not a routine partnership update
Theo starts by admitting “Microsoft” is usually a hard sell for developers, then immediately makes the case that the OpenAI relationship drama is way more interesting than it sounds. His framing is blunt: the latest announcement is basically a breakup, and the weirdest part is that Anthropic is the reason it’s happening.
The 2019 deal that locked OpenAI to Azure until the vague concept of AGI
He goes back to the original 2019 announcement: Microsoft invests $1 billion, helps fund the compute, and becomes OpenAI’s exclusive cloud provider. The key catch was that Microsoft would get broad rights to OpenAI’s pre-AGI tech and IP until AGI arrived — which sounds neat until you realize nobody can cleanly define AGI, so the deal had no real natural endpoint.
ChatGPT changed the stakes, then o1 changed the power balance
After ChatGPT’s success, Microsoft poured in another $10 billion in 2023 and tightened the relationship around Azure. But the real turning point, Theo says, was September 2024, when OpenAI launched o1: a reasoning model that leapt to the 89th percentile in competitive programming and placed among the top 500 US students on the USAMO qualifier, making GPT-4o look miles behind.
The o1 fight: Microsoft wanted the recipe, OpenAI stopped sharing
Theo says the bromance cracked when Microsoft wanted to know how o1’s reasoning worked and OpenAI allegedly refused to hand over the details. He cites reporting that Sam Schillace got angry on a call because OpenAI wasn’t delivering new technology quickly enough, and Theo’s read is simple: Microsoft thought it had paid for access to the breakthrough, while OpenAI saw reasoning as too strategically important to give away.
The renegotiations rewired the deal and quietly killed the AGI trap
By 2025, the two sides had already amended the deal so that any AGI declaration would require verification by an independent expert panel, and Microsoft’s rights were extended through 2032. Then in 2026, the language softened further: Microsoft is now OpenAI’s primary cloud, not its exclusive one; OpenAI can serve customers on any cloud; and Microsoft’s IP license is explicitly non-exclusive, which Theo basically reads as Sam Altman out-negotiating them.
Why Anthropic and Bedrock spooked OpenAI
Theo’s most interesting argument is that OpenAI wasn’t just escaping Microsoft — it was reacting to Anthropic’s enterprise momentum on AWS. He says many enterprises buy Claude through Bedrock because they already live on AWS, and even if OpenAI models may be stronger in many coding tasks, being Azure-only made them much harder to adopt inside real companies.
The weird economics of cloud credits and why Anthropic’s deals feel different
He goes on a tangent that feels very CEO-brain and very useful: startups can get cloud credits of $100,000 from AWS, $350,000 from Google Cloud, and even $500,000 to $1 million from Azure. But Theo says he hasn’t been able to use those credits for Anthropic models, which he takes as evidence that Anthropic negotiated brutal revenue-share terms with the cloud providers, making those credits too expensive for the clouds to subsidize.
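The economics Theo is gesturing at reduce to a toy calculation. The percentages below are illustrative assumptions, not reported figures: if a cloud owes a third-party model vendor a large cut of revenue, a credit burned on that vendor’s models costs the cloud far more cash than a credit burned on its own services.

```python
def cloud_cash_outlay(credit_spend: float, payout_share: float) -> float:
    """Cash the cloud provider actually pays out when a customer burns
    `credit_spend` of free credits on a service where `payout_share`
    of the revenue goes to someone else (e.g. a model vendor)."""
    return credit_spend * payout_share

# Hypothetical numbers for illustration only:
# - assume ~25% marginal cost when credits are spent on the cloud's own compute
# - assume a 70% revenue share owed to a third-party model vendor
own_service = cloud_cash_outlay(100_000, 0.25)   # cloud's own services
third_party = cloud_cash_outlay(100_000, 0.70)   # third-party models via rev share
```

Under these assumed numbers, the same $100,000 credit costs the cloud $25,000 if spent on its own compute but $70,000 if spent on revenue-shared models, which is why, in Theo’s telling, the clouds quietly exclude those models from credit programs.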
Theo’s Azure rant becomes the punchline — and then a partial redemption arc
The video peaks when he turns his own T3 Chat infrastructure pain into evidence: he says Azure-hosted OpenAI models had unacceptable latency and throughput swings for over a year, sometimes dropping to 2 tokens/sec, bad enough that he refused to spend even a $1 million Azure credit. Then future-Theo jumps in with the update: after he published a public benchmark and got pressured to delete it, Microsoft fixed the issue within a day, turning a pure crash-out into a weirdly satisfying win.
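The core measurement behind a bench like this is simple: time the token stream and divide. Here is a minimal sketch of that idea, with a simulated generator standing in for a real provider endpoint — no actual Azure or OpenAI API is called, and the stream timing is an assumption for illustration:

```python
import time


def measure_tokens_per_sec(token_stream) -> float:
    """Throughput over the decode phase: tokens after the first,
    divided by the time between the first and last token. This
    deliberately excludes time-to-first-token, which is a separate
    latency metric."""
    first = last = None
    count = 0
    for _ in token_stream:
        now = time.monotonic()
        if first is None:
            first = now
        last = now
        count += 1
    if first is None or last == first:
        return 0.0  # empty or single-token stream: no decode interval
    return (count - 1) / (last - first)


def fake_stream(n: int = 50, delay: float = 0.002):
    """Simulated token stream: n tokens, one every `delay` seconds."""
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"


rate = measure_tokens_per_sec(fake_stream())
```

A real benchmark would feed this the chunks of a server-sent-event stream from each provider and repeat the run many times, since the whole point of Theo’s charts is the variance between runs, not any single number.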
Amazon’s payoff: OpenAI finally gets to meet enterprises where they are
The final section ties it together with the new AWS partnership: Bedrock distribution, a stateful runtime environment, 2 gigawatts of Trainium capacity, and a massive long-term compute expansion. Theo thinks the next war may be less OpenAI vs Anthropic vs Gemini and more cloud-and-chip ecosystems — Nvidia, Google TPUs, AWS Trainium, maybe AMD if it can hang on.