TBPN · 33m

Meta Tokenmaxxing, Intel Joins Terafab, Frontier AI vs. China | Diet TBPN

TL;DR

  • Meta’s 60.2 trillion-token month is huge, but probably not a $1 billion bill — Using OpenRouter-style input/cache/output ratios, TBPN argues Meta’s Anthropic spend is more likely $55 million to $136 million per month, or roughly $1,800 to $4,500 per engineer.

  • The real Meta story isn’t leaderboard cringe — it’s vertical integration math — The hosts’ bull case is that even if internal codegen alone costs hundreds of millions, Meta Superintelligence Lab can justify frontier model spend by replacing an enormous external inference bill without needing a breakout consumer AI app.

  • Goodhart’s Law is haunting Meta’s “Claudonomics” culture — Employees reportedly compete on an internal token leaderboard for “token legend” status, and multiple anecdotes suggest some people may be running pointless loops just to avoid being at the bottom during an AI-native push and layoff rumors.

  • Intel joining Musk’s Terafab is being framed as a strategic supply-chain move, not just a press release — With SpaceX, xAI, and Tesla needing far more chips than current partners can provide, the hosts see this as an early test of whether Intel can become a real alternative to TSMC amid looming AI chip bottlenecks.

  • Anthropic’s new security rollout is aimed straight at critical infrastructure — Its preview model “Mythos” is going to about 50 organizations including Amazon, Microsoft, Apple, Google, and the Linux Foundation to find and patch bugs before attackers can exploit them.

  • OpenAI, Anthropic, and Google are now cooperating against Chinese model copying — Through the Frontier Model Forum, the companies are sharing signals on adversarial distillation attempts, a rare alliance the hosts say reflects how seriously US labs take imitation, commoditization, and national-security risk.

The Breakdown

Meta’s “token legend” culture sparks a very online AI panic

TBPN opens on The Information’s report that Meta employees are “token maxing” on an internal leaderboard called Claudonomics, with 60.2 trillion tokens used in 30 days. The hosts immediately frame the tension: this could be silly status-seeking, or it could be what happens when Zuck and Andrew Bosworth push the whole company to become “AI native” while layoff rumors float in the background. The XKCD/Goodhart’s Law joke lands because everyone gets the fear — nobody wants to be the engineer explaining why they used fewer tokens than the guy who made an agent loop forever.

Why the eye-popping Meta spend estimate probably overstates reality

They walk through Tyler’s back-of-the-envelope math using Claude Opus 4.6 pricing: $5 per million input tokens, $0.50 cached input, and $25 output. If you price all 60.2 trillion tokens at output rates, you get roughly $1.5 billion a month, but that’s not how coding agents work; OpenRouter data suggests roughly 98.9% of tokens are input and only 1.1% are output. That pushes the estimate down to about $136 million a month, and maybe as low as $55 million if Meta’s usage looks more like Claude Code with heavier cache usage.
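The back-of-the-envelope math can be sketched in a few lines. The prices and the 98.9%/1.1% input-output split come from the episode; the cache-hit fractions are back-solved guesses chosen to land near the quoted $136M and $55M figures, and the ~30,000-engineer headcount is likewise an assumption implied by the per-engineer range, not a number from the show:

```python
# Rough reconstruction of TBPN's Meta token-spend estimate.
# Prices are the Claude Opus 4.6 rates quoted in the episode ($ per million tokens).
# Cache-hit fractions and engineer headcount are ASSUMPTIONS, not episode figures.
TOKENS = 60.2e12  # tokens reportedly used in 30 days

PRICE_IN, PRICE_CACHED, PRICE_OUT = 5.00, 0.50, 25.00

def monthly_cost(frac_out, frac_cached):
    """Blend a per-million-token price from the output share and the share
    of input tokens served from cache, then scale to the total token count."""
    frac_in = 1.0 - frac_out
    per_million = (frac_in * ((1.0 - frac_cached) * PRICE_IN
                              + frac_cached * PRICE_CACHED)
                   + frac_out * PRICE_OUT)
    return TOKENS / 1e6 * per_million

naive = monthly_cost(frac_out=1.0, frac_cached=0.0)    # everything at output rates
mid   = monthly_cost(frac_out=0.011, frac_cached=2/3)  # OpenRouter-style 1.1% output
low   = monthly_cost(frac_out=0.011, frac_cached=0.97) # Claude Code-like heavy caching

print(f"naive ceiling: ${naive / 1e9:.2f}B/month")   # ~ $1.5B
print(f"mid estimate:  ${mid / 1e6:.0f}M/month")     # ~ $136M
print(f"low estimate:  ${low / 1e6:.0f}M/month")     # ~ $54M
print(f"per engineer (assuming ~30k): ${low / 30_000:,.0f}-${mid / 30_000:,.0f}/month")
```

Backing out the cache fractions this way shows how sensitive the estimate is: moving cached input from two-thirds to near-total cuts the bill by more than half, which is why the hosts' range spans $55M to $136M.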

The bigger bull case: Meta may not need a hit AI app at all

The hosts pivot from mockery to strategy: if Meta is already spending hundreds of millions on inference for internal codegen, then owning the model stack starts to look financially obvious. Jensen Huang’s prediction of $250,000 in annual token budget per engineer and Karpathy’s “what token throughput do you command?” line both reinforce the idea that AI budgets are becoming core infrastructure, not perks. Their punchline is that Meta Superintelligence Lab could “pencil out” just through vertical integration — ads, internal tools, and operations — even if Meta never ships a standalone viral AI product.

Distillation, enterprise data, and the “Ship of Theseus” problem

That leads into a weirder question: if Meta pays Anthropic to rewrite huge swaths of code, docs, and internal communication, can Meta later train on those outputs? The hosts describe it as a “Ship of Theseus” problem, because AI-generated revisions become embedded in the company’s systems, but terms of service likely forbid using those outputs as training data. They also mention the emerging rumor that failed startups can sell their corporate histories for around $1 million to brokers or AI labs, which makes the whole data-rights landscape feel even messier.

Intel joins Terafab, and Musk’s chip ambitions get more serious

The next big story is Intel joining the Terafab project alongside SpaceX, xAI, and Tesla, with the stated goal of producing a terawatt’s worth of compute capacity per year. TBPN treats this as meaningful because Intel has long had the technical ambition but lacked the demand-side commitment, while the rest of the industry stayed glued to TSMC. Their read is that if AI chip bottlenecks are coming and TSMC capex isn’t scaling fast enough, then big companies may finally have to publicly back Intel to create a second serious manufacturing pole.

Space compute is real enough to argue about economics, not physics

From there they spiral into Musk’s vision of chips for robotaxis, Optimus, and even space-based AI workloads. The hosts push back on the lazy objection that “you can’t put compute in space” — Starlink already does compute in orbit, and they cite claims that even a handful of H100s are up there now. The real debate, they say, is not whether chips can run in space, but whether the economics and heat dissipation can work at the wild scales Musk is imagining.

The most cursed corporate retreat story of the episode

Then TBPN takes a detour into a Wall Street Journal story about Plex bringing 120 remote employees to Honduras for a Survivor-themed retreat that became a full-on disaster. The details are absurd in the best possible way: the CEO gets violently sick with E. coli on arrival, a former Navy SEAL runs beach drills on an “unfit group,” people pass out in 100-degree heat, someone lands on a fire ant hill, and a porcupine reportedly falls through a ceiling. The hosts basically frame it as a corporate Fyre Festival, except somehow it still counted as team bonding because everyone survived it together.

Anthropic’s security push and the unusual US lab alliance on China

The episode closes on Anthropic previewing a security model called Mythos for about 50 critical-infrastructure groups, including Amazon, Microsoft, Apple, Google, and the Linux Foundation. TBPN likes the go-to-market logic: cybersecurity is urgent, highly legible, and a strong wedge for broader enterprise AI adoption. Then they end on Bloomberg’s report that OpenAI, Anthropic, and Google are coordinating through the Frontier Model Forum to detect adversarial distillation by Chinese actors — a rare moment where fierce rivals look aligned because frontier model copying now feels like both a business threat and an AI safety issue.