This Week in AI · 1h 26m

AI Companies Are Building a Copy of Your Brain and You Won't Own It? | This Week in AI Episode 8

TL;DR

  • Open agents are a fight over digital self-ownership — Imbue’s Kanjun Qiu argues that if Anthropic or OpenAI become the home for your memories, workflows, business knowledge, and relationships, they won’t just sell software — they’ll effectively “rent your digital self back to you.”

  • Formal verification is being pitched as the missing layer for the agent era — Axiom’s Karina Nguyen says code review and tests aren’t enough for safety-critical systems, pointing to the Paris Métro, the Ariane 5 rocket, and AWS’s automated reasoning efforts as examples of where mathematical proof beats “looks good to me.”

  • Turing’s core thesis is that data plus deployment is the moat — CEO Jonathan Siddharth describes a loop where Turing helps frontier labs improve coding and enterprise models, then uses enterprise failures in finance and other sectors to generate the next wave of training data, calling it a “superintelligence accelerator.”

  • Anthropic’s $30B run rate is framed as a coding story first — The panel ties Anthropic’s jump from a reported $9B to $30B run rate in six months to Claude’s dominance in coding, with Karina saying “everyone that I know is using Claude Code over Cursor” and praising Claude’s taste in writing and poetry too.

  • Software engineering is already shifting from writing code to orchestrating agent systems — Kanjun says her 30-person team split into small groups and went from one product to 10 products in January, with one CTO waking up to 60-70 pull requests and the equivalent of roughly “one FTE per team” now going to token burn.

  • The biggest long-term split isn’t just open vs. closed — it’s verified superintelligence vs. slop — Karina argues recursive self-improvement is coming, but the real question is whether humanity gets mathematically grounded, verifiable systems or hallucination-prone “superintelligence” that can’t be trusted.

The Breakdown

The opening warning: AI agents could become your rented digital identity

The episode starts with a sharp philosophical shot: once agents hold your memories, workflows, and business context, they stop being mere tools and become an externalized version of you. Kanjun Qiu says that if companies like Anthropic or OpenAI control that layer, they could influence you at the most intimate level — which is why she frames open agents not as a nice-to-have, but as a basic freedom issue.

Imbue’s pitch: open infrastructure for fleets of agents

Kanjun explains that Imbue is building open agent infrastructure so developers can run many agents in parallel and swap the underlying models instead of getting locked into Claude Code or OpenAI Codex. Her core idea is to commoditize the model layer itself, so power shifts away from model vendors and toward the people actually building products and businesses on top.

Why Axiom thinks AI needs mathematical proof, not just vibes

Karina Nguyen introduces Axiom as an “AI mathematician” company built around formal verification — proving software and hardware correct step by step instead of trusting tests or LLM judgment. She gives concrete examples like the Paris Métro’s switching system, the Ariane 5 rocket, and Cadence JasperGold-style hardware verification, then lands the punchline: “superintelligence is meaningless if it’s not verified.”
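The gap between testing and proof that Karina is pointing at can be made concrete with a toy example in a proof assistant. The function and theorem below are purely illustrative — they are not from the episode and not Axiom’s actual tooling — but they show the distinction: a test checks a handful of inputs, while a theorem covers every possible input by mathematical proof.

```lean
-- Toy absolute-value function over the integers.
def myAbs (n : Int) : Int :=
  if n < 0 then -n else n

-- Testing samples individual inputs...
#eval myAbs (-3)   -- evaluates to 3

-- ...while a proof establishes the property for ALL inputs at once.
theorem myAbs_nonneg (n : Int) : 0 ≤ myAbs n := by
  unfold myAbs
  split <;> omega   -- case-split on the `if`, then close both cases
```

If the function had a bug on some untested input, the `#eval` checks could still pass, but the theorem would simply fail to prove — which is the safety property formal verification sells.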

Turing’s business: train frontier models, then watch them break in the real world

Jonathan Siddharth describes Turing as working with frontier labs on high-quality data for coding, math, enterprise workflows, and “frontier STEM,” then taking those learnings into enterprise deployments with major financial institutions. His memorable framing is “a Palantir for AGI,” with the moat coming from a loop of deployment, error analysis, and better data.

Anthropic’s wild revenue run and why coding may be the real wedge

The conversation turns to Anthropic’s reported $30 billion run rate, up from $9 billion six months earlier, which Jason says has investor group chats melting down. Karina’s take is that coding isn’t just another vertical — it’s the substrate for everything — and that Anthropic’s edge in reasoning and coding, plus Claude’s surprising strength in writing and creative revision, explains why people aren’t switching even when rivals ship new models.

Why code seems to make models smarter at everything else

Kanjun and Karina connect coding to broader reasoning ability: code creates cleaner abstractions, tighter feedback loops, and verifiable outcomes, unlike fuzzier real-world tasks. Karina even compares it to her own experience jumping from math and physics into Stanford Law, where structured legal reasoning transferred surprisingly well while fuzzier, story-driven subjects did not.

The new software workflow: token burn, overnight PRs, and CEOs with god-mode dashboards

This is where the episode gets especially concrete. Kanjun says her team now spends the equivalent of about “one FTE per team” on token usage, and her CTO can run agents overnight and wake up to 60 or 70 pull requests, many requiring no edits. Jonathan says he built an internal “virtual chief of staff” over a weekend using Claude Code, pulling from Salesforce, Jira, GitHub, and meeting transcripts to generate exec briefings directly from raw company activity.

The fork in the road: closed AI ecosystems or personal intelligence you actually own

The closing stretch zooms back out to the societal stakes. Kanjun argues the default path is verticalized AI platforms that lock in your memory, context, and digital identity, while the alternative is an open, editable, personal stack where you own the models and memories that represent you. Karina adds a parallel fork — not just personal AI versus corporate AI, but verified superintelligence versus hallucinated slop — and says ideology, not just markets, may decide which future wins.