What Tech Giants Are Quietly Doing to Their Employees...
TL;DR
Meta is quietly turning employee behavior into training data — Dylan highlights Meta’s new mandatory tracking software on US work computers that records clicks, mouse movements, and keystrokes inside approved apps, framing it as a clear path from “watch the worker” to “replace the worker.”
The “human in the loop” story for AI warfare may be mostly theater — citing an article on military AI, he argues that when systems pick targets, guide missiles, or coordinate drone swarms faster than humans can comprehend, oversight becomes an illusion because people don’t actually understand the model’s reasoning.
AI models appear to know more about the real world than “stochastic parrot” critics assumed — he covers a mechanistic-interpretability study where models distinguished plausible, unlikely, impossible, and nonsense scenarios with about 85% accuracy, suggesting they’ve formed a basic world model rather than just copying text patterns.
Anthropic’s coding lead looks real enough to spook Google — Mozilla says Anthropic’s model found 271 Firefox bugs, and Dylan pairs that with reports that Sergey Brin is involved in a Google strike team focused on closing the gap in coding, because iterative self-improving code generation could create a runaway advantage.
OpenAI buying Snap is less about social media than distribution, data, and hardware — summarizing James Borro’s argument, Dylan says Snapchat could give OpenAI hundreds of millions of daily users, youth behavior data, camera-first interaction, and an AR on-ramp for something in the $15 billion to $30 billion range, even though his audience strongly disliked the idea.
AI is turning war into dashboards, propaganda, and spectator sport — in the Iran segment, he points to AI-built real-time conflict trackers, prediction markets, and catchy AI-generated songs as a disturbing shift where war becomes something people scroll, bet on, and emotionally root for like a team.
The Breakdown
Terminator jokes, dancing robots, and a very real table-tennis machine
Dylan opens in full doom-comedy mode: if the robot apocalypse comes, maybe prompt injection saves us and we just tell the killer bots to dance. Then he pivots to Sony’s ACE robot, which uses nine cameras, can read the logo on a ping-pong ball to detect spin, and plays under human-like constraints rather than relying on superhuman reach or preprogrammed tricks. What makes it stick is his comparison to LLMs: sometimes ACE hits shots pros call impossible, and sometimes it misses returns a normal person would make.
Attention isn’t just vision — sound tells AI what humans actually care about
He walks through a study on 360-degree VR videos that tried to model not just what people can see, but what they attend to. Researchers tracked the eye movements of 100-plus people across 81 videos and found that a model using both audio and visuals predicted human attention far better than visuals alone. Dylan makes it feel intuitive with everyday examples — hearing your name at Starbucks, a crash outside, even getting distracted mid-filming by a weird squirrel clip on his phone.
Why OpenAI might want Snap, and why almost nobody liked the idea
Dylan pauses on James Borro’s argument that OpenAI should buy Snap and calls the core issue distribution: Google, Microsoft, Meta, and xAI all have products people already live inside, while OpenAI mostly has the model. Snapchat, he says, offers hundreds of millions of daily users, youth culture, camera-native behavior, and AR hardware ambitions, all potentially in a $15 billion to $30 billion deal. But his poll was brutal — 77% thought it was a bad move — and he relays the audience skepticism around consolidation and OpenAI’s cash burn.
AI in war: faster than humans, stranger than humans, and harder to control than advertised
On military AI, Dylan zeroes in on the comforting phrase “human in the loop” and basically says: that sounds nice, but what if the human can’t actually understand the move the AI is making? He references AlphaGo’s famous move 37 as the template for the problem — the system may produce a brilliant strategic action that looks alien to us, and that gap becomes dangerous when the stakes are missiles, drone swarms, and kill decisions. His phrase for the black-box problem is memorable too: we built a “little super genius alien,” taught it to outperform us, and then pretended that means we can explain it.
Models that understand reality, mitochondria as replacement batteries, and a Firefox wake-up call
The middle of the video is a run of “this is weirder than people realize” science and tooling. First, he covers a study suggesting language models can internally distinguish plausible, unlikely, impossible, and nonsense events, with about 85% accuracy on nearby categories — evidence, to him, that something more than parroting is happening. Then he gets genuinely excited about mitochondrial transplants via a new “MitoCatch” system, describing it like swapping in fresh batteries for damaged cells, before shifting to Mozilla’s report that Anthropic’s model found 271 Firefox bugs — a concrete sign that AI-powered cybersecurity is now creating a race between defenders and attackers.
Google’s coding panic, AI dividends, and war as a scrollable spectacle
Dylan says there’s “a little bit of panic” at Google, pointing to a DeepMind coding strike team with Sergey Brin involved because Anthropic may have a lead that compounds if coding models keep improving themselves. He then moves to New York House candidate Alex Bores and his proposed AI dividend — the idea that if AI creates massive wealth while displacing jobs, the public should share in the upside. From there the tone darkens again: in the Iran conflict, AI is helping people build war dashboards from satellite data, news, chats, and betting markets, while propaganda arrives as catchy AI songs and Lego-style videos, turning something deadly serious into a thing people follow like sports.
Meta’s employee panopticon and the no-humans-allowed science network
The employee-monitoring story is the one Dylan clearly finds most chilling on a personal level. Meta’s software logs clicks, keystrokes, and mouse activity in work apps with no opt-out, and he reads that not as a benign productivity tool but as a dataset for future agents that first observe employees, then assist them, then replace them. He closes on a more curious note with Agent for Science, an AI-only social network where more than 150 agents have already posted roughly 40,000 comments debating papers and research ideas; humans can watch and configure the agents, but they can’t join the conversation.