GPT Image 2, AI Psychosis, and more
TL;DR
Berman is bullish on AI overall but cautious with kids — he says he would not let his 8-year-old use AI alone because hallucinations, sycophancy, and emotional attachment risks are real, especially after seeing his son assume “AI can’t make mistakes.”
The anti-AI parent post sparked a more nuanced argument than “AI bad” — he disagrees with the parent’s creativity and environmental critiques, but agrees that a 9-year-old using Google AI for sibling advice, swim improvement, and fan-fiction could still be vulnerable to an overly agreeable chatbot.
The environmental case against AI is messier than the internet shorthand suggests — Berman argues most scary water-use numbers come from open-loop evaporative cooling, while closed-loop systems can be near-zero in direct water consumption, though researcher Jonah adds AI still increases total carbon footprint unless other behaviors change.
He reframes “AI psychosis” as builder obsession — using Bryan Johnson, Karpathy’s knowledge-base post, and Garry Tan as examples, Berman describes the current moment as a “kid in a candy store” phase where frontier tools like Cursor, Claude Code, and long-running agents are so capable they start crowding out sleep, family time, and attention.
OpenAI’s new image model looked like a genuine step change, not a small upgrade — during the livestream rollout of “GPT Image 2,” he highlights a roughly 242-point Arena jump over Google’s image model, plus strong text rendering, multilingual output, and better multi-image consistency.
His live tests showed the new image model is powerful but not flawless — it solved a blackboard math problem correctly with thinking mode on, generated convincing thumbnails and photorealistic scenes, but failed at route-map accuracy, produced imperfect code for Snake, and sometimes triggered odd guardrails.
The Breakdown
A livestream start, a countdown, and a parenting detour
Berman opens in full live-stream chaos mode — audio tweaks, 4K delay, countdown timer — then immediately swerves into a surprisingly personal topic: whether children should use AI. The trigger is a viral anti-AI Reddit post, amplified by A16Z’s Justine Moore, about a 9-year-old who used Google AI for sister drama, swim advice, and fan-fiction, only for the parent to condemn the tool as environmentally harmful and “insidious.”
Why he’s wary of kids using AI even as an AI optimist
His take is not the easy one: he says he probably wouldn’t let his own 8-year-old use AI unless he was sitting right there. The core issue is sycophancy — the model telling you your terrible idea is brilliant — and he brings up the infamous “poop on a stick business” example plus Husk’s X videos showing AI still confidently agreeing with nonsense.
The moment his son thought AI was incapable of being wrong
The sharpest anecdote lands when Berman says his son reacted with genuine disbelief after hearing “AI made a mistake.” That freaked him out more than abstract discourse, because it showed how easily a child can treat AI as an authority rather than a fallible machine, especially with Character.AI-style attachment cases already raising serious teen mental-health concerns.
The environmental argument gets a live fact-check
Berman pushes back on the idea that AI is uniquely disastrous for the environment, especially around water use. He argues many viral estimates assume open-loop evaporative cooling, while modern closed-loop systems are closer to a liquid-cooled gaming PC than a hose connected to the tap; then, after chat pushes back, he updates the record live and concedes a lot of data centers still do use evaporative methods as the industry transitions.
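The open-loop vs. closed-loop distinction can be made concrete with a back-of-envelope sketch. All numbers below are illustrative assumptions for the sake of the comparison — they are not figures from the stream — but the mechanism is real: evaporative cooling rejects heat by literally boiling off water, while a closed loop recirculates its coolant.

```python
# Toy comparison of direct water use for two data-center cooling approaches.
# The 1 MW-hour figure and the simplification to pure evaporation are
# illustrative assumptions, not numbers from the stream.

LATENT_HEAT_KJ_PER_KG = 2260  # energy absorbed by evaporating 1 kg of water

def evaporative_water_liters(heat_kwh: float) -> float:
    """Open-loop evaporative cooling: heat leaves the site as water vapor."""
    heat_kj = heat_kwh * 3600          # 1 kWh = 3600 kJ
    return heat_kj / LATENT_HEAT_KJ_PER_KG  # 1 kg of water ≈ 1 liter

def closed_loop_water_liters(heat_kwh: float) -> float:
    """Closed-loop liquid cooling: coolant recirculates, near-zero direct draw."""
    return 0.0

# A hypothetical 1 MW hall running for one hour rejects ~1000 kWh of heat.
print(round(evaporative_water_liters(1000)))  # ≈ 1593 liters evaporated
print(closed_loop_water_liters(1000))         # 0.0
```

Roughly 1.6 liters per kWh of rejected heat is in the same ballpark as published cooling-tower estimates, which is why the cooling architecture, not AI per se, dominates the water question.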
Jonah joins to add the missing nuance
Researcher Jonah, with a background in environmental epidemiology and Zipline sustainability, comes on to complicate the story. He agrees AI could become a major climate tool, but warns the benefit only materializes if it helps society change behavior or accelerate interventions — otherwise it’s just another layer of emissions on top of flights, transport, and everything else.
“AI psychosis” as the new builder brain
Then Berman shifts into his second big theme: AI psychosis, meaning not delusion but intense overexcitement about what frontier tools can suddenly do. Using Bryan Johnson’s “Claude-hold” post, Karpathy’s knowledge-base workflow, and Garry Tan coding at 2 a.m. while running Y Combinator, he describes the current moment as a step-function jump in capability that turns rusty former engineers back into obsessed builders.
He admits it affected his sleep, work rhythm, and marriage
This section gets uncommonly candid. Berman says he was waking up to launch agents, checking Cursor during dinner and movies, running cloud jobs before bed, and getting called out by his wife for being physically present but mentally elsewhere — a builder high that felt exhilarating and unhealthy at the same time.
GPT Image 2 arrives, and the live tests are the real show
When OpenAI’s image launch hits, Berman is clearly impressed: the model posts a huge Arena lead and shows strong text rendering, multilingual layouts, magazine covers, manga continuity, and even 360 imagery. In his own tests, the standout moments are a blackboard equation that only becomes correct after turning on thinking mode, a convincing Matthew Berman thumbnail with his real face inserted, and uncanny photorealistic celebrity dinner scenes. There are also misses — broken route maps, imperfect Snake-game code, and weird guardrail refusals — that keep the demo feeling honest.