How AI Is Changing Software Engineering: A Conversation with Gergely Orosz (@pragmaticengineer)
TL;DR
Token maxing is real inside big tech — Gergely Orosz says engineers at Meta, Microsoft, and Salesforce are inflating AI usage because token counts show up in leaderboards, dashboards, and even performance discussions, with Salesforce reportedly targeting around $175 in monthly spend.
The metric is warping behavior in exactly the way lines-of-code metrics used to — people are asking agents to summarize docs they could read faster themselves or running autonomous agents to generate junk, just to avoid landing in the bottom 25% of token usage.
AI is still helping individuals, but team-level gains are much messier — Orosz argues engineers can absolutely become more productive, yet retrofitting AI into existing workflows is hard, which is why a company like Anthropic looks unusually strong compared with more traditional orgs.
Getting good at AI tools is weirdly experiential, not theoretical — citing Simon Willison, Orosz says there’s “no manual,” and understanding attention or model architecture doesn’t automatically make you better at using Cursor, Claude Code, or agents in real work.
The software engineer role is collapsing more responsibilities into one job — testing, DevOps, and increasingly product thinking are getting folded into engineering, while companies from VC-backed startups to John Deere are shrinking team sizes from “two-pizza teams” toward “one-pizza teams.”
Big companies are quietly spending huge energy on internal AI infra, not flashy product launches — Uber, Airbnb, Intercom, Meta, Microsoft, and others are rebuilding dev tooling, MCP gateways, monorepo-aware coding agents, and risk-based code review systems, because custom infrastructure works better on giant codebases and is easy to get funded if it has “AI” in the pitch.
The Breakdown
Token maxing goes from joke to career defense
The conversation opens with Orosz unpacking “token maxing,” a term he says he had first heard only a week or two earlier, before his DMs filled up with stories from Meta, Microsoft, Salesforce, and others. The core pattern is simple and bleak: once token usage is measured, engineers start gaming it, whether that means asking agents to summarize documentation they could just read or running autonomous agents to crank out junk so they don’t look inactive.
When AI metrics become the new lines of code
Orosz compares this moment to the old era of measuring lines of code, PR counts, and velocity: everyone already knows the metric is dumb, but people still optimize for it when jobs are on the line. He says the weird part is that some of the most sophisticated companies in the world are now nudging engineers into “just stupid stuff honestly,” because low token count can be read as low effort while high token count can be spun as innovation.
Why leadership pushed this in the first place
He doesn’t frame it as pure irrationality: six months earlier, many experienced engineers were skeptical because AI tools were only mildly useful on real, messy codebases. Meanwhile, leadership kept hearing stories like Anthropic writing more and more of its code with Claude, so some executives started pushing usage aggressively — Orosz points to Coinbase, where Brian Armstrong made AI adoption feel mandatory, fast.
The uncomfortable big-tech analogy: LeetCode and compliance
Orosz drops one of the sharper analogies in the talk here: token maxing reminds him of LeetCode interviews, not because they test the job well, but because they select for people willing to put up with nonsense to get and keep elite jobs. His point is not that this is good — it’s that big tech has long rewarded people who tolerate arbitrary systems, and AI measurement is becoming one more version of that.
AI productivity is real, but hard-earned and non-intuitive
On whether AI is actually making people faster, Orosz lands in a nuanced middle: yes for many individuals, still a question mark for teams. He references Simon Willison’s point that there’s “no manual,” and says the most counterintuitive thing for engineers is that knowing the theory doesn’t necessarily help much — you get better by doing, relearning, and dropping your priors.
Software engineering keeps absorbing more jobs
Asked how the role is changing, he says this trend predates AI but AI is accelerating it: testing folded into engineering, DevOps folded in, and now product judgment is creeping in too. Even early-career engineers are being asked for more seniority and business awareness, while companies like John Deere are consolidating team structures from two-pizza teams into one-pizza teams.
You’re not becoming an engineering manager — more like wearing a mech suit
Orosz pushes back hard on the idea that AI turns engineers into managers of agents. Real management means people drama, slow feedback loops, and being far from the product; working with agents feels more like what DHH called a “mech suit,” where you can do seven things at once without inheriting all the painful parts of management.
The real action is in internal AI infrastructure
One of the most revealing segments is his look inside large companies: Uber and its peers aren’t obviously shipping lots of shiny AI features, but internally they’re rebuilding everything from coding agents and MCP gateways to service discovery integrations and AI-assisted code review. He argues this makes sense for three reasons: it’s a low-risk way to get hands-on, giant codebases need custom systems, and anything labeled “AI” gets funded. He closes by connecting that same logic to Shopify’s early Copilot bet and, finally, to his own Pragmatic Engineer story, which he says hit product-market fit almost instantly after his first Uber platform article.