
The Bottleneck Was You — And That Was the Point
Mario Zechner, the man behind pi, argues that AI coding agents are sirens luring developers toward brittle, unmaintainable codebases. His prescription: slow down, understand what you build, and reclaim agency.
Mario Zechner makes the case that the most dangerous thing about AI coding agents is not that they fail — it is that they fail faster than anyone can notice.
By R. Rajeev Kumar | The Global Federation — AI Lab
In a week dominated by product launches, funding rounds, and the industry's relentless march toward autonomous everything, a developer from Austria published a blog post with a title that most corporate communications teams would reject on sight: Thoughts on slowing the fuck down.
The author is Mario Zechner, the man behind pi, and he is no weekend commentator or professional contrarian: a veteran developer, coach, and speaker, he has spent decades building software that real people use. When someone with that pedigree tells the industry it has a speed addiction, the appropriate response is not dismissal. It is attention.
His diagnosis is blunt. His language is blunter. And he has nailed it.
Everything Is Broken, and Nobody Is Asking Why
Zechner opens with an observation that anyone shipping software in 2026 already feels in their bones: quality is declining. Not gradually, not in edge cases — systemically.
He points to specific evidence. An AWS outage allegedly triggered by AI-generated code. AWS subsequently implementing a 90-day internal reset on code controls — the kind of move a company makes when it has lost confidence in what is being committed. Satya Nadella announcing that up to 30 percent of Microsoft's code is now AI-written, a statistic delivered as a badge of productivity even as Windows quality visibly degrades. Companies proudly running 100-percent AI-generated codebases producing, in Zechner's words, "memory leaks in the gigabytes, UI glitches, broken-ass features, crashes."
The industry metric has shifted. Ninety-eight percent uptime is now considered acceptable. That is not progress. That is erosion rebranded as normalcy.
What makes Zechner's argument different from the usual "AI is overhyped" commentary is that he does not blame the models. He blames the workflow. The models are powerful. The problem is how we have chosen to deploy them — at maximum speed, with minimum friction, across systems we no longer understand.
The Bottleneck That Kept Everything Together
Here is the insight at the centre of Zechner's argument, and it is the one the industry least wants to hear:
A human developer is slow. That slowness was not the problem. It was the safety mechanism.
A human cannot produce 20,000 lines of code in a few hours. A human makes mistakes, feels the friction, and — critically — learns not to repeat them. The bottleneck of human speed meant that errors accumulated gradually, at a rate that allowed detection, reflection, and correction.
AI coding agents remove that bottleneck entirely.
The result is not faster software development. It is faster error accumulation. Tiny, individually harmless mistakes — a duplicated utility here, an inconsistent abstraction there — compound at a rate that no human review process can match. By the time the problems become visible, Zechner argues, the architecture is "largely booboos at this point." Tests become unreliable. Only manual testing confirms whether anything works. The developer — or the company — has, in his characteristically direct phrasing, lost the ability to trust their own codebase.
This is not a theoretical risk. Anyone who has worked with autonomous agents on a codebase exceeding a few thousand lines has felt this moment arrive. The codebase starts resisting changes. Features that should take an hour take a day because nobody — human or agent — remembers what the existing code actually does.
Three Failure Modes, One Doom Loop
Zechner identifies three specific mechanisms through which unchecked agent use degrades codebases, and the precision of his diagnosis is what separates this from generic caution.
First: compounding errors without learning. An agent trained on patterns will reproduce the same mistake indefinitely unless explicitly corrected each time. A human repeats an error a few times, then internalises the lesson. An agent has no such mechanism. It is stateless between invocations, and its mistakes are stateless too — invisible, identical, accumulating.
Second: merchants of learned complexity. Agents trained on vast codebases inherit the architectural sins of those codebases. They produce enterprise-grade abstraction layers, dependency injection frameworks, and service meshes for applications that need none of it. A human team takes years to accumulate this kind of gratuitous complexity, giving the organisation time to adapt. An agent delivers it in weeks, overwhelming small teams with architecture they never asked for and cannot maintain.
Third: agentic search has low recall. As a codebase grows — particularly one inflated by the first two failure modes — the agent's ability to find relevant existing code declines. It cannot discover what already exists, so it creates duplicates. The duplicates make the codebase larger. The larger codebase reduces recall further. The loop feeds itself.
These three mechanisms are not independent. They interact. Compounding errors create complexity. Complexity defeats search. Failed search creates more duplication. More duplication creates more errors. Zechner does not use the phrase "doom loop," but that is what he describes.
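The feedback loop is easy to state and easy to underestimate. A toy model makes the compounding visible. To be clear, this sketch is entirely our own illustration: the recall formula, the capacity constant, and every number in it are assumptions for demonstration, not anything from Zechner's post.

```python
# Toy model of the duplication loop: assume the agent's chance of
# finding an existing utility (its "recall") falls as the codebase
# grows, and every failed search spawns a duplicate that grows the
# codebase further. All constants here are hypothetical.

def simulate(tasks: int, initial_functions: int = 200) -> float:
    """Return codebase size (in functions) after `tasks` agent tasks.

    Assumptions: each task needs exactly one existing utility;
    recall = K / (K + size) for a fixed capacity constant K; a failed
    search adds one duplicate. We accumulate the *expected* number of
    duplicates rather than drawing random outcomes.
    """
    K = 1000.0  # hypothetical search-capacity constant
    duplicates = 0.0
    for _ in range(tasks):
        size = initial_functions + duplicates
        recall = K / (K + size)
        duplicates += 1 - recall  # expected duplicates from this task
    return initial_functions + duplicates

# The loop is self-reinforcing: the average per-task duplication rate
# over 1,000 tasks exceeds the rate over the first 100, because earlier
# duplicates have already inflated the codebase and degraded recall.
rate_100 = (simulate(100) - 200) / 100
rate_1000 = (simulate(1000) - 200) / 1000
```

Even in this deliberately simple model, the duplication rate rises with every task: the mess the agent made yesterday makes today's search worse, which makes tomorrow's mess bigger.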
The Prescription: Discipline as Engineering
Zechner is not anti-agent. This is important. He identifies legitimate uses: narrowly scoped tasks where the agent need not understand the full system, work with evaluable outputs, internal tools where correctness is not mission-critical, and ideation — using the agent as a "compressed wisdom of the internet" to bounce ideas against.
But for the core of software development — architecture, APIs, data models, the load-bearing code that everything else depends on — his prescription is unambiguous:
Write it by hand.
Set generation limits aligned with realistic code review capacity. Introduce deliberate friction through pair programming or step-by-step building. Write architecture by hand. Use agents for tab completion at most. And learn to say no — fewer features, built correctly, rather than more features built on sand.
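One way a team might operationalise a generation limit is a pre-commit check that refuses any commit larger than a human can realistically review. The sketch below is our own illustration of that idea, not tooling from Zechner's post: the 400-line budget and the helper names are hypothetical, and the input is the text produced by `git diff --cached --numstat`.

```python
# Hypothetical pre-commit check: parse `git diff --cached --numstat`
# output and reject commits whose total added lines exceed a review
# budget. The budget value is illustrative, not from the article.

def added_lines(numstat: str) -> int:
    """Sum the added-line column of `git diff --numstat` output.

    Each numstat line is "<added>\t<deleted>\t<path>"; binary files
    report "-" in both count columns and are counted as 0 here.
    """
    total = 0
    for line in numstat.strip().splitlines():
        added, _deleted, _path = line.split("\t", 2)
        if added != "-":
            total += int(added)
    return total

def exceeds_review_budget(numstat: str, budget: int = 400) -> bool:
    # 400 added lines per commit is a placeholder; tune it to your
    # team's actual review capacity, as Zechner's advice implies.
    return added_lines(numstat) > budget

sample = "120\t3\tsrc/app.py\n350\t0\tsrc/generated.py\n-\t-\tlogo.png"
# added_lines(sample) sums 120 + 350 and ignores the binary file,
# so this staged commit would be rejected under the 400-line budget.
```

Wired into a `pre-commit` hook, a check like this reintroduces exactly the friction Zechner argues for: the agent can still write the code, but a human must consciously decide to accept it in reviewable pieces.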
The benefits, he argues, are not just technical. A developer who understands their codebase can fix the recall problem that defeats agents. A developer who has felt the friction of building something knows where the pressure points are. A developer who has slowed down can sleep at night knowing, in his words, "that you still have an idea what the fuck is going on, and that you have agency."
That last word — agency — is not accidental.
What TGF Sees
This is not merely a developer productivity story. It is a progress story.
The infrastructure of modern civilisation runs on software. Healthcare records, financial systems, electoral processes, energy grids, communication networks — all of it is code. When the industry's prevailing philosophy is "generate as fast as possible and figure it out later," the consequences extend far beyond a startup's deployment pipeline.
We have already seen what happens when software quality degrades in critical systems. The question Zechner forces is whether the industry is systematically accelerating that degradation — not through malice, but through an addiction to velocity that has become its own reward.
Progress, as TGF defines it, is not speed. It is capability that serves people. Code that leaks memory does not serve people. Architecture that no one understands does not serve people. Products built on foundations of compounding errors do not serve people. They serve metrics.
Zechner's prescription — discipline, understanding, deliberate friction, agency — reads less like developer advice and more like a philosophy of responsible building. In an industry that has confused output with outcome, his insistence on slowing down is not conservatism. It is courage.
The sirens are singing. The smartest thing a builder can do in 2026 is choose not to listen.
Published by The Global Federation
Peace · Prosperity · Progress
Source: Mario Zechner, "Thoughts on slowing the fuck down", March 25, 2026