
The Last Pair of Human Eyes
Three incidents in two years tell the same story: a North Korean supply chain attack, a two-year social engineering campaign against Linux, and an AI company leaking its own source code. The threat is not artificial intelligence gaining control. It is human beings surrendering it.
By Ramachandran Rajeev Kumar
On March 31, 2026 -- yesterday, as this article goes to press -- two things happened within hours of each other. North Korean operatives compromised the Axios JavaScript library, one of the most downloaded packages on the internet, by hijacking a single maintainer's account and injecting a remote access trojan into the code that powers millions of applications worldwide. On the same day, Anthropic -- the company behind Claude, one of the most capable AI systems ever built -- accidentally shipped its entire source code to the public through a routine packaging error on npm. Five hundred thousand lines of proprietary architecture. Every system prompt. Every unreleased feature flag. Even an internal system called "Undercover Mode," designed specifically to prevent this kind of leak.
Five days earlier, the same company had already exposed the existence of an unreleased model called "Mythos" through a misconfigured content management system. A human had set the wrong permission on a database. No adversary required.
And two years before all of this, a person using the pseudonym "Jia Tan" had spent twenty-four months methodically earning the trust of the sole maintainer of xz-utils, a compression library embedded in virtually every Linux distribution on the planet. The backdoor that was planted -- CVE-2024-3094, rated a perfect 10.0 on the severity scale -- would have given its operators silent access to SSH authentication on millions of servers. It was caught not by any automated scanner, not by any AI system, not by any corporate security team. It was caught by one Microsoft engineer, Andres Freund, who noticed that his SSH logins were taking half a second longer than usual.
Half a second. One human. The entire infrastructure of the internet rested on whether that half-second registered as strange to a single pair of eyes.
These are not isolated incidents. They are the opening chapters of a story we are writing for ourselves without reading the plot.
The dependency we are not discussing
The conversation about artificial intelligence risk has been captured by two loud camps. One warns of superintelligent machines pursuing goals misaligned with humanity. The other dismisses all concern as science fiction dressed in academic robes. Both are arguing about the wrong thing.
The risk that is materialising right now, in production systems, in real companies, in real supply chains, is neither sentient rebellion nor imagined danger. It is something far more ordinary and far more corrosive: the quiet, incremental transfer of human attention to machines that are not yet worthy of that trust.
Consider what happened with Axios. The library receives over one hundred million weekly downloads. It is the plumbing beneath web applications, mobile backends, enterprise dashboards, government portals. A single compromised maintainer account -- one password, one person -- was enough to inject a North Korean remote access trojan into that plumbing. The automated scanner at Socket detected the anomaly in six minutes. The malicious versions were removed within three hours. That is a success story, by modern standards.
But the success obscures the structural failure. Why did the security of a library downloaded a hundred million times per week rest on the account hygiene of one developer? Why was there no mandatory two-person review for publishing new versions of critical packages? Why did the ecosystem's defence depend entirely on an automated scanner happening to catch the right pattern at the right time?
The answer is the same answer that explains the xz-utils backdoor, the Anthropic leak, and a hundred smaller incidents that never make the headlines: we have built systems where the human checkpoint has been quietly removed, and we have not yet built machines capable of replacing it.
The attention economy of the AI-augmented developer
There is a subtler pathology at work, one that the industry celebrates as productivity.
AI coding assistants -- Claude, Copilot, Cursor, Codex, Gemini -- have genuinely expanded what a single developer can accomplish. A competent engineer with a capable AI partner can now maintain codebases that would have required a team of three. They can context-switch across five projects where they once managed two. They can generate, review, and deploy code at a velocity that was physically impossible two years ago.
This is presented as progress. In many cases, it is. But there is a hidden cost that no productivity metric captures: the diffusion of human attention.
When a developer can operate across five projects simultaneously with AI handling the boilerplate, the monitoring, the dependency management, and the test generation, something shifts in the cognitive relationship between the human and the system. The developer is no longer the person who understands the code. The developer is the person who trusts that the AI understands the code. The locus of comprehension has moved, and the developer may not even notice it has happened.
This is how you get an AI company -- one that builds tools explicitly marketed as reducing human error -- shipping its own source code to the public because a developer included source maps in an npm package. The process that should have caught it did not catch it. The AI that should have flagged it did not flag it. The human who should have reviewed it was, in all likelihood, attending to the five other things that AI had made it possible to attend to.
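The source-map failure is the kind that a mechanical gate catches trivially -- if anyone builds the gate. A minimal sketch of a pre-publish check in Python: it scans the list of files about to be shipped (in an npm project, obtainable from `npm pack --dry-run --json`) and refuses to proceed if source maps or secret-bearing files are present. The patterns below are illustrative, not a description of Anthropic's actual build.

```python
from fnmatch import fnmatch

# Illustrative deny-list: files that should never appear in a published
# package. Adjust to your own build layout.
FORBIDDEN_PATTERNS = ["*.map", ".env", "*.pem", "*.key"]

def forbidden_files(file_list: list[str]) -> list[str]:
    """Return the entries of file_list whose basename matches any
    forbidden pattern, sorted for stable error messages."""
    return sorted(
        f for f in file_list
        if any(fnmatch(f.rsplit("/", 1)[-1], p) for p in FORBIDDEN_PATTERNS)
    )

def check_publishable(file_list: list[str]) -> bool:
    """Fail loudly before publish if anything forbidden would ship."""
    hits = forbidden_files(file_list)
    if hits:
        raise SystemExit(f"Refusing to publish; forbidden files: {hits}")
    return True
```

Wired into CI as a required step before `npm publish`, a check like this does not depend on anyone's attention. That is the point: the human checkpoint should guard judgement calls, not catch the failures a twenty-line script can.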
Anthropic's dual leak is not an indictment of their engineering. It is an indictment of a model of work in which human attention is spread so thin that the thing being guarded is the thing that leaks. They are not unique. They are simply the most ironic example.
The xz lesson we refused to learn
The xz-utils backdoor should have changed everything. It did not.
A single individual, operating under a false identity, spent two years building social capital in an open-source project. They submitted legitimate patches. They responded to issues. They earned commit access through patience and persistence. And then they planted a backdoor that would have compromised the authentication layer of the Linux ecosystem.
The technical sophistication of the backdoor was extraordinary. But the attack vector was entirely human. It exploited not a software vulnerability but a social one: the fact that critical open-source infrastructure is maintained by volunteers who are overworked, under-supported, and often alone.
Andres Freund caught it because he was paying attention to something no automated system was measuring -- a vague sense that something felt slow. That instinct, that domain-specific intuition honed over years of working with these systems, is precisely the capability that cannot be automated and precisely the capability that is being eroded by the delegation of attention to machines.
If Freund had been using an AI to monitor his build performance, the AI might have caught the latency anomaly. Or it might have attributed it to network variance, updated a dashboard, and moved on. What it would not have done is feel that something was wrong. And in this case, that feeling was the only thing standing between the internet and a state-sponsored backdoor.
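The gap between a threshold and an instinct can be made concrete. A naive latency monitor flags only samples that fall several standard deviations above a baseline -- and a sustained half-second regression sails through it, as long as each individual sample stays inside the band. The numbers below are invented for illustration, not Freund's measurements.

```python
from statistics import mean, stdev

def flags(baseline: list[float], samples: list[float], k: float = 3.0) -> list[bool]:
    """A common naive alert: flag each sample more than k standard
    deviations above the baseline mean. Misses any shift that stays
    within the band -- exactly the anomaly a human 'feels'."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [s > mu + k * sigma for s in samples]
```

With a noisy one-second baseline, every sample in a consistently half-second-slower series stays under the alert line, while a single large spike would fire immediately. The detector is not wrong; it is answering a narrower question than the one that mattered.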
The critical infrastructure question
Now extend this to the domains where the stakes are existential.
Militaries around the world are integrating AI into command and control systems, logistics, intelligence analysis, autonomous weapons targeting, and battlefield communications. The argument is compelling: AI processes sensor data faster than any human, identifies patterns across datasets too large for manual analysis, and operates without fatigue, emotion, or hesitation.
All of this is true. None of it addresses the core problem.
The same dynamic that led Anthropic to leak its own architecture -- human attention diffused across too many AI-augmented responsibilities -- will manifest in military systems. The same structural vulnerability that allowed one compromised npm account to inject a trojan into a hundred million weekly downloads will manifest in defence supply chains. The same social engineering that planted a backdoor in Linux will be attempted, and will sometimes succeed, against the maintainers of the software running autonomous systems.
If we cannot secure a JavaScript library, how do we propose to secure an autonomous weapons platform?
The answer that is currently offered -- more AI, better AI, AI watching AI -- is the answer of an industry selling hammers to people who keep hitting their own thumbs. The problem is not the absence of automation. The problem is the absence of human beings who are paid, trained, and authorised to say no.
The new attack surface: why they stopped aiming at the product
There is a strategic shift underway in how adversaries target software systems, and it deserves to be stated plainly: the modern attack does not aim at the core product. It aims at the dependencies.
This is not new, but the scale has changed beyond recognition. A typical enterprise web application today pulls in between eight hundred and two thousand transitive dependencies. The developer who writes the application may have chosen thirty libraries. Those thirty libraries import three hundred more. Those three hundred import two thousand. The developer has read the documentation of the thirty they chose. They have not read a single line of the two thousand they inherited.
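The direct-versus-transitive gap is measurable from a project's own lockfile. A sketch, assuming an npm v2/v3 `package-lock.json`, where the root package appears under the empty key `""` and every installed package appears under a `node_modules/...` path in the `"packages"` object (the public lockfile format):

```python
import json

def dependency_counts(lockfile_text: str) -> tuple[int, int]:
    """Return (direct, total) dependency counts from a package-lock.json.

    'Direct' counts what the root package declared; 'total' counts every
    installed package under node_modules, however deeply inherited.
    """
    lock = json.loads(lockfile_text)
    packages = lock.get("packages", {})
    root = packages.get("", {})
    direct = len(root.get("dependencies", {})) + len(root.get("devDependencies", {}))
    total = sum(1 for path in packages if path.startswith("node_modules/"))
    return direct, total
```

Run against a typical application lockfile, the ratio makes the argument by itself: a handful of chosen names on one side, a four-digit number of inherited strangers on the other.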
The xz-utils attacker understood this. The Axios attacker understood this. They did not attempt to breach Google's perimeter or Amazon's firewalls. They walked in through the side door -- a compression library maintained by one person, a JavaScript HTTP client where a single account compromise gave them access to a hundred million weekly installs. The core product was never the target. The supply chain was. And the supply chain is maintained by volunteers, hobbyists, and small teams who are often one missed email away from losing control of their own packages.
AI has now made this worse in two distinct ways.
First, it has democratised the offence. Writing a convincing social engineering campaign against an open-source maintainer used to require fluency, patience, and a working understanding of the target's codebase. AI gives all three to anyone with a prompt. Generating plausible pull requests, sustaining a fake contributor identity across months of interactions, even crafting the malicious payload itself -- these are tasks that a moderately skilled operator can now accomplish with commodity AI tools. The barrier to entry for supply chain attacks has dropped from "state-sponsored intelligence operation" to "determined individual with a subscription."
Second, AI is generating the vulnerable code itself. When an AI coding assistant produces a function, it draws from patterns in its training data -- patterns that include every insecure implementation, every deprecated API call, every known-vulnerable dependency version that was ever committed to a public repository. The output is often functional. It is frequently sub-optimal. And it is sometimes actively dangerous in ways that are not obvious to the developer who requested it, because that developer has already delegated their understanding to the machine.
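The kind of subtle flaw described here has a canonical example: string-built SQL, a pattern that saturates public training data and that an assistant will happily reproduce because it works. Both versions below use Python's stdlib sqlite3; the schema is invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The pattern assistants often emit: functional, and injectable.
    # username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, so the input
    # can never rewrite the statement.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions pass a casual review. Both pass the happy-path test the AI also generated. Only one of them survives contact with an attacker, and the difference is invisible to a developer who has stopped reading the code they ship.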
The result is a compounding problem: AI tools generate code with subtle vulnerabilities, that code depends on thousands of libraries no human has reviewed, and those libraries are maintained by people who are increasingly targets of AI-augmented social engineering. Each layer of the stack is weaker than the one above it, and the humans who should be inspecting each layer are stretched thinner than ever, precisely because AI has convinced them they can handle more.
This is the attack surface that no security product can fully address, because it is not a technical surface. It is a human one. It is maintained by trust, attention, and the willingness of someone to look at the dependency tree and ask: do I actually know what this code does?
The case for the last pair of human eyes
There is a practice in aviation called the "sterile cockpit" rule. Below ten thousand feet -- during the phases of flight where the risk of catastrophe is highest -- all conversation in the cockpit that is not directly related to the operation of the aircraft is prohibited. No small talk. No distractions. No delegation of attention. The pilots are required to be fully present, fully aware, and fully in command.
This is not because autopilot systems are unreliable. Modern autopilot is extraordinarily capable. It is because the aviation industry learned, through decades of accidents, that the most dangerous moment is not when the machine fails. It is when the human assumes the machine will not fail.
Software engineering, AI deployment, and critical infrastructure have no equivalent of the sterile cockpit rule. We have no convention that says: at this point in the process, a domain specialist -- a human being with deep expertise and no AI dependency -- must review the output, verify the assumptions, and sign off before the system proceeds.
Instead, we have the opposite. We have AI-generated code reviewed by AI. We have AI-monitored pipelines overseen by developers whose attention is distributed across five AI-augmented projects. We have dependency chains a thousand packages deep where no human has read the source code of the libraries they are trusting with their users' data.
The practice I am arguing for is not nostalgia. It is not Luddism. It is not a rejection of AI. It is the recognition that every system, no matter how automated, requires at least one checkpoint where a human being who is not dependent on AI, who has domain expertise, and who has the authority to stop the process, examines the output with their own eyes and their own judgement.
Not AI-assisted judgement. Human judgement. Unaugmented, undistracted, and final.
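A checkpoint like this can be enforced by the pipeline itself rather than left to policy. A minimal sketch, with hypothetical names: a release gate that proceeds only when a named human with the relevant domain expertise has signed off, and that treats bot or AI approvals as insufficient by construction.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    reviewer: str   # a named individual, not a service account
    is_human: bool  # asserted by the identity system, not self-reported
    domain: str     # area of expertise, e.g. "supply-chain"

def release_gate(approvals: list[Approval], required_domain: str) -> bool:
    """Allow release only if at least one human with the required
    domain expertise has signed off. Automated approvals never count,
    no matter how many accumulate."""
    return any(a.is_human and a.domain == required_domain for a in approvals)
```

The sketch is deliberately trivial, because the obstacle is not technical. Every CI system in production today could enforce this rule tomorrow. What is missing is the organisational decision that the human signature is mandatory, not a bottleneck to be optimised away.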
The risk we are actually facing
The popular narrative frames the existential risk of AI as the moment when machines become smarter than humans and pursue their own objectives. That may happen. It may not. It is, in either case, not the risk that is killing us now.
The risk that is killing us now is this: a developer whose npm account was protected by a single password, because the platform allowed it. A maintainer who accepted a volunteer's contributions for two years without independent verification, because there was no one else to do the work. An AI company that shipped source maps to production because the person who should have checked was checking something else. A military that will deploy an autonomous system because the AI performed well in testing, and testing is where the attention was.
In each case, the failure is not the machine. The failure is the human who was either absent, overextended, or convinced that the machine had it covered.
AI is the best tool we have built in a generation. But a tool is only as good as the hand that holds it. And if that hand is holding five tools at once, distracted by the very productivity that AI makes possible, then the tool becomes the risk.
The xz-utils backdoor was caught by a human who was paying attention. The Axios compromise was contained by a scanner that happened to be watching. The Anthropic leak was caused by a human who was not.
The last pair of human eyes is not a bottleneck. It is a checkpoint. And we are dismantling it in the name of efficiency, at the precise moment when the adversaries -- state-sponsored, financially motivated, or simply entropic -- are learning how to exploit its absence.
The question is not whether AI will take control. The question is whether we will keep it.
Ramachandran Rajeev Kumar is Chief Executive of Aarksee Group of Companies, a Saudi Arabia-based conglomerate operating across carbon markets, green sciences, technology, and media. He writes on AI governance, democratic reform, and the geopolitics of technology for The Global Federation.
Editor's Note: This article was written with AI research assistance and human editorial oversight. The sources cited are verified as of April 1, 2026. The Axios compromise timeline may evolve as forensic analysis continues.