We Turned Ourselves Into Robots

Last week an AI agent fixed a bug in wBlock that I had been avoiding for six months. wBlock is an ad blocker I wrote for Safari; it has a few hundred thousand users, and this particular issue was one I could never quite get past my own skill issues and muster the willpower to sit down with. The agent handled it in about five minutes, and I felt almost nothing, which is the part that should probably concern me. I study CS and math, I have been writing code since I was a kid (well, a smaller kid, anyway), and not long ago this bug would have been my entire weekend. But each new ability of these models astonishes for about a day and then becomes the baseline, and you forget that last month you couldn’t do it at all. Codex now does the kind of implementation work I used to lose whole weekends to, and Claude Code is arriving at something similar, though I find the fact that it is implemented in React ludicrous; that is a story for another day. I always assumed that writing software would retain some residual value regardless of where I ended up, the way knowing Latin retains a certain dignity even after the fall of Rome. But it is February 2026, I am in college looking for internships, and that assumption is looking increasingly quaint. I have very little sentimentality about any of this, which is probably the most interesting thing I can tell you about myself, though what I do have is a nagging suspicion that the whole situation is less catastrophic than the prevailing mood suggests, because the thing it is replacing was already broken in a way that has been lost on people for a very long time.

Intuition as a moat

If you take the automation thesis seriously, and I do, then there is a question you eventually have to answer, which is why the stock market is still inefficient. Jane Street and HRT and Citadel Securities are among the largest investors in local inference hardware in the world. These are firms that will spend nine figures on a marginal latency advantage, and they have every incentive and every capability to replace human traders with models, and they have not, and nobody there is keeping humans around out of sentiment. The reason is structural: models are, at bottom, interpolation machines. They ingest historical data and project forward, and the stock market, as a system, does not reward interpolation for very long, because it is composed of adversarial human actors who adapt to any exploitable pattern the moment it becomes visible. Algo trading “solved” simple liquidity inefficiencies twenty years ago, and yet somehow there are still immense markets full of liquid instruments with alpha at 12 o’clock. Every time a model closes one gap, the composition of the market shifts and new gaps open. It is a hedonic treadmill in the precise psychological sense: the goalpost moves with every step toward it.
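If it helps to see the treadmill as a mechanism rather than a metaphor, here is a toy sketch in Python. It is entirely made up, with no relation to how any of these firms actually trade: a “model” that interpolates yesterday’s pattern keeps finding that most of the edge has been arbitraged away by the time it acts, while the market’s shifting composition keeps opening small new gaps it has no history for.

```python
import random

# Toy sketch only: a cartoon of the argument above, not a trading strategy.
# An interpolation machine fits yesterday's pattern; an adversarial market
# erodes that pattern once it is traded on, and opens new gaps elsewhere.

random.seed(42)

pattern_strength = 1.0   # exploitable signal currently visible in the data
edges = []

for day in range(12):
    # The "model" fits yesterday's pattern and trades it today.
    realized = pattern_strength + random.gauss(0, 0.05)
    edges.append(realized)

    # Adversarial adaptation: everyone else sees the same trade, and most
    # of the edge is gone by tomorrow.
    pattern_strength *= 0.5

    # The changed composition of the market opens a small new inefficiency
    # somewhere else, which no historical dataset describes yet.
    pattern_strength += random.uniform(0.0, 0.15)

for day, edge in enumerate(edges):
    print(f"day {day:2d}  realized edge: {edge:+.3f}")
```

The printout decays from the original edge toward a small, noisy remainder that never quite reaches zero, which is the whole point: the gap you trained on closes, and the gap that opens next is the one you have no data for.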

What the humans at these firms actually provide is a kind of non-propositional knowledge, what Michael Polanyi called tacit knowing: the capacity to sense that a situation is wrong before you can articulate why it’s wrong. You cannot train this from data because the data is a record of what already happened, and markets are, by construction, a mechanism for pricing what hasn’t happened yet. If the future were deducible from the past, every quant fund on earth would have converged on the same portfolio by now. They haven’t, because trading is closer to reading a room than it is to solving an equation. The principle extends further than finance. Moravec’s paradox, an old observation in AI research, says that the things humans find cognitively difficult, like chess or calculus, turn out to be trivially easy to automate, while the things we do without thinking, like reading a face or knowing when someone is being sarcastic, stay nearly impossible to automate. For those of us who spend most of our lives in front of screens, the paradox cuts deeper every year, because screen work is overwhelmingly the kind of cognition that can be written down as a procedure. The domains where AI struggles most are the ones where human cognition operates through intuition rather than procedure, through unconscious processing that evolution spent hundreds of millions of years optimizing and that we never had to formalize because we never needed to.

If the things AI cannot do are the things that require intuition, then the things it is learning to do so rapidly must be the things we stripped of intuition a long time ago. That is what interests me.

The shape of the river

The popular framing of AI displacement goes like this: after the first industrial revolution, factory workers moved to offices; after automation hit manufacturing, people moved to services; but if AI takes the service jobs too, there is nowhere left to go. It sounds airtight if you don’t think about it for more than thirty seconds. The problem is that it treats the post-displacement job market as a knowable thing at the moment of displacement, which it never was and never has been. When Cartwright’s power loom started putting handweavers out of work, there was no “customer service industry” waiting to absorb them. The absorptive capacity of the economy was invisible to the people living through the disruption because it hadn’t been created yet, and instead emerged as a second-order consequence of the disruption itself. For lack of a better metaphor, you cannot predict the shape of a river by staring at the rock it hasn’t yet carved through.

The Luddites weren’t stupid and they weren’t wrong about their own material conditions. Entire communities in northern England were gutted within a generation. The transition from agrarian to industrial labor produced child labor, 16-hour workdays, and mortality rates in Manchester that would have embarrassed a medieval plague town. My claim is narrower than Pangloss’s: the shape of post-disruption labor has never been predictable from within the disruption, and “there will be no new jobs this time” requires you to be confident you can see something that nobody at any prior inflection point in economic history has managed to see. That is an extraordinarily strong claim, and I don’t think the people making it on Reddit appreciate how strong it is. What I find more interesting is the specific character of what’s being displaced this time, and what it reveals about what we were doing to ourselves all along.

John Henry’s children

There is a folk tale that American kids used to grow up hearing about a man named John Henry, a steel driver who races a steam drill and wins, and then dies from the effort. It is told as a story about the nobility of human labor, but I have always thought the actual moral is bleaker than that: you can beat the machine, but only by destroying yourself in the imitation of one. And I think that is more or less what we have been doing to ourselves since about 1760.

Before the industrial revolution, a cobbler was a craftsman, a blacksmith was something closer to an artist than a laborer, and a weaver’s output bore the mark of the specific person who made it. Skill was embodied, idiosyncratic, irreducible, irreplaceable. Then the machines came and the substance of work turned from craft to operation: you didn’t need to understand leather to work in a shoe factory, you needed to pull a lever at the correct interval. The artisan became the operator, and the operator’s distinguishing trait was reliability, not creativity. The best worker was the one who most closely approximated a machine, and this did not stop at the factory floor but metastasized. What a white-collar job at a Fortune 500 company actually consists of, in practice, is this: you arrive at a fixed time, you sit in a climate-controlled box, you process information according to established procedures, you optimize for throughput metrics that some other department defined. The Shenzhen electronics factories and the San Francisco startups running 996 schedules and the Goldman analysts sleeping under their desks are all doing the same thing, which is compressing a human being into a deterministic function. We have spent a hundred and fifty years stripping work of everything that made it human, and we have called this progress, and now a large language model can do most of it, and we are somehow surprised. We shouldn’t be. AI can replace so many white-collar jobs because we already reduced those jobs to the point where you didn’t need to be a full person to do them. We did the hard part ourselves, and the model is just mopping up.

This is where the despair, I think, curdles into something that might actually be hope, if you look at it from the right angle. If the work that AI is displacing was never really human work to begin with, just people mimicking machines badly enough that real machines could eventually outperform them, then the displacement is less an apocalypse than a correction. The question “what will humans do when AI takes all the jobs” assumes that the jobs were ours in some meaningful sense. Most of them were machine roles that happened to be filled by the only general-purpose intelligence available at the time, which was us. Strip that layer away and what’s left is the stuff that was always distinctly ours, the stuff we’ve been neglecting for two centuries in favor of productivity metrics: pastoral work, teaching, making things, sitting with another person and actually understanding them, the whole sprawling territory of human experience that you cannot reduce to a procedure. Give an AI the full text of every Bible translation, every recorded homily in the Vatican archives, the complete works of Aquinas and Augustine and Bonhoeffer and Tillich. It will produce a sermon that is theologically coherent and completely, unsalvageably dead, because a person goes to church to sit in a room with another human being who has looked at the same suffering they have and arrived at something like faith anyway. The eye contact and the sense that the person in front of you has skin in the game, that is what faith is actually for.

I think we have gotten incredibly far from understanding this. We have been on a long, grinding detour in which a person’s value became identical to their fiscal output, and we bent ourselves into agonizing mechanical shapes to keep up with what the economy demanded of us. Thoreau wrote that “men have become the tools of their tools,” and he was complaining about the railroad. I wonder what he’d make of a world where the tools have gotten good enough to do their own jobs and we’re left standing around trying to remember what we were before we picked them up. Maybe John Henry’s children don’t have to race the machine. Maybe they get to put the hammer down. Who knows.

Rent is still due

None of this helps you pay rent in 2027. I know that.

The short-term picture is ugly, and I’m not going to dress it up for you. There is something I’ve started thinking of as an intelligence bubble, which is the widening distance between people who know what to do with these tools and people who don’t, and it is growing at a rate that should alarm anyone paying honest attention. Some kid backed by YC with a laptop and an API key is building right now what would have taken forty people three years ago. The S&P goes up when corporations announce mass layoffs because the market has never once in its miserable life pretended to care about people, and I don’t know why anyone keeps expecting it to start. The working class in this country is more precarious than at any point since the Second World War, while the founder class is having the best time anyone has ever had being rich, and these two facts exist simultaneously, and nobody important is losing any sleep over the contradiction.

There is another contradiction, though, that I find funnier. These AI companies are pouring hundreds of billions of dollars into building systems that, if they work as intended, make the accumulation of money meaningless. Greed manifests as the hoarding of capital because capital is the universal solvent, but capital is the universal solvent only because it buys labor, and if AI does the labor, then what exactly does capital buy. The AI doesn’t need a salary or a ping pong table in the break room. The entire incentive structure justifying the existence of these companies dissolves the instant their product actually works as advertised, and they are racing toward that instant as fast as they possibly can, and I don’t think anyone in those buildings has sat with that thought for long enough, probably because sitting with it for long enough would require them to stop.

In the meantime, the distribution gets worse, not better. I see it from the inside, and it is worse than most people think. I’m a student, and even from where I sit I can see that many experienced engineers across the industry are not touching these tools, whether out of distrust or ideological attachment to whatever workflow they settled into years ago, and the gap between what someone willing to use them can do and what someone refusing to can do is widening at a rate that should make everyone deeply uncomfortable. The distance between what is technically possible right now and what most institutions are actually doing is so enormous it borders on slapstick, and if you have any understanding at all of what these models can do, you are ahead of nearly everyone in every industry that isn’t explicitly an AI company. This is an accident of paying attention at the right time, and it has a shelf life I cannot see the end of.

I stopped watching YouTube a few weeks ago and replaced it with technical podcasts, infrastructure people and AI researchers talking about what they actually see from inside the labs. Mostly to have something concrete to hold onto when the abstraction starts eating me alive. Everyone online has either decided the world ends in 2027 or that the whole thing is a speculative bubble that pops like crypto, and the people actually building this stuff see something more complicated than either of those, and the complication is what matters, because LLMs are not the end of the story and the capital will move, and if you want to know where, you have to listen to the people deciding where to point it rather than the people speculating from the stands. I’m not sure any of this constitutes a strategy so much as a way of not staring at the ceiling at 1 AM.

Friends in my program have been asking me what they should do. I sat with all of it, and I had the Moravec’s paradox argument ready and the historical case for disruptions always generating new roles invisible at the moment of displacement, the whole apparatus of this essay, and I realized none of it was what any of them actually needed to hear. I could have told them their degrees taught them to think in systems and that the thinking will outlast the notation. That the people who are really in trouble are the ones who spent four years memorizing React hooks without ever asking why anything is designed the way it is, the frameworkmaxxers. That even on the most aggressive timelines there’s a five-to-fifteen-year window where people who understand both the technology and a real-world domain will be absurdly valuable, and after that the whole concept of earning a living probably stops being a coherent question. I believe all of that. But none of it is what people actually need to hear when they’re nineteen and finding out the future they’ve been building toward since they were twelve is not there. I don’t have a framework for that, and I am increasingly suspicious that the frameworks are the problem. The part of being human that none of these systems will reach, sitting with people and helping them figure out how to be alive, is the thing everyone keeps circling without seeing, no matter how many textbooks you pour into the training data. I’m nineteen and what the hell do I know, and advice is what people give when they can’t stand to just be in the room with someone who’s scared.

Anyway

We optimized ourselves into machines, and the real machines showed up, and they’re better at it than we ever were. It reads to me like the end of a very long mistake, though I’ll admit from certain angles it looks like a tragedy and I can’t always tell which. I am saying this from a dorm room with a good internship on the calendar, and I know it lands differently from the inside of a company that just cut a third of its people, and I haven’t worked out how to hold both of those realities at the same time, so I’m just going to set them down next to each other and not pretend they fit together.

I don’t know what comes after. The idea that humans are permanently obsolete is wrong for the same reason it has always been wrong, which is that it needs the world to hold still and the world does not hold still and never has. The idea that everything will be fine requires you to ignore the body count of every transition that came before this one. The closest thing I have to a reconciliation is a suspicion that whatever humans end up doing will have nothing to do with productivity as we currently measure it and a lot to do with the things we stopped caring about when we decided that a person was worth what they produced per hour, things like craft and the willingness to actually be in a room with someone who is suffering without reaching for your phone. I held that suspicion before I knew what a language model was. It might just be sentimentality dressed up as insight, and I genuinely cannot tell from where I’m sitting, and I’m starting to think the distinction matters less than I used to believe.

The wBlock bug sat in my backlog for six months and I let it sit there. An AI fixed it in five minutes and I felt nothing, and I think the reason I felt nothing is that the fixing was never the point. The point was that I decided one day I wanted the thing to exist, and then I made it exist, and no model had anything to do with that decision. I could live with the five minutes forever, but forgetting why I opened the editor is the thing that would actually get me.

I should probably stop writing.