Brain Fuel · March 2026
What Happens to Humans?
Two of the sharpest thinkers of early 2026 ask the same question and arrive at strikingly different answers.
I — The Question They Share

In January 2026, two writers took up the same question independently. Dario Amodei, CEO of Anthropic, published a 19,000-word essay called The Adolescence of Technology. Ben Thompson, the most influential technology analyst writing today, published a shorter piece called AI and the Human Condition. It was a response to an essay that had been circulating over the holidays — Trammell and Patel's Capital in the 22nd Century.

Both begin from the same place. AI is not a specific tool replacing a specific skill. It is becoming something closer to a general substitute for human cognitive labour — capable, in Amodei's framing, of acting like a "country of geniuses in a datacenter": fifty million entities smarter than any Nobel laureate, operating ten to a hundred times faster than a human, running in parallel, never sleeping.

Both treat this not as a distant speculation but as a near-term reality. Amodei thinks powerful AI — AI that can genuinely do most of what humans do — could arrive in one to two years. Thompson does not contest the timeline. The question, for both of them, is not whether this happens. It is what happens to the people who were doing the jobs.

II — The Same History, Read Differently

Both writers reach for the same historical anchor: agriculture. In 1810, 81% of Americans worked on farms. Today, less than 1% do. Machines replaced them. And yet the economy did not collapse into mass unemployment. It created factories, then offices, then the entire apparatus of knowledge work — jobs that could not have been imagined by the farmers being replaced.

Thompson reads this as the pattern that will repeat. "The history of humans is the continual creation of new jobs to be done — jobs that couldn't have been conceived of before they were obvious, and which pay dramatically more than whatever baseline existed before technological change." He is not dismissive about the disruption. He simply believes that every time machines have taken old jobs away, humans have invented new ones that nobody saw coming.

Amodei reads the same history and sees it ending. He identifies four reasons AI will not follow the agricultural precedent. Speed: previous transitions took generations; this one is happening in years. Breadth: prior automation took narrow skills; AI takes the full range of cognitive ability simultaneously. Slicing by ability: AI advances from the bottom of the capability ladder upward, which means the people hit first are those with less cognitive horsepower — and they have nowhere obvious to go. Adaptability: every gap AI cannot fill today, it fills within months — there is no niche that stays safe long enough for a workforce to migrate into it.

The crux

Thompson's optimism depends on the historical pattern holding: machines destroy old work, humans invent new work, the cycle continues. Amodei's concern is that AI breaks the cycle entirely — not by being a bigger version of past automation, but by being something fundamentally different from it.

III — The Human Touch Debate

The sharpest disagreement is about what machines can never replace. The phrase "human touch" is often invoked as a comfortable shield — the idea that empathy, creativity, physical presence, and genuine connection will always require a person. Amodei, unusually, refuses the comfort.

"I'm a little skeptical that it will be enough to offset the bulk of the impacts," he writes. He cites his own sister's experience — struggling with a difficult pregnancy, finding that Claude gave her better answers and more patience than her actual care providers. "Many people report that it is easier to talk to AI about their personal problems than to talk to a therapist — that the AI is more patient." He acknowledges the counterargument. He just isn't sure it covers enough ground to matter for the labour market as a whole.

Thompson holds the opposite position, but his argument is stranger and more interesting than simply "people prefer people." He points to what might be the most stubborn human preference there is.

Amodei

"I'm sure there are some tasks for which a human touch really is important, but I'm not sure how many — and here we're talking about finding work for nearly everyone in the labour market."

Thompson

"I have no doubt that there will be human-like robots with which you can have sex; I also have even stronger conviction that the overwhelming preference of humans will be to have sex with other humans."

Thompson's logic extends from this base outward: if the most fundamental human preference is for other humans, then everything that flows from it — courtship, status, beauty, community, the desire to own something made by a real person — is an economy in its own right. Not a marginal one. A central one. One that AI, by definition, cannot supply.

He also notices something structural. AI and human content operate as inverses of each other. AI scales compute to deliver personalised results to individuals — one intelligence reaching many. Human creators reach many people through the fact of their uniqueness — one person, resonating outward. "What I have to say is by definition unique to me, and that is interesting precisely because I am flesh-and-blood, not a robot." These two value systems run on different logic, and Thompson thinks the human one will hold precisely because AI cannot fake being a person.

IV — The Inequality Trap

Here the two converge in their concern, even as they approach it from different directions. Trammell and Patel — whose essay Thompson is partly responding to — made an argument that stuck with a lot of people over the holidays. Throughout history, they said, extreme inequality has been self-correcting: wealthy people can buy more machines, but machines are useless without workers to run them, so workers stay in demand and wages hold up. But AI breaks that. Once the machine can do the thinking too, capital no longer needs labour at all. Wealth flows to whoever owns the AI at the moment of transition — and stays there.

Amodei agrees with the structural concern, and he quantifies it in terms that are difficult to dismiss. We are already, he notes, at historically unprecedented wealth concentration — before AI has had most of its economic impact. Elon Musk's current net worth exceeds the Rockefeller benchmark (2% of US GDP) that defined the Gilded Age peak. A world of AI-driven GDP growth at 10–20% annually could produce personal fortunes "well into the trillions." At that point, he writes, "the debates we have about tax policy today simply won't apply."
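The arithmetic behind "well into the trillions" is easy to check. As a rough illustration — the 15% growth rate and the $500 billion starting fortune below are assumptions chosen for the sketch, not figures from either essay — a fortune that merely compounds along with the economy crosses the trillion mark within a few years:

```python
# Back-of-envelope: how fast a fortune grows if it compounds with the
# economy at the rates Amodei cites (10-20% annual GDP growth).
# The 15% rate and $500B starting point are illustrative assumptions.

def years_to_reach(start: float, target: float, annual_rate: float) -> int:
    """Count full years of compounding until `start` reaches `target`."""
    years = 0
    value = start
    while value < target:
        value *= 1 + annual_rate
        years += 1
    return years

# A $500B fortune tracking 15% annual growth:
print(years_to_reach(500e9, 1e12, 0.15))   # reaches $1T in 5 years
print(years_to_reach(500e9, 10e12, 0.15))  # reaches $10T in 22 years
```

At historically normal growth of 2–3% a year, the same doubling takes roughly 25–35 years — which is why Amodei argues today's tax-policy debates stop applying at these rates.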

Thompson arrives at inequality from a different angle — psychological rather than structural. He cites Louis C.K.'s 2008 observation: everything is amazing and nobody's happy. The comedian's point, which Thompson reads as a precise insight about human nature, is that happiness is relative, not absolute. Technology makes everyone richer in absolute terms — but it also shows you, in real time, how much richer other people are. When everyone can fly, flying no longer feels special. What matters is not what you have but what someone else has that you don't.

This is the hidden assumption buried inside Trammell and Patel's argument, Thompson argues. They assume the negative parts of human nature survive into the AI age — jealousy, the drive for status, the pain of comparison. But if you make that assumption, you must also allow the positive parts to survive: the desire for connection, for beauty, for human company. You cannot selectively keep the resentment while discarding the love.

V — The Underclass and the Renaissance

Their most divergent visions are also their most vivid ones, and they land on opposite poles of the same question: what does the world look like for the people who are not at the top of the cognitive-ability distribution?

Amodei's fear is specific. AI, he observes, is advancing upward through the capability ladder — from mediocre to strong to exceptional in each domain it enters. This means the disruption first hits people with lower levels of cognitive ability, whose skills are earliest to be matched or surpassed. But unlike previous technological shifts, where disruption was narrow enough that retraining into adjacent roles was possible, this disruption is broad. "AI isn't a substitute for specific human jobs but rather a general labour substitute for humans." The cognitive profile required for most new jobs will be matched by the same AI that displaced the old ones. He names his concern directly: the risk of a permanent cognitive underclass — people with nothing economically useful left to offer, with nowhere to retrain towards.

Thompson's hope is equally specific, and it comes from an unexpected place: beauty. He notes that before the industrial revolution, labour was abundant and cheap, which made it economically viable to devote thousands of person-years to intricate buildings — cathedrals, palaces, decorated public spaces. The industrial revolution made labour expensive, and beauty in the built environment largely vanished. "How is it that we built intricate cathedrals hundreds of years ago, and forgettable cookie-cutter crap today?"

His argument: AI devaluing labour might reverse this. If human work is no longer the scarce economic input, perhaps humans are freed to create beauty again — not because it is economically efficient, but because it is humanly meaningful. And human-made things, he suggests, will carry a premium of provenance that AI-made things cannot carry, precisely because of where they came from. "I expect the widespread availability of high-quality AI art to actually make human art more desirable and valuable, precisely because of its provenance." He is encouraging his own daughter to pursue art.

The unresolved question

Thompson's renaissance is real as a possibility, but it is a vision of what the fortunate do with the abundance. Amodei's underclass is a vision of what happens to those who have nothing to offer the abundance economy. Both can be true simultaneously.

VI — Where They Converge

For all their differences, the two essays agree on more than they disagree. Both accept that the capability transition is real and accelerating. Both think some form of redistribution will eventually be necessary. Both are wary of predictions that rely on human institutions responding wisely and quickly to something they do not yet understand.

Amodei is explicit about this: the interventions he proposes — better labour market data, steering enterprises toward innovation rather than cost-cutting, progressive taxation, private philanthropy — are, he says, all ways of "buying time." In the end, AI will be capable of everything. The question is whether the transition is managed well enough that we arrive at the other side with functional societies rather than broken ones.

Thompson ends up somewhere stranger and more personal. His essay closes not with policy but with philosophy — with the Louis C.K. observation turned into an argument. If you believe human nature is real and persistent, then you must believe all of it persists: not only the jealousy and the status anxiety, but also the desire for connection, for things made by human hands, for the texture of another person's presence. The economy that emerges from that desire is the one he is betting on.

Neither writer pretends to certainty. Amodei acknowledges he might be wrong about the speed, wrong about the breadth, wrong about comparative advantage. Thompson acknowledges he might be engaging in wishful thinking. What both share, finally, is the conviction that the answer is not predetermined — that it depends on choices being made now, in labs and boardrooms and legislatures, by people who are still mostly not paying attention.

The question both leave open

The ratio between Amodei's underclass and Thompson's renaissance is not fixed by technology. It is set by the decisions that precede it. What that ratio turns out to be is the most important open question in economics right now.

Sources

Dario Amodei — The Adolescence of Technology
darioamodei.com · January 2026 · ~19,000 words. Read the intro and Section 4 ("Player Piano") for the economic argument in full.

Ben Thompson — AI and the Human Condition
stratechery.com · January 5, 2026. Responds to Trammell & Patel's Capital in the 22nd Century, which is itself worth reading as the essay that started the conversation.