The Internet philosopher Eliezer Yudkowsky has been predicting the end of the world for decades. In 1996, he confidently declared that the singularity — the moment at which computers become more “intelligent” than humanity — would happen in 2021, though he quickly updated this to 2025. He also predicted that nanotechnology would suddenly emerge and kill everyone by 2010. In the early aughts, the self-described “genius” claimed that his team of “researchers” at the Singularity Institute would build an artificial superintelligence “probably around 2008 or 2010,” at which point the world would undergo a fundamental and irreversible transformation. 

Though none of those things have come to pass, that hasn’t deterred him from prophesying that the end remains imminent. Most recently, he’s been screaming that advanced AI could soon destroy humanity, and half-jokingly argued in 2022 that we should accept our fate and start contemplating how best to “die with dignity.”

Yudkowsky carries on his indefatigable doomsaying in a new book, co-written with his fellow apocalypticist Nate Soares, “If Anyone Builds It, Everyone Dies.” The conclusion is in the title: If anyone, anywhere, builds an artificial superintelligence, then everyone on Earth will somehow “die instantly.”

Hence, we should do everything possible to stop companies like OpenAI and DeepMind from building ASI, even if that requires military action that risks killing billions of people in a thermonuclear war. Yudkowsky and Soares are so serious about this that their research organization, the Machine Intelligence Research Institute, doesn’t offer 401(k) matching for employees. What’s the point if ASI is right around the corner and, once here, will annihilate us?

Despite many negative reviews, the book has become a New York Times bestseller. What all these reviews miss is the fact that the book’s content is less interesting than what it leaves out. Over the course of 230 pages, the authors are careful not to reveal the disturbing and deeply anti-human worldview that underlies and motivates their pleas to stop the ASI race. Though they want readers to think otherwise, the authors are no friends of humanity. They talk about ASI as if we should never build it, and frequently warn of “human extinction.” But, in fact, they don’t actually care about whether our species goes extinct, nor are they opposed to building ASI.

The book is a clumsily written piece of propaganda for what I have elsewhere called the “TESCREAL” ideologies — a constellation of techno-futuristic worldviews that imagine a future utopia in which we create a new posthuman species to usurp humanity, spread beyond Earth and fill the universe with what might be called digital space brains. If this sounds bizarre, that’s because it is. Yet this “techno-utopian” vision is integral to the book’s mission, even if the book itself slyly avoids the subject.

The dishonesty starts on the cover, with the title, which most readers will interpret literally. Why wouldn’t they? However, what the authors really mean is, “If anyone builds ASI in the near future, then everyone will die.” Yudkowsky and Soares want to build ASI as quickly as possible; they just don’t think we’re ready for it — yet.

To understand this, consider the difference between “AI capabilities” and “AI safety” research. The former aims to create a superintelligent computer or algorithm. The latter aims to figure out how to ensure that this ASI is controllable by those who build it. 

Yudkowsky and Soares believe that if AI safety research trails behind AI capabilities research, the default outcome will be the death of everyone on Earth. But, they profess, if AI safety research leads the way, the controllable ASI that results will bring about the aforementioned utopia marked by immortality, mind-uploading, space colonization and endless pleasures beyond our wildest techno-fantasies. A new, superior race of digital space brains will then “fill the stars with fun and wonder,” as Yudkowsky and Soares put it in almost childlike language.

This is what a controllable ASI promises, and it’s why the authors want ASI ASAP — though they don’t convey this in their book. They do, however, state this view elsewhere, as in a footnote of a 2024 report from the Thiel-funded Machine Intelligence Research Institute, which Yudkowsky co-founded and Soares now directs. The report states: “We remain committed to the idea that failing to build smarter-than-human systems someday would be tragic and would squander a great deal of potential. We want humanity to build those systems, but only once we know how to do so safely.”

Contrast this with a claim from their book: “We’re not here to tell you that you’re doomed. … Artificial superintelligence doesn’t exist yet. Humanity could still decide not to build it.” A straightforward reading would suggest that ASI should never be built, which is not the authors’ view. To the contrary, they would no doubt say that never building ASI would itself be an existential catastrophe, because reaching “utopia” is likely impossible without a controllable ASI to take us there.

Such talk of utopia might seem harmless — if also a bit wacky — but it’s actually quite dangerous. That’s because realizing this utopia would almost certainly entail the extinction of our species, the very thing that Yudkowsky and Soares seem to be warning about, as when they write: “Artificial intelligence poses an imminent extinction risk to humanity” and “humanity should not go extinct and be replaced by something bleak.” Do they actually care about “humanity” not “going extinct”?

The devil is in the details. What the authors don’t tell readers is that they’re using the terms “extinction” and “humanity” in an idiosyncratic way. For them, humanity would include not just our species, Homo sapiens, but also whatever technologically modified posthuman descendants we might have. (It is these posthumans, rather than us, who will someday populate utopia.) It follows from this definition that if our species were to create such descendants and then die out, human extinction would not have occurred — because those descendants would also count as humanity. 

Another linguistic trick concerns the word “extinction.” In the academic literature, “terminal extinction” refers to scenarios in which our species dies out, whereas “final extinction” refers to a distinct scenario in which our species dies out without leaving behind any posthuman descendants. The implications parallel those spelled out above: If our species were to perish next year without being replaced by posthumans, we would have undergone final extinction. But if our species were to create these posthumans and then die out, we would have merely undergone terminal extinction. This may look like an annoyingly academic point, but it’s crucial for understanding AI doomers like Yudkowsky and Soares.

All indications are that the authors only care about final extinction — not terminal extinction. In other words, they care about the survival of our species insofar as this is necessary to create digital space brains to succeed us. What really matters are those space brains — which they would count as “humanity.”

The fact that our species itself doesn’t matter to them is evident in a discussion from late last year between Yudkowsky and the computer scientist Stephen Wolfram. “It’s not that I’m concerned about being replaced by a better organism,” Yudkowsky told Wolfram. “I’m concerned that the organism won’t be better.” In a subsequent exchange with the outright extinctionist Daniel Faggella, Yudkowsky made this jaw-dropping statement:

If sacrificing all of humanity were the only way, and a reliable way, to get … godlike things out there — superintelligences who still care about each other, who are still aware of the world and having fun — I would ultimately make that trade-off.

Yudkowsky insists that this isn’t “the trade-off we are faced with,” but if it were, he’d happily make it. In yet another conversation, Yudkowsky proclaims that “a glorious transhumanist future” awaits if we play our cards right — where “playing our cards right” means, in part, building a controllable ASI. He then says:

I have basic moral questions about whether it’s ethical for humans to have human children, if having transhuman children is an option instead. Like, these humans running around? Are they, like, the current humans who wanted eternal youth but, like, not the brain upgrades? Because I do see the case for letting an existing person choose, “No, I just want eternal youth and no brain upgrades, thank you.” But then if you’re deliberately having the equivalent of a very crippled child when you could just as easily have a not crippled child …

In other words, once posthuman children become possible, having a normal “human child” would be the equivalent of having a “crippled child” — deeply offensive language, by the way, that I would never write if I weren’t quoting a eugenicist like Yudkowsky. He continues his musing about the future of our species:

Like, should humans in their present form be around together? Are we, like, kind of too sad in some ways? … I’d say that the happy future looks like beings of light having lots of fun in a nicely connected computing fabric powered by the Sun, if we haven’t taken the sun apart yet. Maybe there’s enough real sentiment in people that you just, like, clear all the humans off the Earth and leave the entire place as a park. … Yeah, like … That was always the [thing] to be fought for. That was always the point, from the perspective of everyone who’s been in this for a long time.

The line about “nicely connected computing fabric” is a reference to future posthumans living in computer simulations, which would be powered by Dyson swarms that harvest nearly all of the sun’s energy output. Yudkowsky notes that this was “always” the thing that everyone in the AI safety camp — a branch of the TESCREAL movement that he largely founded — has been fighting for: a “glorious transhumanist future” full of posthuman space brains living lives of endless ecstasy in huge virtual-reality worlds spread throughout the universe.

In this utopia, though, what becomes of our species? What’s our fate? The answer is that we’d be sidelined, marginalized, disempowered and ultimately eliminated. “Clear all the humans off the Earth and leave the entire place as a park,” Yudkowsky callously suggests. As long as what supersedes us is “better,” what’s the problem? He’d even sacrifice “all of humanity” to bring about this “utopia”!

This is about the most extreme version of eugenics imaginable, since it’s not about improving the human species as such (as with traditional eugenics), but about replacing us entirely with “superior” posthumans. We can call this “digital eugenics,” an idea that is now widely embraced within Silicon Valley.

The disturbing upshot is that if we build an uncontrollable ASI, then our species will die out. Yet if we build a controllable ASI, then our species will also die out. The outcome is the same either way, meaning that neither option looks appealing for pro-humanity folks like me. Here we see the deeply anti-human vision — an insidious variant of Silicon Valley pro-extinctionism — that underlies Yudkowsky and Soares’ entire worldview, though the authors are careful not to reveal such dark secrets to unsuspecting readers.

In sum, casual readers might assume that Yudkowsky and Soares care about preserving humanity and preventing us from going extinct. But the authors aren’t using “humanity” and “extinct” the way most of us understand those terms: They’re talking about preserving the possibility of posthuman space brains taking over the world (and universe). The disempowerment and disappearance of our species is ultimately part of the desired plan for the future. Once ASI arrives, our species will become a useless vestige of a bygone era that would only waste valuable resources that our space-brain descendants could use for much “grander” things.

I actually agree with Yudkowsky and Soares that an uncontrollable ASI would probably annihilate everyone on Earth by default (though I don’t think we’re anywhere close to ASI). If we build a system that can genuinely outmaneuver us in every important way, process information much faster than we can and solve problems too complex for us to understand, then why would anyone expect humanity to persist for long? An immensely powerful technology that we cannot control would likely destroy us as an unintended consequence of its actions, perhaps in the same way that we destroy ant colonies because we see them as inferior beings that sometimes stand in the way of a new road for a suburban neighborhood.

Where I strongly disagree with the authors is on the question of whether ASI can ever be controlled. Yudkowsky and Soares believe controllability is feasible if AI safety research leads the way over AI capabilities research. “The ASI alignment problem,” they write, “is possible to solve in principle.” But I see no reason to accept this claim (and they provide no convincing reasons for believing it). How could we possibly ensure that a dynamic, constantly evolving cluster of self-improving algorithms remains under our control for more than a flash? As the computer scientist Roman Yampolskiy compellingly puts it:

We don’t have static software. We have a system which is dynamically learning, changing, rewriting code indefinitely. It’s a perpetual motion problem we’re trying to solve. We know in physics you cannot create [a] perpetual motion device. But in AI, in computer science, we’re saying we can create [a] perpetual safety device, which will always guarantee that the new iteration is just as safe.

That seems like a fool’s errand. There is no perpetual safety device that can protect us — or even our posthuman descendants, if we were to create them — from eventual annihilation. Hence, annihilation is the inevitable outcome if ASI is ever built by anyone at any point.

The inescapable conclusion is one that Yudkowsky and Soares, given their techno-utopianism, would roundly reject: We should implement a permanent ban on all efforts to build ASI, as groups like Stop AI have vociferously argued. There is no such thing as “controllable” ASI, and even if ASI were controllable, the realization of Yudkowsky’s “utopia” would itself precipitate our extinction, as noted above. The entire project of creating “godlike” AI is fundamentally misguided and extremely dangerous. It should be abandoned immediately, though at this point there’s so much money in the mix that it’s easier to imagine an end to the world than an end to the ASI race.

Integral to this alternative approach, which I advocate, must be a shift away from the TESCREAL worldview that Yudkowsky and Soares champion. That means embracing what might be called a true affirmation of life, here and now, on our spaceship Earth: Planet A. Rather than moving fast and breaking things, we should move slow and build things. Rather than seeing our species as a conduit through which the “digital world of the future” will be born, we should value our species as an end in itself.

According to one study, approximately $1.5 trillion will have been spent on the race to build ASI this year alone. Imagine for a moment if that obscene heap of cash had been spent instead on restoring Earth’s ecosystems, cleaning up our pollution, mitigating climate change, eliminating global poverty, ensuring that everyone has access to free health care and making the world a more livable place for all. Imagine that.

I think about this as follows: Here we are on a magnificently beautiful planet, a twirling orb in space, painted with greens and blues, bustling with complex ecosystems full of innumerable living creatures, intricate and exquisite, some of which (like the octopus) we are only beginning to understand.

But this is not enough for the techno-utopians. They want more — infinitely more. They want to extend the ethos of extractive techno-capitalism into the stars, plunder the vast resources of the cosmos and build massive computer simulations full of trillions of digital space brains — powered by Dyson swarms. When they look up at the stars, they don’t see beauty in the pristine firmament but a vast reservoir of untapped resources to be exploited for the purpose of maximizing profit and “value.”

I agree with Yudkowsky and Soares that we should halt efforts to build ASI, but not to buy time for “AI safety” research to catch up. We should instead stop it because we love humanity, cherish our exquisitely marvelous planet and care about the well-being of future humans and our nonhuman companions alike.
