Along with inflation and the state of democracy, Americans are increasingly worried about human extinction. A survey from 2017 found that roughly 40% of the U.S. public thinks “the odds that global warming will cause humans to become extinct are 50% or higher.” Another survey published last year reports that 55% of us are either “very worried” or “somewhat worried” that advanced artificial intelligence “could eventually pose a threat to the existence of the human race.”

These concerns are not irrational. After all, the overwhelming consensus among climate scientists is that climate change will have catastrophic consequences in the coming decades. The people running artificial intelligence companies like OpenAI, meanwhile, are explicit that the technologies they’re building could lead to the annihilation of our species. In a 2023 interview, the CEO of OpenAI, Sam Altman, said that “the bad case — and I think this is important to say — is, like, lights out for all of us” if advanced AI goes awry.

Most of us would say that human extinction would be rather bad, for one reason or another. But not everyone would agree. 

What kind of person prefers human extinction over continued existence? There are a few obvious suspects. One is the “philosophical pessimist,” who argues that the world contains so much human suffering that the nonexistence of our species is better than its continued existence. Another is a certain stripe of radical environmentalist, who claims that humanity is so destructive to the biosphere that only our extinction can save what remains of the natural world.

Then there is a third group: people who aren’t bothered by the possibility of human extinction, some of whom actively hope to bring it about in the coming decades. They represent a more dangerous and extreme form of pro-extinctionist ideology, one that is fairly widespread within Silicon Valley. In fact, some of the most powerful people in the tech world belong to this group, among them Google co-founder Larry Page.

To understand this group, it’s important to recognize the particular type of human extinction its members hope to bring about. The philosophical pessimists and radical environmentalists want our species to disappear so that there are no more creatures like us in the future. In contrast, people in this third group imagine our species disappearing by being replaced by a new species of what some call “posthuman” beings, which could take the form of “intelligent machines” or “AIs.” In their view, if humanity stopped existing because these intelligent machines took over the world in two decades, that would be a good thing — or, at the very least, nothing to bemoan. Consequently, these people argue that we shouldn’t resist the AI takeover, even if that means the end of everything we hold dear today.

Page has argued that “digital life is the natural and desirable next step in … cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.” On this account, you could see human beings as the crucial link between two worlds: the biological world that exists right now, and the digital world run by intelligent machines that will exist in the future. By building these intelligent machines — or digital minds — we are creating our successors, who will inaugurate the next stage in cosmic evolution.

According to Page, the advance of cosmic evolution cannot be — and must not be — stopped, so the best thing to do is build these digital successors and then step aside. Once we do this, our successors will take over and begin to “spread throughout our Galaxy and beyond,” establishing a kind of “utopian” world in which biological life has been marginalized and perhaps even eliminated entirely. When someone challenged Page’s “digital utopianism” (as one commentator called it), Page accused that person of “speciesism” — of treating “certain life forms as inferior just because they [are] silicon-based rather than carbon-based.” These AIs are, Page and his cohort contend, “life” no less than we are, and since they’ll be more advanced than us in nearly every way, we should willingly hand the baton off to them.

It is worth noting that Google owns DeepMind, one of the most powerful AI companies in the world, whose explicit goal is to build superintelligent machines — the very technologies that Page imagines will take our place in the universe.

Page is far from alone in his “utopian” visions of the future. Consider Hans Moravec, who is currently at the Robotics Institute of Carnegie Mellon University. In a 1989 article, he describes himself as “an author who cheerfully concludes that the human race is in its last century, and goes on to suggest how to help the process along.” According to Moravec, we are building machines that will soon “be able to manage their own design and construction, freeing them from the last vestiges of their biological scaffolding, the society of flesh and blood humans that gave them their birth.” He declares that “this is the end,” because “our genes, engaged for four billion years in a relentless, spiraling arms race with one another, have finally outsmarted themselves” by creating AIs that will take over and replace humanity. Rather than seeing this as a cause for gloom, Moravec argues, we should welcome this new phase of “post-biological life” in which “the children of our minds” — i.e., the AIs that usurp us — flourish.

Other computer scientists have promoted the same view. Richard Sutton, who is highly respected within a subfield of AI called “reinforcement learning,” argues that the “succession to AI is inevitable.” Though these machines may “displace us from existence,” he tells us that “we should not resist [this] succession.” Rather, people should see the inevitable transformation to a new world run by AIs as “beyond humanity, beyond life, beyond good and bad.” Don’t fight against it, because it cannot be stopped. Similarly, another leading AI researcher named Jürgen Schmidhuber, director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland, says that “in the long run, humans will not remain the crown of creation. … But that’s okay because there is still beauty, grandeur, and greatness in realizing that you are a tiny part of a much grander scheme which is leading the universe from lower complexity towards higher complexity.” 

Again, the claim is that we should not resist our AI replacements, but should instead recognize that we play a small but crucial role in cosmic evolution — the critical link between the biological and digital worlds.

These are just a few examples of an ideology that some call “accelerationism,” or the view that we should “accelerate AI development as rapidly as possible” while opposing “restrictions on the development or proliferation of AIs.” As an academic paper from last year notes, “this sentiment is alarmingly common among many leading AI researchers and technology leaders, some of whom are intentionally racing to build AIs more intelligent than humans.” In fact, over the past several months, an especially insidious version of accelerationism has emerged in the form of so-called “effective accelerationism,” abbreviated as “e/acc.” The core claim of e/acc is that we must colonize space and build a sprawling, multigalactic civilization, and the way to do this is by developing superintelligent AIs as soon as possible. If humanity perishes in the process, so be it — all that matters is that these machines are conscious and able to fulfill our grand, cosmic destiny of spreading beyond Earth and taking control of the universe.

E/acc has recently gained a significant following. The venture capitalist Garry Tan, who is currently the CEO of Y Combinator, proudly wears the “e/acc” label, as does the right-wing tech billionaire Marc Andreessen. The group’s intellectual leader (using the word “intellectual” loosely) is a physicist named Guillaume Verdon, who goes by “Beff Jezos” on X, formerly Twitter.

At times, members of the e/acc movement have said that they are not, in fact, pro-human extinction. Last summer, Tan posted on X that “e/acc is not ‘replace humans with robots’” but instead promotes the idea that “more tech means more humans, more prosperity, but also more AIs.” An issue of the e/acc newsletter says something similar. In response to the question “Do you want to get rid of humans?,” the newsletter states: “No. Human flourishing is one of our core values! We are humans, and we love humans.” This idea was reiterated by Andreessen, who says that “we believe the techno-capital machine is not anti-human — in fact, it may be the most pro-human thing there is. It serves us. The techno-capital machine works for us. All the machines work for us.”

Such claims might just be marketing e/acc to the public, though, since at other times e/acc members have been explicit that humanity might not have any place in the future that they envision. For example, a different issue of the e/acc newsletter states that the ideology “isn’t human-centric — as long as it’s flourishing, consciousness is good.” In December of 2022, Beff Jezos — aka Guillaume Verdon — was asked on X, “in the e/acc manifesto, when it was said ‘The overarching goal for humanity is to preserve the light of consciousness,’ this does not necessarily require the consciousness to be human in essence, is that correct?” Verdon’s response was short and to the point: “Yes. Correct,” to which he added that he’s “personally working on transducing the light of consciousness to inorganic matter.”

The following year, when the topic came up during a podcast interview, he argued that we have a genetically based preference for our “in-groups,” whether that is “our family, our tribe, and then our nation, and then ‘team human’ broadly.” But, he said,

if you only care about your team in the grand scheme of things, there’s no guarantee that you always win. … It’s not clear to me why humans are the final form of living beings. I don’t think we are. I don’t think we’re adapted to take to the stars, for example, at all, and we’re not easily adaptable to new environments — especially other planets … And so, at least if I just look on the requirements of having life that becomes multi-planetary we’re not adapted for it, and so it’s going to be some other form of life that takes to the stars one way or another.

While it is true that, in the very long run, our species is fated to disappear, it’s crucial to note that Verdon explicitly advocates building superintelligent machines in the very near future — as soon as possible. But what exactly should we expect to happen once these machines are built? Obviously, they’re going to take control, for the same reason that humans dominate the globe rather than chimpanzees or gorillas. Would that be bad, according to the e/acc worldview? Not at all: if these machines were conscious and proceeded to fulfill our cosmic destiny by plundering the universe for its resources, that would be all that matters — even if it meant that Homo sapiens disappeared in the process. As Verdon admitted above, spreading the light of consciousness across the cosmos does not require our AI progeny to be “human in essence.” Humanity is just a steppingstone from our present era of biology to this glorious future run by machines.

Everyone mentioned above falls on the spectrum of “pro-extinctionism,” right next to the philosophical pessimists and radical environmentalists. They are all, to borrow a phrase from the scholar Adam Kirsch, “revolting against humanity.” However, a big difference between the accelerationists, on the one hand, and the philosophical pessimists and radical environmentalists, on the other, is that most pessimists and environmentalists insist that our extinction be voluntary. In contrast, the accelerationist view is perfectly compatible with humanity being usurped by AIs in an involuntary, perhaps even violent, manner. This usurpation is, on their view, the inevitable next step in cosmic evolution, so it doesn’t matter how much we might protest the coming revolution: sooner or later — but probably sooner — the AIs will take over and our species will be relegated to the margins or eliminated entirely.

The situation is made even more complicated by the fact that many of the people who are most vocally opposed to “human extinction” in the public discussion are themselves in favor of, or indifferent to, human extinction. How does that make sense? I’ll explain in a moment. The first thing to get clear is that these people are the so-called transhumanists and longtermists. If you talk to a member of these groups, they’ll tell you that preventing the extinction of humanity should be our number one priority this century (and beyond). Human extinction, they say, is the most obvious type of “existential catastrophe,” and as the influential transhumanist and longtermist Nick Bostrom writes, existential catastrophes are the “one kind of catastrophe that must be avoided at any cost.”

The catch — the sneaky move on their part — is that these people define “humanity” in a very unusual way. For most of us, “humanity” means our particular species, Homo sapiens. If “humanity” dies out, then our species no longer exists. In contrast, for these transhumanists and longtermists, “humanity” refers to both our species and whatever “posthuman” descendants we might have, so long as they possess certain capacities like consciousness. (Yes, that means that our “posthuman” descendants would also be “human,” which is obviously confusing!) So, imagine a future in which there are no more Homo sapiens, but there exists a population of posthuman beings — intelligent machines or cyborgs of some sort — that have replaced us. On this expanded definition, “humanity” would still exist, even though our species does not. This means that Homo sapiens could die out without “human extinction” having occurred. Very sneaky indeed.

What’s more, transhumanists and longtermists believe that it’s important to actually create a new species of posthumans, and many would say that if Homo sapiens disappears in the process, so much the better. For example, Bostrom’s colleague Toby Ord argues in his book “The Precipice” that our ultimate destiny in the universe is to fulfill our “longterm potential” over the coming millions, billions and trillions of years. This means colonizing space and creating a ginormous civilization that spans many galaxies — the very same goal of accelerationism. But Ord also says that fulfilling our “potential” will require us to reengineer our species. In his words, “rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today,” since “forever preserving humanity as it now is may also squander our legacy, relinquishing the greater part of our potential.”

When Ord claims that our top global priority should be to prevent “human extinction,” he’s not talking about keeping Homo sapiens around. Rather, the survival of our species matters insofar as it’s necessary to create or become a new posthuman species: if we were to die out next week, for example, that would prevent us from creating posthumanity, which means that we would have failed to realize “our longterm potential.”

And what if we survive for long enough to create our posthuman successors? What would happen to Homo sapiens once posthumans rule the world? Members of our species would obviously be marginalized or, more likely, eliminated, because why keep around an inferior species of humans when a far better version exists? Some transhumanists refer to members of Homo sapiens who persist into the posthuman era as “legacy humans,” who might be kept in pens or as pets, though it seems more likely that we’d simply disappear. And if that were to happen, would “human extinction” have occurred? No, because posthumans are humans, by their idiosyncratic definition!

To summarize: if “humanity” means Homo sapiens, then the transhumanists and longtermists are mostly indifferent to human extinction. Some are even in favor of our extinction, as when a philosopher named Derek Shiller argues that, if posthuman lives could be “better” than our lives, we should try to create these posthumans and then actively bring about our own extinction. “It is plausible that in the not-too-distant future,” he writes, “we will be able to create artificially intelligent creatures with whatever physical and psychological traits we choose. Granted this assumption, it is argued that we should engineer our extinction so that our planet’s resources can be devoted to making artificial creatures with better lives.” Hans Moravec himself was an early transhumanist who, as we saw, “cheerfully” hopes to catalyze “the end” of our species by replacing us with intelligent machines.

However, if one defines “humanity” so that it includes these posthuman beings, then the transhumanists and longtermists very much do care about avoiding “human extinction,” even if our species itself gets the boot along the way. So, don’t be fooled by the language here: when these people say that they care about avoiding human extinction, they aren’t expressing their commitment to “Team Human,” a term popularized by the media theorist Douglas Rushkoff. Instead, they’re on “Team Posthuman,” and don’t really care whether or not Homo sapiens has a home in the future.

Unfortunately, redefining “humanity” the way transhumanists and longtermists do has completely muddled the entire public discussion about “human extinction.” Let me illustrate with a line from Elon Musk, who posted on X late last year that “the real battle is between the extinctionists and the humanists.” This was in part, it seems, a reference to the debate about whether we should rush to create advanced AIs or take our time and move cautiously. Musk considers himself to be one of the humanists — a member of Team Human. He disagrees with Larry Page that if we just get out of the way and let our AI progeny take over, everything will be fine and dandy.

Yet Musk is both a transhumanist and a longtermist. His company Neuralink aims to “kickstart transhuman evolution” and “jump-start the next stage of human evolution,” to quote a Futurism article. Or, as Vox puts it, the goal of Neuralink is “to merge human brains with AI,” which would constitute a major step toward becoming posthuman. In an obvious and undeniable sense, this is extinctionism, because merging our brains with AI moves us toward a future dominated by posthuman beings, in which our lowly species of biological humans will be marginalized, if not entirely erased, possibly within the next few decades.

Musk’s claim about humanists versus extinctionists, and his suggestion that he’s one of the humanists, are thus deeply misleading. Musk isn’t worried that we will be replaced; he’s worried about what might replace us. That’s his only major point of disagreement with Larry Page and the other accelerationists. What these two groups agree about is far more significant: that our species is a temporary bridge connecting the biological world with a new digital era in which posthuman AIs or AI-enhanced cyborgs will run the world. This is extinctionism, to use Musk’s term — or what I have been calling pro-extinctionism.

The real “humanists” are those who oppose any posthuman being replacing or marginalizing our species within the next few centuries or millennia. Humanists like myself thus stand in opposition to Musk, Page, Moravec, Sutton and the e/acc radicals.

To effectively fight against the growing influence of these pro-extinction ideologies, it’s crucial for members of the public to know what these ideologies are, the role they’re playing in the race to build advanced AI, and how advocates of these ideologies redefine words to give the impression that they oppose something — human extinction — that they are actually working to bring about. Don’t be tricked: pro-extinctionists are all around us, in the form of fringe philosophers and environmentalists, as well as tech billionaires and computer scientists in Silicon Valley who hold enormous power.

People today are right to be worried about the future of our species. But to protect this future, it’s imperative that we understand who is, and is not, rooting for our side: Team Human.
