A few days before Christmas in 2019, I posted something on social media that shocked a lot of my followers. “I’m increasingly convinced,” I wrote, that if the longtermist ideology is “widely accepted, the result would be a … catastrophe.”

This was shocking because for roughly a decade leading up to this post, nearly all of my academic research centered on promoting longtermism, an ideological offshoot of Effective Altruism (EA). I attended longtermist conferences, gave talks at longtermist organizations like the Future of Humanity Institute, and published dozens of articles defending the longtermist worldview. I was what I’d call a “true believer.”

By the end of 2019, it became increasingly clear to me that longtermism is bad philosophy and that, if taken literally, it could be used to “justify” a wide range of extreme actions. Two years later, I wrote a detailed critique of longtermism for Aeon, which explained why I think this ideology could be profoundly dangerous. I’ve since modified and elaborated parts of these arguments, and now prefer the word “TESCREALism” to “longtermism” because the acronym “TESCREAL” does a better job of capturing the array of overlapping ideas that have shaped the general outlook.

At the heart of TESCREALism is a techno-utopian vision of the future in which we become a new species of “enhanced” posthumans, colonize space, subjugate nature, plunder the cosmos for its vast resources and build giant computers floating in space to run virtual-reality simulations in which trillions and trillions of “happy” digital beings live. The ultimate aim is to maximize the total amount of “value” in the universe.

When I initially argued that the TESCREAL bundle of ideologies could be dangerous, my worries were merely hypothetical. To be sure, there were plenty of red flags: Nick Bostrom — one of the leading TESCREALists — claimed in 2002 that we should keep preemptive aggression on the table to protect our posthuman future. In 2019, he argued that we should seriously consider implementing a highly invasive global surveillance system to prevent the destruction of industrial civilization, an idea he later discussed in an interview for TED. And he made worrying claims about “existential risks,” defined as any event that would permanently thwart our ability to create a techno-utopian world among the heavens full of, in his words, “astronomical” amounts of value.

For example, he contended in 2013 that, crunching the numbers, even teeny-tiny reductions in existential risk may be morally better than saving billions of actual human lives. Since “reducing existential risk” is identical to “increasing the likelihood of a techno-utopia full of astronomical value,” if forced to choose between saving billions of actual humans and reducing existential risk by an extremely small amount, you should choose the second option.
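The expected-value logic behind this claim can be sketched with toy numbers. This is a minimal illustration, not Bostrom’s actual calculation: the 10^52 figure for potential future lives and the size of the risk reduction are assumptions chosen only to show how the arithmetic swamps any number of actual lives.

```python
# Toy sketch of the expected-value reasoning described above.
# Both figures below are illustrative assumptions, not Bostrom's exact numbers.

POTENTIAL_FUTURE_LIVES = 1e52   # assumed lives in a realized techno-utopia
RISK_REDUCTION = 1e-18          # a "teeny-tiny" cut in existential risk

# Expected future lives gained by the tiny risk reduction
expected_gain = POTENTIAL_FUTURE_LIVES * RISK_REDUCTION

# Compare with saving a billion actual people today
lives_saved_today = 1e9

print(f"{expected_gain:.3g}")             # ~1e+34 expected future lives
print(expected_gain > lives_saved_today)  # True: the tiny reduction "wins"
```

On these assumptions, an almost imperceptible reduction in existential risk “outweighs” saving a billion living people by 25 orders of magnitude, which is exactly why critics find the reasoning so alarming.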

Yet there weren’t any TESCREALists actually calling for preemptive violence, mass surveillance or sacrificing actual humans to safeguard utopia. These were just provocative ideas buried in the academic literature, written by people who didn’t have much sway among politicians or the wealthy oligarchs who run our societies. My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to “justify,” in the eyes of true believers, extreme measures to “protect” and “preserve” what Bostrom’s colleague, Toby Ord, describes as our “vast and glorious” future among the heavens.

In 2023, these worries are no longer hypothetical. First, the TESCREAL movement has become immensely powerful. It’s infiltrating foreign policy circles and major governing institutions like the United Nations, has tens of billions of dollars behind it, is pervasive within Silicon Valley, and has been promoted by people with large social media followings. Elon Musk, for example, calls longtermism “a close match for my philosophy,” and last year retweeted a link to Bostrom’s paper about how many digital people there could be if we colonize space, along with the line: “Likely the most important paper ever written.” The TESCREAL movement has become a global force, and it’s building momentum.

Second, some TESCREALists have begun to explicitly call for policies that could heighten the risk of nuclear conflicts that kill billions of people. Others have flirted with the idea of targeted assassinations of AI researchers to slow down progress on artificial general intelligence, or AGI, which many TESCREALists see as the greatest existential risk facing humanity this century. Talk of extreme actions, even the use of force and violence, to prevent an AGI apocalypse is becoming increasingly common, and my worry now is that true believers in the TESCREAL ideologies, who think we’re in an apocalyptic moment with AGI, could actually do something that causes serious harm to others.

Consider a recent TIME magazine article by Eliezer Yudkowsky, a central figure within the TESCREAL movement who calls himself a “genius” and has built a cult-like following in the San Francisco Bay Area. Yudkowsky contends that we may be on the cusp of creating AGI, and that if we do this “under anything remotely like the current circumstances,” the “most likely result” will be “that literally everyone on Earth will die.” Since an all-out thermonuclear war probably won’t kill everyone on Earth—the science backs this up—he argues that countries should sign an international treaty that would sanction military strikes against countries that might be developing AGI, even at the risk of triggering a “full nuclear exchange.”

Many people found these claims shocking. Three days after the article was published, someone asked Yudkowsky on social media: “How many people are allowed to die to prevent AGI?” His response was: “There should be enough survivors on Earth in close contact to form a viable reproductive population, with room to spare, and they should have a sustainable food supply. So long as that’s true, there’s still a chance of reaching the stars someday.”

To understand just how extreme this is, consider that a viable reproductive population might be as low as 150 people, although more conservative estimates put the number at 40,000. As of this writing, the human population is approximately 8,054,000,000. Subtract 40,000 from this number and you get 8,053,960,000. Yudkowsky is thus arguing that more than eight billion people should be “allowed” to die for the sake of “reaching the stars someday,” i.e., realizing the techno-utopian vision at the heart of TESCREALism.

Astonishingly, after Yudkowsky published his article and made the comments above, TED invited him to give a talk. He also appeared on major podcasts like Lex Fridman’s, and last month appeared on the “Hold These Truths Podcast” hosted by the Republican congressman Dan Crenshaw. The extremism that Yudkowsky represents is starting to circulate within the public and political arenas, and his prophecies about an imminent AGI apocalypse are gaining traction.

The first time I became worried about what TESCREALism might “justify” was in 2018, when I still considered myself to be part of the movement. I read a book titled Here Be Dragons by the Swedish scholar Olle Häggström, who is generally sympathetic to the longtermist ideology. In one chapter, Häggström considers Bostrom’s claim that teeny-tiny reductions in existential risk could be orders of magnitude better than saving billions of actual human lives. He then writes:

I feel extremely uneasy about the prospect that [Bostrom’s calculations] might become recognised among politicians and decision-makers as a guide to policy worth taking literally. It is simply too reminiscent of the old saying “If you want to make an omelette, you must be willing to break a few eggs,” which has typically been used to explain that a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia.

Imagine a real-world scenario, Häggström says, in which

the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.

When Häggström wrote this, it was just another hypothetical concern. Yet today, this scenario is eerily similar to what Yudkowsky is advocating: military actions that could cause a genocidal nuclear catastrophe, if necessary to keep the techno-utopian dream alive. Yudkowsky is careful to note that he isn’t calling for nuclear first strikes, only conventional ones—but what does that matter if the result is a nuclear war that kills more than 8 billion people?

As I have argued elsewhere, any time an ideology or worldview combines a utopian vision of the future marked by near-infinite value with a broadly “utilitarian” mode of moral reasoning—as TESCREALism does—it could easily lead true believers to conclude that, as Häggström writes, “a bit of genocide or so might be a good thing, if it can contribute to the goal of creating a future utopia.”

This is a Code Red Warning about TESCREALism. If Yudkowsky’s words are taken seriously by our political leaders, or perhaps some lone wolves, we should be extremely worried about the harms that could result. Over and over again, history shows that the march to utopia can leave a trail of destruction in its wake. If the ends can justify the means, and the end is paradise, then what exactly is off the table for protecting and preserving this end?

There are other worrying signs within the community as well. For example, last February someone sent me the meeting minutes of an invite-only workshop on AI safety. The field of “AI safety” emerged out of the TESCREAL movement. It’s based on the assumption that if we create an AGI that’s “aligned” with our “values,” it will immediately grant all our techno-utopian wishes, including immortality for everyone who wants it. However, if the AGI is “misaligned,” it will kill every human on the planet by default. The goal of AI safety research is to ensure that AGI is sufficiently aligned, although people like Yudkowsky have become convinced that we’re so far from solving the “alignment” problem that the only option left is to ban all AGI research—including through force.

The AI safety workshop was held in Berkeley in late 2022 and initially funded by the FTX Future Fund, established by Sam Bankman-Fried, the TESCREAList who appears to have committed “one of the biggest financial frauds in American history.” Berkeley is home to Yudkowsky’s own Machine Intelligence Research Institute (MIRI), where one of the workshop organizers was subsequently employed. Although MIRI was not directly involved in the workshop, Yudkowsky reportedly attended a workshop afterparty.

Under the heading “produce aligned AI before unaligned [AI] kills everyone,” the meeting minutes indicate that someone suggested the following: “Solution: be Ted Kaczynski.” Later on, someone proposed the “strategy” of “start building bombs from your cabin in Montana,” where Kaczynski conducted his campaign of domestic terrorism, “and mail them to DeepMind and OpenAI lol.” This was followed a few sentences later by, “Strategy: We kill all AI researchers.”

Participants noted that if such proposals were enacted, they could be “harmful for AI governance,” presumably because of the reputational damage they might cause to the AI safety community. But they also implied that if all AI researchers are killed, this could mean that AGI doesn’t get built. And forgoing an AGI that is properly aligned would mean that we “lose a lot of potential value of good things.”

It’s not clear how serious the participants were. But their discussion indicates that talk of violence is becoming normalized. Yudkowsky is partly to blame, given his cult-like status among many AI safety researchers fretting about the apocalypse. Not only has he made the claims quoted above, but he’s endorsed property damage targeting AI companies; and while he insists that “I have at no point uttered the words ‘b*mb datacenters’ nor called for individual terrorism,” when someone asked him whether he would “have supported bombing the Wuhan center studying pathogens in 2019,” he said this:

Great question! I’m at roughly 50% that they killed a few million people and cost many trillions of dollars [a reference to the lab leak theory]. If I can do it secretly, I probably do and then throw up a lot. If people are going to see it, it’s not worth the credibility hit on AGI, since nobody would know why I was doing that. I can definitely think of better things to do with the hypothetical time machine.

Yudkowsky talks out of both sides of his mouth, giving mixed messages. He wants us to believe that his proposals for avoiding an AGI apocalypse are within the bounds of established norms, yet in a moment of honesty he says that he would “probably” bomb laboratories in Wuhan, if he could do so “secretly.” He opposes nuclear first strikes, yet implies that more than 8 billion people should be “allowed” to die, if it means keeping the door open to “reaching the stars someday.” And his extraordinary overconfidence, fueled by his egomania, that a misaligned AGI will kill everyone on Earth is inspiring radical, dangerous discussions like those recorded in the Berkeley workshop meeting minutes.

It’s not just AI researchers who have been singled out: so have critics of the TESCREAL ideology. In fact, the whistleblower leaked the meeting minutes because he saw that I’ve received numerous threats of physical violence over the past year for speaking out against the movement I once belonged to.

For example, in October of last year, I received an anonymous DM over Twitter saying: “Better be careful or an EA superhero will break your kneecaps.” This message was repeated the following day by a different Twitter account, which later wrote “I wish you remained unborn” under a social media post of mine. The next month, an anonymous person in the TESCREAL movement sent me a menacing email that read: “Get psychiatric assistance before it’s too late, buddy,” after which yet another anonymous account threatened to try to dox me.

Last June, I received an email from an unidentified TESCREAList referring to a short film about a murder-suicide, in which a mother kills herself and her disabled daughter by setting fire to their car while both are inside. The sender stated, “I hope it will take something far less extreme than what happens in the film to make you look at the kind of person you’re becoming,” by which they meant a vigorous opponent of TESCREALism.

I’m not the only one who’s been frightened by the TESCREAL community. Another critic of longtermism, Simon Knutsson, wrote in 2019 that he had become concerned about his safety, adding that he’s “most concerned about someone who finds it extremely important that there will be vast amounts of positive value in the future and who believes I stand in the way of that,” a reference to the TESCREAL vision of astronomical future value. He continues:

Among some in EA and existential risk circles, my impression is that there is an unusual tendency to think that killing and violence can be morally right in various situations, and the people I have met and the statements I have seen in these circles appearing to be reasons for concern are more of a principled, dedicated, goal-oriented, chilling, analytical kind.

Knutsson then remarks that “if I would do even more serious background research and start acting like some investigative journalist, that would perhaps increase the risk.” This stands out to me because I have done some investigative journalism. In addition to being noisy about the dangers of TESCREALism, I was the one who stumbled upon Bostrom’s email from 1996 in which he declared that “Blacks are more stupid than whites” and then used the N-word. This email, along with Bostrom’s “apology”—described by some as a flagrant “non-apology”—received attention from international media outlets.

For utopians, critics aren’t mere annoyances, like flies buzzing around one’s head. They are profoundly immoral people who block the path to utopia, threatening to impede the march toward paradise, arguably the greatest moral crime one could commit. Even people within the TESCREAL community have become scared to speak out, for fear of retaliation. Last January, a group of about ten Effective Altruists (EAs) posted a lengthy critique of the movement anonymously, because of “the significant risk” that including their names “would pose to their careers, access to EA spaces, and likelihood of ever getting funded again.”

When people inside a community become too afraid to publicly criticize it, that community starts to look rather like a cult. Indeed, last year I asked someone who was very prominent in the TESCREAL scene of San Francisco about the mood there, and they said that it had become a “full grown apocalypse cult.” The danger of secular apocalyptic cults is that, when members believe that the promises of utopia are about to be shattered, they may resort to extreme measures to keep those promises alive.

The threats that I’ve received, the worries expressed by Knutsson, and the fact that TESCREALists themselves feel the need to hide their identities further bolster my claim that this movement is dangerous. It operates like a cult, has “charismatic” leaders like Yudkowsky and Bostrom, and appears to be increasingly at ease with extreme rhetoric about how to stop the AGI apocalypse.

The warnings I articulated in 2019 and 2021 are no longer merely hypothetical. What we’re seeing now is exactly what I worried could happen in the years to come. It didn’t take long for my predictions to prove accurate. When leading TESCREALists argue that existential risks are the “one kind of catastrophe that must be avoided at any cost,” to quote Bostrom with italics added, one shouldn’t be surprised if members of the community start talking about the use of force, military strikes, or targeted killings to reduce the supposed “existential risk” of AGI.

I do not know how this ends, but what’s clear from history is that many utopian movements, embracing a kind of “utilitarian” reasoning, have left a trail of destruction behind them. Will this time be any different? It depends on whether the power and influence of the TESCREAL movement continues to grow, and right now, the trendlines are ominous.
