The recent ouster of Sam Altman from OpenAI, followed by his reinstatement within a week, triggered a flurry of speculation. What led OpenAI’s board of directors to fire the face of artificial intelligence, one of the most popular figures in Silicon Valley?

Some believe that Altman’s dismissal was the culmination of a fight between — as the media have framed it — “effective altruists” and so-called “accelerationists.” Effective altruists, also known as “EAs,” want to slow the march toward artificial general intelligence, or AGI, while accelerationists want to push the pedal to the metal.

Altman, according to this narrative, leans accelerationist, while board members like Ilya Sutskever, Helen Toner and Tasha McCauley align with the EA approach. In the end, the accelerationists won, as Altman returned to his position as CEO and Sutskever, Toner and McCauley were removed from the board. Some EAs now think that this power struggle may be “as bad for EA’s reputation as [the collapse of] FTX,” and the subsequent imprisonment of its former CEO and co-founder, Sam Bankman-Fried, arguably the most prominent EA in the world alongside his moral adviser, philosopher William MacAskill.

What exactly is “accelerationism”? How does it contrast with EA, and does it connect with what Dr. Timnit Gebru and I call the “TESCREAL bundle” of ideologies — a cluster of techno-futuristic worldviews that have become immensely influential within Silicon Valley and major governing institutions? With a few exceptions, there’s very little in the popular media about the accelerationist movement, which received a burst of momentum with Altman’s return to OpenAI.

While there are important differences between accelerationism and EA — which accelerationists play up in blog posts and interviews — their respective visions of the future are more or less identical. If you imagine a five-by-five-foot map of different ideologies, accelerationism and EA would be located about an inch apart. Taking five steps back, they’d appear to be in the same location. Meanwhile, both would be about three feet from the field of AI ethics, which focuses on the real-world harms caused by AI — from worker exploitation and algorithmic bias to the spread of disinformation and the environmental impacts of AI systems.

To understand the topography of this map, let’s put accelerationism and EA under a microscope to see how they diverge and where they overlap.

Accelerationists vs. “decels”: Two views on the dangers of AGI. Image: Adobe

The differences between accelerationism and EA fall into two areas. The first, and most significant, concerns their respective assessments of the “existential risks” posed by AGI. Accelerationists are techno-optimistic: They believe the risks are very low or nonexistent. To quote one of the thought leaders of contemporary accelerationism, Guillaume Verdon — better known by his spooneristic pseudonym “Beff Jezos” — an existential catastrophe from AGI has a “zero or near zero probability” of happening. Another leading accelerationist, tech billionaire Marc Andreessen, declares in one of his manifestos that he is “here to bring the good news: AI will not destroy the world, and in fact may save it.”

Many EAs are much more techno-cautious, at least when it comes to certain hypothetical technologies like AGI. While the popular media and accelerationists alike often refer to this opposing group as “EA,” a more accurate label would be “longtermism.” The reason is that EA is a broad tent that includes many people who aren’t that interested in AGI, existential risks and similar matters. Traditionally, EAs have distinguished between three main cause areas within the movement: alleviating global poverty, improving animal welfare and longtermism. When EA formed around 2009, it was initially focused entirely on global poverty. But over time, most of its leading figures and grantmaking organizations have shifted toward longtermist issues, such as mitigating the supposed existential risks of AGI.

The reasoning went like this: The fundamental aim of all EAs is to do the most “good” possible in the world. Alleviating global poverty and ending factory farms seem like obvious ways to do this. But then EAs realized that, if humanity survives for the next century or so, we’ll probably spread into space, and the universe is huge and will remain habitable for trillions of years. Consequently, if one takes this grand, cosmic view of our place in space and time, it seems obvious that most people who could exist will exist in the far future — after we’ve spread beyond Earth and colonized the accessible universe. It follows that if you want to positively influence the greatest number of people, and if most people live in the far future, then you should focus on how your actions today can help them not only live good lives, but come into existence in the first place.

The connection with AGI is that EA longtermists — or “longtermists” for short — believe that it could be essential for colonizing space and creating unfathomable numbers of future people (most of whom, incidentally, would be digital people living in vast computer simulations). This is the upside if we get AGI right, meaning that we build an AGI that’s “value-aligned” with this “ethical” vision of the future. The downside is that if we get AGI wrong, it will almost certainly destroy humanity and, along with us, this “vast and glorious” future among the heavens, in the words of longtermist Toby Ord, co-founder of the EA movement.

Everything, therefore, depends on how we build AGI — the entire future of humanity, spanning trillions and trillions of years into the future, spread across galaxies, hangs in the balance. And given that we’re on the cusp of building AGI, according to many longtermists, this means that we’re in an absolutely critical moment not just in human history, but in cosmic history. What we do in the next few years, or perhaps the next few decades, with advanced AI could determine whether the universe becomes filled with conscious beings or remains a lifeless waste of space and energy.

This is partly why many longtermists are techno-cautious: They want to be very, very sure that the AGI we build in the next few years or decades opens up the doors to a heavenly techno-utopia rather than turning on its creators and annihilating us, thereby ruining everything. Over the past 20 years, longtermists (though the word itself wasn’t coined until 2017) have thus explored and developed various arguments for how and why a poorly designed AGI could kill us.

Accelerationists claim these arguments are unscientific and overly pessimistic. They denigrate proponents of caution as “doomers” and “decels” (short for “decelerationist,” and presumably a play on the word “incel”). From the accelerationist perspective, there’s nothing worse than a “decel,” which can give the false impression that accelerationists and longtermists are miles apart on the ideological map.

This points to a second, related disagreement between the two camps. Since getting AGI right is extremely important in the longtermist view, we need to figure out how to properly “align” AGI with the “values” of “humanity” — by which longtermists implicitly mean their own values. Toward this end, longtermists established a field that’s now called “AI safety,” which focuses on how an AGI might cause an existential catastrophe and what we can do to avoid this. The field of AI safety thus emerged directly out of the longtermist movement, and it calls the problem of designing a “value-aligned” AGI the “value-alignment problem.”

Longtermists imagine our present situation as a desperate race between AI safety and AI capabilities research: If capabilities research produces an AGI before AI safety research finds a solution to the value-alignment problem, then a “misaligned” AGI will kill everyone on Earth by “default.” If AI safety research solves the value-alignment problem before capabilities research produces an AGI, then the AGI will be “aligned” and we get utopia among the stars.

Because AI safety research lags behind AI capabilities research, longtermists believe we’re in serious danger of total annihilation in the near future. This is why they argue that we need to decelerate capabilities research and allow safety research to catch up. Prominent AI doomers such as Eliezer Yudkowsky have thus called for an AI “ban,” while others have argued for a “pause” on such research.

But how can we implement such a ban or pause? This is where the government enters the picture. Longtermists claim the government must regulate research projects that aim to build AGI, such as those being pursued by OpenAI, DeepMind, Anthropic and Elon Musk’s company xAI. Altman himself has made this argument, which suggests that he’s not a full-blown accelerationist, but still has one foot in the AI safety camp. For full-blown accelerationists, this is deeply misguided for a couple of reasons.

First, since they believe the existential risks posed by AGI are negligible or nonexistent, such regulation would be utterly pointless. Its only effect would be to slow down progress on AI, thereby delaying the techno-utopian world that they imagine it bringing about.

Second, most accelerationists are libertarians who oppose government interference in the race to build AGI. They would say that even if AGI poses existential risks, the free market is by far the best way to mitigate these risks. This is why they advocate open-sourcing advanced AI systems: The best way to counteract a dangerous AGI would be for there to exist 1,000 other AGIs that are good. Fight power with power. Their argument here isn’t that different from the National Rifle Association’s claim that “the only thing that stops a bad guy with a gun, is a good guy with a gun.” Let a thousand AGIs bloom and you get a utopian meadow.

So, longtermists are techno-cautious, while accelerationists are techno-optimistic, in the sense that they don’t see AGI as existentially risky. And where longtermists see government intervention as playing an essential role in acting cautiously, accelerationists think that government intervention will only make things worse, while simultaneously slowing down the march of progress toward a utopian future world.

These, then, are the major differences between accelerationism and longtermism in the first area: how risky AGI is and what role government should play. The second area concerns their particular visions of the future. As we’ll discuss below, both are deeply utopian and, in practice, virtually indistinguishable, even though some leading accelerationists have a slightly different take on what humanity’s ultimate goal in the universe should be.

Here it’s useful to single out a version of accelerationism that’s gained a lot of attention recently, especially since Altman’s ouster at OpenAI: so-called “effective accelerationism,” abbreviated “e/acc.” It has some connection to the accelerationism of Nick Land, a far-right racist philosopher who has written about the “supposedly inevitable ‘disintegration of the human species’ when artificial intelligence improves sufficiently.” However, e/acc is the brainchild of Verdon — aka “Beff Jezos” — who describes it as being rooted in thermodynamics. In particular, Verdon bases his grand, eschatological vision on a highly speculative theory proposed by the MIT physicist Jeremy England, though reading Verdon’s writing gives the impression that this theory is established science — which is not the case.

So far as I can make sense of it — and I’m not convinced that Verdon properly understands England’s theory — the idea is that the universe is moving toward a state of maximal entropy, or disorder. This part of the story is, in fact, true: The second law of thermodynamics does imply that entropy will increase until it reaches a maximum. Drawing on some of England’s work, Verdon then claims that life arose because it helps this process along: Living things take usable energy from the environment and convert it into unusable energy, thereby increasing entropy. “Intelligence” evolved because more “intelligent” creatures are better able to catalyze the conversion. Extending this idea from biology to economics — which England most certainly doesn’t do — Verdon argues that companies, corporations and capitalism itself are types of super-human intelligence that are even more efficient at using up energy, and that a gigantic, sprawling civilization spread across galaxies would be more efficient still.

Verdon’s conclusion is that we must lean into the “will of the universe,” building increasingly powerful technologies and eventually colonizing the entire cosmos, because this is what the universe — driven by the fundamental laws of thermodynamics — “wants.” The inference runs from what is the case to what ought to be the case, a fallacy that any undergraduate philosophy student will immediately recognize.

What “accelerating effectively” means in practice, Verdon explains, is “climbing the Kardashev gradient,” a reference to the Kardashev scale, which ranks civilizations according to how much energy they are able to harness. A type 1 civilization uses all the energy on its planet. A type 2 civilization harnesses all the energy produced by its nearest star. And a type 3 civilization captures all the energy of its galaxy. The more energy a civilization captures and uses, the more in line it is with the “will of the universe,” which is why e/acc’s like Verdon believe that we should do everything we can to accelerate the development of technology and ascend the Kardashev scale. Since they claim that building AGI is probably the best way to do this, we have an obligation to accelerate capabilities research on AGI.
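For readers who want the scale in numbers, it is often quantified using Carl Sagan’s interpolation formula — a standard gloss on Kardashev’s proposal, not something drawn from Verdon’s writings. If P is a civilization’s total power use in watts, its Kardashev type K is roughly

\[
K \approx \frac{\log_{10} P - 6}{10},
\qquad
\text{type 1} \approx 10^{16}\,\text{W}, \quad
\text{type 2} \approx 10^{26}\,\text{W}, \quad
\text{type 3} \approx 10^{36}\,\text{W}.
\]

By this measure, humanity currently sits at roughly type 0.7.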

This e/acc goal contrasts with the ultimate goal of the longtermists: to maximize “moral value,” which might be understood as the total amount of pleasurable experiences in the universe. That’s why they want to build “safe” AGI, spread into space and create huge computer simulations in which trillions and trillions of digital people live. The more people there are with pleasurable experiences, the larger the total quantity of pleasurable experiences. And the larger the total quantity of pleasurable experiences, the better the universe will become, morally speaking.

Put differently, both e/acc’s and longtermists think the ultimate goal is to maximize something. For e/acc’s, this “something” is energy consumption; for the longtermists, it’s “value” in a more general sense. That said, maximizing energy consumption might be the best way to maximize the “value” that matters to longtermists, which leads us to the similarities between these ideologies.

The differences noted above are not trivial, though a complete picture reveals far more affinities than points of divergence. The disagreements between e/acc and EA longtermism should be understood as a family dispute, and as many of us know, family disputes can be vicious. Indeed, it makes sense to see e/acc as an extension or variant of the TESCREAL bundle of ideologies rather than something entirely distinct, as some sources suggest. The development of e/acc has been shaped in many ways by the TESCREAL ideologies, and its vision of what the future looks like and its account of how we should proceed to realize this future are very closely aligned with every ideology in the TESCREAL bundle.

In brief, the acronym “TESCREAL” stands for transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, and of course, Effective Altruism and longtermism. We don’t need to define each of these polysyllabic words here — I’ve explained them in other Truthdig articles — but I will describe these ideologies when relevant below. The reason that the TESCREAL bundle is important for journalists, politicians, policymakers and the general public to understand is precisely because this bundle has become immensely influential in the world today — especially in Silicon Valley. The TESCREAL bundle is, in fact, what sparked the race to build AGI in the first place. One cannot make sense of the AGI race without some understanding of this bundle.

To begin, “effective accelerationism” is a riff on “effective altruism.” According to Verdon, e/acc can be understood as a type of “altruism” that proceeds from the bottom up rather than the top down; he calls it “emergent altruism.” By this he seems to mean that the altruism of e/acc arises through unconstrained competition in the free market — that is, by “placing economic power and agency in the hands of the meek.” This gestures back to the idea of letting a thousand AGIs bloom: The more competition there is among AI companies and the AGIs that they build, the greater the chance of a positive, even utopian outcome. In contrast, Verdon sees the altruism of EA longtermism as more top-down, given the EA-longtermist claim that government regulation may be the best way to ensure a “safe” AGI.

When Verdon was asked about the word “effective” in “effective accelerationism,” he responded that it means “finding the actions we can take now that have the highest impact on the expansion of the scope and scale of civilization.” This parallels the language of EA, which defines its central goal as having the “greatest” or “highest impact” in the world. As the official EA website states, “the aim is to find the biggest gaps in current efforts, in order to find where an additional person can have the greatest impact.” Verdon himself affirms in an interview from earlier this year that “EA and e/acc align on quite a few things.” He continues:

Ultimately, we both care about civilization and the benefit of all. We just have different ways of going about it. And different — ultimately — things we’re optimizing for. In our case, where we diverge is mainly that AI is one of the most potent technologies for massive good and massive utility, towards the advancement of civilization. And we shouldn’t … neurotically obsess over fictitious stories of how it will lead to our doom, especially [given that] such stories kind of fall apart from first principles.

As we noted above, the main divergence concerns the existential risk of AGI. But e/acc’s and EA longtermists agree that advancing civilization is of utmost importance. Indeed, both maintain that we should try to grow civilization as much as possible, which means spreading into space and ascending the Kardashev scale. As MacAskill, a leading longtermist who cofounded EA with Ord, writes in his longtermist screed “What We Owe the Future,” “if future civilization will be good enough, then we should not merely try to avoid near-term extinction” — for example, from AGI. “We should also hope that future civilization will be big. If future people will be sufficiently well-off, then a civilization that is twice as long or twice as large is twice as good.” He adds that “the practical upshot of this is a moral case for space settlement. … The future of civilisation could be literally astronomical in scale, and if we will achieve a thriving, flourishing society, then it would be of enormous importance to make it so.”

Verdon and his e/acc colleagues echo this in writing that e/acc strives to “develop interplanetary and interstellar transport so that humanity can spread beyond the Earth,” a feat that will probably require AGI, so that we can “increase human flourishing via pro-population growth policies and pro-economic growth policies.”

Both e/acc and longtermism are, at their core, pro-population and extremely growthist. MacAskill even spills quite a bit of ink in his book worrying about global population decline, which he says we might be able to counteract by “develop[ing] artificial general intelligence (AGI) that could replace human workers — including researchers,” since “this would allow us to increase the number of ‘people’ working on R&D as easily as we currently scale up production of the latest iPhone.”

A related concern in MacAskill’s book is the possibility of “technological stagnation,” which he sees as a grave threat to our long-term future. Stagnation, he says, “could increase the risks of extinction and permanent collapse,” meaning that we’d never make it to the stars. Verdon says almost exactly the same thing in writing that “stagnation is far more dangerous than growth,” which is why he contends that — contra longtermists — “deceleration is the real killer.”

E/acc’s and longtermists even agree about the importance of mitigating existential risks — events that would prevent us from achieving our “long-term potential” in the universe — despite differing views about what the existential risk of AGI actually is. As Verdon says, e/acc aims to “minimize existential risk to life itself, as a whole,” which he contrasts with the more anthropocentric focus of EA longtermists. According to Verdon, future life may be entirely technological rather than biological, and he seems to think that longtermists are too hung up on preserving humanity in its current form, or in a modified future form that still resembles it.

But this isn’t true: The most prominent longtermists believe that we should use technology to radically reengineer humanity, thus creating one or more new “posthuman” species, which might be completely different from the humanity of today. For example, Ord writes that “forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential,” while the longtermist philosopher Nick Bostrom asserts that “the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe.”

This is the ideology of transhumanism — the “T” in “TESCREAL.” And while Verdon and a colleague write that “e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism,” the fact is that most transhumanists have no allegiance to the biological substrate, either. Their aim — subsumed in the longtermist ideology — is to create or become a completely new species, which may be entirely nonbiological. There is no bias against becoming posthuman beings whose “bodies” consist of metal, plastic, wires or computer hardware. Indeed, the longtermist vision is fundamentally about spreading into space, which is likely impossible for wholly biological beings, and simulating vast swarms of digital people. As long as these future beings have moral worth, to hell with the biological “meatsacks” that we walk around in today.

And what is required for moral worth, for something to matter in an “ethical” sense? The most obvious answer is consciousness. If future beings aren’t conscious, then they wouldn’t really matter, for the same reason that rocks don’t really matter, morally speaking. You don’t think twice when you kick a rock, because rocks don’t feel anything. You would — or should — think twice before kicking a kitten, because kittens can feel.

The connection between consciousness and morality is why Bostrom writes that one type of existential catastrophe would be if “machine intelligence replaces biological intelligence but the machines are constructed in such a way that they lack consciousness.” Machines replacing biology would be fine; machines without consciousness replacing biology is the problem. For longtermists, it’s imperative that we, in the words of Elon Musk, “maintain the light of consciousness to make sure it continues into the future.” (Musk has promoted Bostrom’s vision of digital colonization and calls longtermism “a close match for my philosophy.”) Similarly, Verdon writes that “e/acc is about shining the light of knowledge as bright as possible in order to spread the light of consciousness to the stars.”

Consistent with these aims, longtermists also agree that we should ascend the Kardashev scale. In his book on longtermism and existential risk, “The Precipice,” Ord includes an appendix that proposes extending the Kardashev scale to include a fourth type of technologically advanced civilization: one that harnesses the energy of the entire universe. “Our global civilization,” he writes, “currently controls about 12 trillion Watts of power,” adding that “this is about … 10,000 times less than the full capacity of our planet,” and nowhere near the 4 x 10^46 Watts of power — that’s a 4 followed by 46 zeros — that a type 4 civilization would have at its disposal. Harnessing this vast energy by conquering the universe and subjugating the natural world is a key part of fulfilling our “long-term potential” in the universe, leading longtermists would say, in agreement with e/acc’s like Verdon.
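To put Ord’s own figures side by side, a back-of-the-envelope comparison using only the numbers quoted above:

\[
\frac{P_{\text{type 4}}}{P_{\text{today}}}
\approx \frac{4 \times 10^{46}\ \text{W}}{1.2 \times 10^{13}\ \text{W}}
\approx 3 \times 10^{33}.
\]

In other words, the civilization at the top of Ord’s extended scale would command roughly a million billion billion billion times more power than ours does now.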

Both sides in the intra-TESCREAL AGI debate envision the eventual digitization of consciousness. Image: Adobe

So, the similarities and overlaps between e/acc and longtermism are considerable. Both emphasize the importance of being “effective” in the pursuit of humanity’s ultimate goals, and both describe their position as a kind of “altruism.” Both aim for a maximally large future population supported by a multigalactic civilization at the top of the Kardashev scale. Both affirm our obligation to preserve the “light of consciousness.” And both care about existential risks, though they have differing assessments of how risky our current situation is.

Elaborating on this last point, Verdon and a colleague write that the loss of consciousness “in the universe [would be] the absolute worst outcome.” E/acc’s even acknowledge that as technology advances, “it becomes easier to extinguish all conscious life in our corner of the universe.” That’s exactly what longtermists say, as when Ord asserts that “there is strong reason to believe the risk will be higher this century, and increasing with each century that technological progress continues,” or when Bostrom declares that “most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology.”

While Andreessen, who includes “e/acc” in his Twitter/X bio, lists the idea of “existential risk” as one of the main “enemies” of accelerationism, what he’s really saying is that the techno-cautious approach to thinking about AGI risks poses an “existential risk” to his own accelerationist vision of unconstrained technological progress. Andreessen is no less worried about “existential risks” than longtermists. He just disagrees about what these risks are. For him, “decels” pose a much greater risk than AGI.

Finally, both e/acc and longtermism are extremely growthist, libertarian, neoliberal and pro-capitalism. Longtermists just tend to believe that AGI constitutes an exception to the rule against state intervention and top-down regulation. This last point is worth stressing: Many longtermists oppose government regulation — except with respect to AGI and other advanced technologies. Because they support the regulation of AGI, it might look like they aren’t libertarian, but this is generally not the case.

Consider a question that someone recently posted on Twitter/X: “Is there an ‘e/acc for everything but nukes, bioweapons, AI-enabled warfare and authoritarian surveillance’?” One person who responded was Rob Bensinger, an AI doomer who works for the Machine Intelligence Research Institute, founded in 2000 by Yudkowsky, the most famous doomer in the world right now. “Last I checked,” Bensinger responded, “the term for that kind of e/acc is ‘doomer.’” In other words, AI doomers are accelerationists about everything except for AGI and related technologies, and indeed doomers like Yudkowsky started out as AI accelerationists, but then modified their libertarian views to make an exception for AGI. When Yudkowsky started MIRI, it was originally called the Singularity Institute for Artificial Intelligence, and its explicit mission was “to accelerate toward artificial intelligence” — exactly what e/acc now wants to do.

Emmett Shear, who briefly took Altman’s place at OpenAI, echoes Bensinger in saying: “I’m a doomer and I’m basically e/acc on literally everything except the attempt to build a human level” AGI. Elsewhere, he writes that “I’m a techno-optimist who ALSO believes that there’s a chance an human level [AGI] will be catastrophically dangerous.”

The differences between e/acc and EA longtermism are, therefore, relatively small. While e/acc’s like to play up these differences in interviews and blog posts, the two positions occupy nearly the same location on the ideological map and, in practice, their visions of the future are more or less indistinguishable.

E/acc also has important connections with other TESCREAL ideologies, in addition to EA, longtermism and transhumanism. For example, e/acc’s are obsessed with what they call the “technocapital singularity,” which they define as the “asymptotic limit” of “finding/extracting free energy from the universe and converting it to utility at grander and grander scales.” Bringing about the technocapital singularity is essential for realizing the “will of the universe.”

The concept of the Singularity comes from a version of transhumanism called “singularitarianism,” the “S” in “TESCREAL.” There are different definitions of the Singularity, but one influential account imagines technological progress accelerating so quickly that it ruptures human history, initiating a new epoch in the cosmos, after which we will colonize space and spread consciousness throughout the Milky Way galaxy and beyond. The universe itself will then begin to “wake up.” An integral part of the Singularity, on this account, is humans merging with machines, an idea that e/acc’s also enthusiastically endorse. For example, Verdon describes himself as being “bullish” about “find[ing] ways to fund more efforts [to facilitate] human-machine collaboration and augmentation,” which he calls the “transhumanist path forward.” E/acc builds upon the singularitarian worldview, adding its own spin on what the coming Singularity, catalyzed by AGI, will amount to.

There are also notable elements of Extropianism in the e/acc movement. Extropianism, the “E” in “TESCREAL,” is a libertarian version of transhumanism that supports free-market solutions to all our problems, and advocates an alternative to the “precautionary principle” called the “proactionary principle.” This states that when deciding whether to create a new technology, we must consider all the harm that might be caused if we don’t create it. Channeling this idea, Andreessen writes in his accelerationist manifesto that “we believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.” Put differently, if you oppose advanced AI, you’re no better than a murderer. In many ways, e/acc is an updated version of Extropianism centered on AGI. Verdon even named his AI company “Extropic,” borrowing a word popularized three decades ago by the Extropians.

Finally, Verdon also says that “e/acc adds pinches of cosmism to the basic accelerationist framework.” Cosmism, the “C” in “TESCREAL,” is yet another kind of transhumanism that imagines humans not only building AGI and merging with machines, but spreading into space and literally redesigning entire galaxies through “scientific ‘future magic.’” The grand aspirations of e/acc are extremely similar to those of cosmism, and, in fact, a cosmist named Giulio Prisco recently published an article about e/acc in which he concludes that “extropy, futurism and cosmism are strongly related in spirit, and I guess e/acc is a new instance of that common spirit.”

E/acc is, therefore, an extension of Extropianism that advocates for the Singularity and shares the cosmist dream of controlling the entire universe.

Does this mean that “e/acc” should be added to the TESCREAL acronym? No, for the same reason that “D” for “doomer” shouldn’t be: the doomers and accelerationists are just variants of the TESCREAL movement. One is techno-cautious about certain advanced technologies like AGI, while the other is techno-optimistic. Both are part of the very same hyper-capitalist, techno-utopian tradition of thought that has roots in transhumanism and has become pervasive within Silicon Valley over the past 20 years. This is why the quarrels between these camps should be seen as mere family disputes.

If only the e/acc’s and “decels” could get over their disagreement about the risks of AGI, they’d be lovers holding hands skipping into the utopian sunset, singing songs about conquering the universe and digitizing humanity. In my view, both are equally dangerous in their own ways. While e/acc seems to have come out ahead following the snafu at OpenAI, let’s hope that neither gets to control what our future looks like.
