Zuade Kaufman: Hello I’m Zuade Kaufman, publisher of Truthdig.

As you know, at Truthdig, we dig beneath the headlines to find thought-provoking, forward-thinking ideas and conversations. 

We are thrilled today to present two influential and brilliant thinkers, 

Dr. Emile P. Torres and Dr. Timnit Gebru, who will be discussing the timely and important question, “What’s Behind the Race to Create Artificial General Intelligence?” which is also the title of today’s event.

During this discussion, they will provide an overview of the bundle of ideologies known as TESCREAL, an acronym they coined while Emile was curating the Dig titled “Eugenics in the Twenty-First Century: New Names, Old Ideas,” which can be found on the Truthdig website.

TESCREAL refers to a set of ideologies that increasingly influence public perceptions and policy debates related to AGI. Emile and Timnit will examine the intellectual underpinnings of these ideologies, which purport to answer the question: “Is AGI a transformative technology that will usher in a new age of abundance and prosperity, or will it pose dire threats to humanity?”

And now for the introductions…

Dr. Emile P. Torres is a philosopher and historian whose work has focused on global catastrophic risks and human extinction. They have published widely on a range of topics, including religious end-time narratives, climate change and emerging technologies. They are the author of the book “Human Extinction: A History of the Science and Ethics of Annihilation,” which was published this year.

We are also going to hear from Dr. Timnit Gebru, a computer scientist whose work focuses on algorithmic bias and data mining. As an advocate for diversity in technology, Timnit co-founded Black in AI. She also founded DAIR, a community-rooted institute created to counter Big Tech’s pervasive influence on the research, development and deployment of AI. In 2022, Timnit was one of Time Magazine’s “100 Most Influential People.” She continues to be a pioneering and cautionary voice regarding ethics in AGI.

To our audience, please feel free to write your questions during their discussion, wherever you’re watching. And there will be a Q and A at the end. 

Thank you for participating in this event. I’ll hand it over to you, Emile and Timnit…  

Émile P. Torres: Thanks so much. So I missed some of the intro due to a technical issue on my side. So maybe I’ll repeat some of what you said now. Basically we’ll be talking about this acronym that, you know, has been central to the Dig project that I’ve participated in, at Truthdig, but also, it really came out of a collaboration that I was engaged in with Timnit. So I think there isn’t any particular rigid structure of this conversation, but I figured we could just go over kind of the basics of the acronym, of this concept, why it’s important, what its relation is with artificial general intelligence and this race right now to the bottom, as it were, trying to build these ever-larger language models. And then, as mentioned, we’ll take questions at the end. So I hope people find this to be informative and interesting. So yeah, Timnit, is there anything you’d like to add? Otherwise, we can sort of jump right into what the acronym stands for and go from there.

Timnit Gebru: Yeah, let’s jump in.

Émile P. Torres: Okay, great. So this concept came out of, as I mentioned, this collaboration. Basically, Timnit and I were writing this paper on the influence of a constellation of ideologies within the field of AI. In writing this paper, discussing some of the key figures who played a major role in shaping the contemporary field of AI, including or resulting in this kind of race to create artificial general intelligence, or AGI, we found that it was sort of unmanageable, because there was this cluster of different ideologies that are overlapping and interconnected in all sorts of ways. Listing them all, after the names of some of these individuals who have been influential, was just too much. So the acronym was proposed to sort of economize and streamline the discussion, so that we could ultimately get at the crux of the issue: that there is this, you know, bundle of ideologies that’s overlapping and interrelated in various ways. Many of these ideologies came out of previous ideologies and share certain key features. So the acronym stands for Transhumanism — it’s a mouthful — Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism. The way I’ve come to conceptualize this bundle is that Transhumanism is sort of the backbone, and Longtermism is sort of this galaxy brain atop the bundle, because it binds together some of the major themes of the other ideologies into, like, kind of a single, pretty comprehensive normative futurology, or sort of worldview, about what the future can and ought to look like. So that was the impetus behind the acronym. So for example, you know, we were writing about an Oxford philosopher and neo-eugenicist named Nick Bostrom. We mentioned that he is a transhumanist who participated in the Extropian movement in the 1990s, anticipates the singularity, and is close colleagues with the leading modern Cosmist Ben Goertzel. He’s hugely influential and has very close ties to the Rationalist and Effective Altruism communities. In fact, his institute, the Future of Humanity Institute, shared office space for a long time… it might still, I’m not sure, but for many years he shared office space with the Center for Effective Altruism, which is sort of the main EA organization. And then Bostrom is also one of the founders of this Longtermist ideology. So that gives you a sense of, like, okay, you’re listing this one name, you know, and connecting him to all of these different ideologies. Doing that throughout the paper with other names and so on is just unmanageable. So hence the acronym was born.

Timnit Gebru: I just want to say that my interest was primarily in, you know, the eugenics angle of the whole AGI movement. So, when I approached you about writing a paper, it was like, okay, let’s talk about how eugenics thought is influencing this AGI movement, starting from why they want to create AGI to what they envision that it will do. And yeah, it just kept on being like, “Before we get to the point, we have to recall, as we say in section two, that Nick Bostrom did this thing and was also part of this other institute, which is also investing in this thing.” And it was just kind of impossible to get to the point that we were making. But I was also very surprised, and I don’t know if this was your experience. Of course, I can see the link to eugenics, because I’ve been around the Effective Altruists and the longtermist movement and the way they talk about how, you know, we have to work on AI to save humanity and all that, and I was very irritated by it for a long time. However, it’s when we were working on this paper that I realized that the link is direct, like it’s not this roundabout kind of subtle thing. It’s a direct link to eugenics. And that was very surprising to me.

Émile P. Torres: Yeah. So, maybe we can elaborate on that just a bit. Because, you know, this backbone of the bundle, transhumanism, I mean, that is uncontroversially considered to be a version of eugenics. It’s so-called “liberal eugenics,” which is supposed to contrast with the old authoritarian eugenics of the 20th century. Although I think there are pretty good arguments for why, in practice, a liberal eugenics program would ultimately be very illiberal and, you know, restrict freedom. So that’s another topic perhaps we could go into. But yeah, I agree. I mean, transhumanism itself was developed by 20th-century eugenicists. So you could distinguish between the first wave and the second wave of eugenics. The main difference between those two is the methodology. First wave eugenics was about trying to control population-level reproductive patterns. If you get individuals with so-called “desirable attributes” to have more children, and individuals with what are deemed to be “undesirable properties” to have fewer children, then over many generations (it’s a transgenerational process) you can change the frequency of certain traits within the population. So maybe the relevant trait is, like, you know, intelligence, whatever that means exactly. Second wave eugenics, that was really a response to the development of certain emerging technologies, in particular genetic engineering in the 1970s. But by the 1980s, there was plenty of talk of the possibility of nanotechnology radically enhancing us, modifying our bodies as well. And of course, AI is a big part of that as well. So that’s the defining feature of the second wave of eugenics. Transhumanism, then, was developed by these first wave eugenicists; it basically is this idea that, rather than just perfecting the human stock and preventing the degeneration of humanity, or certain groups of humanity, why not just, you know, transcend humanity as a whole? If we can create, you know, the most excellent, the best version of humanity possible through selective breeding, or maybe through emerging technologies, so-called person engineering technologies, why stop there? Why not try to create this sort of, like, superior post-human species? That idea goes back to the early 20th century. And then it really merged with the second wave methodology in the second half of the 20th century; the late 1980s and early 1990s is when modern transhumanism emerged. So all of this is to say that you’re exactly right: the connection between the TESCREAL bundle and eugenics, via transhumanism, is quite direct.

Timnit Gebru: Right. But what I was saying was also that the link is to the origins of the drive to create AGI. When we were looking into the TESCREAL bundle, I didn’t know what Cosmism was until we were reading the first book on AGI, which was written in, what, 2007, by Ben Goertzel and his collaborator. And I was like, “Oh, I’ve heard about this guy,” but he wasn’t super influential in my space, right? So I hadn’t really had to look into him or think about him very much. And then I started reading about his Cosmist manifesto and all of this stuff, right? And then it’s like, wow, okay, so this link is direct. He really wants to create AGI because he wants to create post-humans that are not even human. They called it transhuman AGI. So to me, that was… there have always been eugenicist undertones in artificial intelligence in general, and people have written about that. California, obviously, you know, was like the mecca of eugenics in the 20th century, and many people have written about different angles of this, starting from John McCarthy and some of the people who coined the term AI. But, you know, I still hadn’t seen that direct link. And so, you know, I’m not… you have written so much about some of these people, and you were in one of the movements, you were a longtermist yourself, and so you’ve been writing about their writings and their books. Unlike you, that has not been my profession. I’m a technologist, I’m just trying to work on building these things, and so I only read these things when I absolutely have to. I only read whatever Ben Goertzel is writing about paradise engineering in the universe or whatever when I absolutely have to. So working on this paper and seeing these direct links was very sad, actually, for me, I would say.

Émile P. Torres: Yeah. I mean, so, you know, I was in the longtermist movement, as you mentioned, for many years. The word longtermism was coined in 2017. But before the word was out there, there were people who worked on existential risk mitigation in particular, as well as on understanding the nature and number, and so on, of the different existential risks out there. So there were, sort of, longtermists before the word existed. I was part of that community. But also the overlap between the longtermist community and the transhumanist movement is pretty significant, which is consistent with this notion that the bundle is kind of a cohesive entity that extends from the late 1980s all the way up to the present. So yeah, I was very much immersed in this movement, this community and these ideas. I have to say, though, one thing that was surprising and upsetting for me, having been in this community but not really having explored every little nook and cranny of it, and maybe also just being a bit oblivious, is the extent to which a lot of the attitudes that animated the worst aspects of first wave eugenics were present throughout this community. Once you start looking for instances of these discriminatory attitudes, racism, ableism, sexism, xenophobia, classism and so on, they sort of pop up everywhere. So that was one surprising thing for me when we started working on the project. Ultimately, the first article that I wrote for the Dig was just kind of cataloging some of the more egregious and shocking instances of these kinds of unacceptable views. For example, a number of leading longtermists have approvingly cited the work of Charles Murray, you know, who is a noted racist.

Timnit Gebru: And the Effective Altruists as a whole, even the ones who are not necessarily Longtermists.

Émile P. Torres: Yeah, yeah, absolutely. I mean I mentioned in one of my articles that Peter Singer published this book in the 1980s, called “Should the Baby Live?” and basically endorsed the use of infanticide for individuals, you know, babies who have some kind of disability. So, yes, these ideas are sort of omnipresent, and it’s… once you start looking for them, they show up everywhere within the neighborhood of the TESCREAL bundle, including EA. And so that was something that was kind of surprising to me and disheartening as well.

Timnit Gebru: I think the first time I remember my brush with… maybe it would be good to give people a two-minute overview of the TESCREAL bundle, but I will just say, with Effective Altruism, I remember more than 10 years ago or something like that, somebody describing the idea to me, and just from the get-go, when I heard what they were saying, “We’re going to use data to figure out how to give our money in the most efficient way possible,” something about that rubbed me the wrong way already, because it reminds me of a lot of different things. It’s making things abstract, right? You’re not really connecting at a human level with the people around you or your community; you’re in the abstract, trying to think about the, you know, “global something.” So that was that. And I was like, okay, but I didn’t have to be around this group that much. Then I remember talking to someone who told me that they were at the Effective Altruism conference. They said their keynote speaker was Peter Thiel. I was like, okay, Effective Altruism, Peter Thiel. Then this person explained to me how Peter Thiel was talking about how, to save the world, people have to work on artificial intelligence. That is the number one thing you need to be working on. This was more than 10 years ago. And I could not believe it. And then the person went ahead to explain to me why: “Well, you know, even if there was a 0.000000-whatever-1 chance of us creating something that is super intelligent, and even if there’s a really tiny chance of that super intelligent thing wanting to extinguish us, the most important thing to do is to make sure that that is stopped, because there will be so many people in the future.” So this person said that to me back then, right, and at that time I didn’t know what longtermism was, or anything. I just had this association with Effective Altruism and I was like, “This is ridiculous, you’ve got to be kidding me.” But what was different back then versus now is that this type of thinking was not driving the most popular and pervasive versions of artificial intelligence, the field or the systems. People doing this were fringe. And even when people like Elon Musk at that time were talking about how AI can be the devil or invoke the devil and things like that, many people in the field were, like, laughing at them. So it wasn’t a situation where you had to work in the field and either buy into it, because that’s where the money comes from, or interact with them too much. It was the kind of thing where you could avoid them. But in the last few years, it became not only impossible, but they have been at the forefront of all of the funding and all of the creation and proliferation of these huge companies. Anthropic is one that got hundreds of millions of dollars from Effective Altruism. And so that’s why, for me, I wanted to make a statement about it and collaborate with you to work on this. Because I kind of feel like they’re actually preventing me from doing my job in general. But yeah, before we jump into it, maybe you can explain a little bit what TESCREAL stands for, right? We’ve gone through transhumanism, but then there are a number of others. Actually, we might have to include the new e/acc thing there too.

Émile P. Torres: Yeah. Maybe the acronym needs to get even clunkier to incorporate this new AI accelerationist movement.

Timnit Gebru: Yeah.

Émile P. Torres: So yeah, very briefly, within this kind of TESCREAL movement, this community, there are two schools of thought. They don’t differ primarily in terms of the particular techno-utopian vision of the future. In both cases, they imagine us becoming digital, eventually colonizing space, radically augmenting our intellectual abilities, becoming immortal and so on. But they differ on their probability estimates that AGI is going to kill everybody. So you’ve got accelerationists, who think that the probability is low. In general, there are some nuances to add there. But then there are Doomers, AI Doomers. Eliezer Yudkowsky is maybe the best example.

Timnit Gebru: Didn’t he think that the singularity was coming in 2023?

Émile P. Torres: That was a long time ago. I think in the early 2000s his views shifted. He got a bit more anxious about the singularity: maybe the singularity is not going to inevitably result in this kind of wonderful paradisiacal world in the future, but actually could destroy humanity. But anyway, so yeah, the TESCREAL bundle is Transhumanism, this notion that we should use technology to radically enhance the human organism. The second letter is Extropianism. This was the first organized transhumanist movement, which emerged most significantly in the early 1990s and was associated with something called the Extropy Institute, founded by a guy named Max More. And then Singularitarianism. This is also kind of just a version of transhumanism that puts special emphasis on the singularity, which has a couple of different definitions, but the most influential has to do with this notion of an intelligence explosion. So once we create an AI system that is sufficiently intelligent, it will begin this process of recursive self-improvement. And then very quickly, you go from having a human-level AI to having a vastly superintelligent entity that just towers over us to the extent that we tower over the cockroach, something like that. So that’s Singularitarianism. And then Cosmism is kind of, you know, transhumanism on steroids. In a certain sense, it’s about not just radically modifying ourselves, but eventually colonizing space and engaging in things like space-time engineering. So this is just like manipulating the universe at the most fundamental level to make the universe into what we want it to be. So that’s the heart of Cosmism. It has a long history going back to the Russian Cosmists in the late 19th century, but we’re really focused on the modern form that came out of what was articulated by Ben Goertzel, the individual who christened the term AGI in 2007. So then Rationalism is, like, basically: if we’re going to create this techno-utopian world, that means that a lot of “smart,” quote-unquote, people are going to have to do a lot of smart things. So maybe it’s good to take a step back and try to figure out how to optimize our smartness, or rationality. That is really the heart of Rationalism. How can we be maximally-

Timnit Gebru: Take emotions out of it, they say, although they’re some of the most emotional people I’ve talked to.

Émile P. Torres: Yeah, yeah. I mean, there’s-

Timnit Gebru: They’re like robots. To me, Rationalism feels like: let’s act like robots, because it’s better. Any human trait that is not robot-like is bad. So let’s figure out how to communicate like robots. Let’s figure out how to present our decision-making process like that of a computer program or something. That’s how it feels to me, which then makes sense of, you know, how cultural workers are currently being treated, like how artists and other kinds of cultural workers are being treated by this group of people.

Émile P. Torres: Yeah, so I think from the Rationalist view, emotions are sort of the enemy. I mean, they’re something that’s going to distort clear thinking. So an example that I often bring up, because I feel like it really encapsulates this sort of alienated or, you might say, robotic way of thinking, is this LessWrong post from a bit more than a decade ago by Eliezer Yudkowsky, in which he asked: if you’re in a forced-choice situation and you have to pick between these two options, which do you choose? One is that a single individual is tortured relentlessly and horrifically for 50 years. The other is that some enormous, unfathomable number of individuals have an almost imperceptible discomfort, an eyelash in their eye. Well, if you crunch the numbers, and you really are rational, and you’re not letting your emotions get in the way, then you’ll say that the eyelash scenario is worse. So if you have to choose between the two, pick the individual being tortured for 50 years. That is a better scenario than all of these individuals who just go, “Oh!”

Timnit Gebru: The through line… the transhumanism, it’s like the “TESC” part. And then the “REAL” part does not, I guess, well, the longtermists seem very much like transhumanists, but the “REAL” part does not have to be transhumanist. However, this utilitarian maximizing of some sort of utility, I think, exists across all of them.

Émile P. Torres: Yeah, a lot of the early transhumanists were sympathetic to utilitarianism. I mean, you don’t have to be a utilitarian to be a transhumanist, just like you don’t have to be a utilitarian to be an effective altruist, or even a longtermist. But as a matter of fact, utilitarianism has been hugely influential, even among the transhumanists. I mean, a lot of them are consequentialists. Nick Bostrom, in one of his early papers, his first paper on existential risk, defined it in terms of transhumanism. Then a year later, he basically expanded the definition of existential risk to incorporate explicit utilitarian considerations. So that gives you a sense of how closely bound up, historically, these ideas have been. So you’re totally right: utilitarianism is this notion of maximizing value, whatever it is we value. If it’s happiness, if it’s jazz concerts, the more the better. You want to multiply it as much as possible. So, yeah, unless you have anything else to add, to help me continue with-

Timnit Gebru: Yeah, I think we’re at the “EAL” part.

Émile P. Torres: Yeah, so the “EAL” part. Effective Altruism is basically just… one way to think of it is that it’s kind of what happens when rationalists, rather than focusing just on rationality, pivot to focusing on morality. So the rationalists are trying to optimize their rationality; the effective altruists are trying to optimize their morality. I think there are ways of describing Effective Altruism that can be somewhat appealing. They want to do the most good possible. But when you look at the details, it turns out that there are all sorts of problems and deeply unpalatable-

Timnit Gebru: 20th century eugenicists also wanted to do the most good possible, right? That’s how everybody kind of describes… Everybody in this movement describes themselves as wanting to save humanity, wanting to do the most good possible. Like, nobody’s coming and saying, “We want to be the most evil possible.”

Émile P. Torres: Yeah, I mean, there are many in the community who literally use the phrase “saving humanity.” What we’re doing is saving humanity. So there is, as a matter of fact, a kind of grandiosity to it, a kind of Messianism. We are the individuals who are going to save humanity, perhaps by designing an artificial superintelligence that leads to utopia rather than completely annihilating humanity. So I mean, this is back when I was-

Timnit Gebru: Or counteracting the opposite one, right? We are the ones who are going to save humanity by designing the AGI god that’s going to save our humanity. Also, we’re the ones who should guard against the opposite scenario, which is an AGI gone wrong, killing every single human possible. We are the ones who need to be the guardians. In both cases, this is the attitude of the bundle.

Émile P. Torres: Yeah. That leads quite naturally to Longtermism, which is basically just what happens if you’re an EA. Again, EA is hugely influenced by Rationalism. But if you’re an EA and you start reading about some of the results from modern cosmology, how big is the universe? How long will the universe remain habitable? And once you register these huge numbers, all the billions, hundreds of billions of stars out there in the accessible universe and the enormous amount of time that we could continue to exist, then you can begin to estimate how many future people there could be. And that number is huge. So one estimate is that within the accessible universe, there are 10 to the 58 future people. So one followed by 58 zeros. So if the aim, as an Effective Altruist, is to positively influence the greatest number of people possible, and if most people who could exist will exist in the far future, then it’s only rational to focus on them rather than current-day people, because there are only 1.3 billion people in multidimensional poverty. That’s a lot in absolute terms, but it is a tiny number relative to 10 to the 58. And that’s supposed to be a conservative estimate. So that’s ultimately how you get this longtermist view that the value of the actions we take right now depends almost entirely on their far-future effects, not on their present-day effects. That’s the heart of longtermism. And that’s why people are so obsessed with AGI: because if we get AGI right, then we get to live forever. We get to colonize space. We get to create enormous numbers of future digital people spread throughout the universe. And in doing that, we maximize value, going back to that fundamental strain at the heart of this TESCREAL movement. We maximize value. So that’s ultimately why many longtermists are obsessed with AGI. And again, if we get AGI wrong, that forecloses the realization of all this future value, which is an absolute moral catastrophe.
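
To make the expected-value arithmetic Émile is describing concrete, here is a minimal illustrative sketch. Only the 10 to the 58 and 1.3 billion figures come from the discussion above; the probability is a hypothetical placeholder, since the point of the argument is that any nonzero probability multiplied by such an enormous number swamps present-day considerations.

```python
# Minimal sketch of the longtermist expected-value arithmetic described above.
# The 10**58 and 1.3 billion figures are the ones cited in the conversation;
# the probability is a hypothetical placeholder, not a number anyone cites.

future_people = 10 ** 58   # cited estimate of potential future people in the accessible universe
present_poor = 1.3e9       # cited figure for people in multidimensional poverty today
p_influence = 1e-30        # hypothetical, arbitrarily tiny chance of affecting the far future

# Expected number of far-future "beneficiaries" under this reasoning.
expected_future_beneficiaries = p_influence * future_people   # 1e28

print(expected_future_beneficiaries > present_poor)  # True
```

However small that probability is made, the product stays astronomically larger than 1.3 billion, which is the move the speakers go on to criticize: speculative far-future value is treated as outweighing concrete present-day harms.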

Timnit Gebru: I was going to say, it’s basically a secular religion that aligns very well with the large corporations that we’re seeing right now and the billionaires who are funding this movement, because you’re not telling them that they shouldn’t be billionaires or that they should just give away their resources right now to people who exist right now. You’re telling them that they need to be involved in this endeavor to save humanity from some sort of global catastrophic risk. And therefore, they need to put their intellect and their money to that use, not, you know, toward the person they’re disenfranchising, or the person they’re exploiting. For instance, you know, Elon Musk had the biggest racial discrimination case in California’s history because of what he was doing to his workers. And of course, then he said all sorts of other things. But in this ideology, you’re telling him, “No, no, this is a small concern. This is not a big concern. You, as a very important and smart person, have to be thinking about the far future and making sure that you save all of humanity. Don’t worry about this little concern of racial discrimination in your factory.” So the reason I became involved in this bundle, or not involved in this bundle, sorry, analyzing this bundle, is because, you know, being in the field of AI, I’ve seen their growing influence, from, you know, the DeepMind days, where now I know the founders of DeepMind, especially Shane Legg, are in this bundle. The other thing to note is that they all go to the same conferences and are in each other’s movements. That’s why we made it, you know, one acronym. Effective altruists are very much involved in Rationalism and rationality, and very much in the other ideologies too. So we see DeepMind being founded. It’s one of the most well-known companies whose explicit goal was to create this AGI, this artificial general intelligence, that’s going to bring people utopia. Then we see it was funded by billionaires in this bundle, like Elon Musk and Peter Thiel. Then we see Nick Bostrom’s “Superintelligence” coming out, where he warns about both utopia, if we build some superintelligent thing, and apocalypse, if we get it wrong. Then you start having people like Elon Musk going around talking about how we’re going to have the devil. Then, once Google buys DeepMind, you have them all panicking, saying they need to create their own, basically, DeepMind that is quote-unquote “open.” I don’t know if OpenAI still has this on their company page, but they were saying that if somebody else achieves beneficial AGI, they will consider their mission complete. How nice of them. So these people in this bundle come along and they panic; they say they’re going to create OpenAI to, once again, save humanity. And I remember how angry I was when that announcement came out. I wrote a whole letter just to myself about it because I didn’t buy it. It was this saviorism by this really homogeneous group of people. Then of course, now we have a similar thing going, which is that OpenAI is essentially bought by Microsoft, as far as I’m concerned. And then you have them panicking yet again, with the Future of Life Institute, Max Tegmark, each of these people we can say so much about, coming up with this letter saying that we need to pause AI and things like that. It got so much attention. It was signed by people including Elon Musk, saying we need to pause AI, and then the next day, what happens? Elon Musk announced his xAI thing.
So it’s like this cycle that goes on every few years, both utopia and apocalypse, right? “Oh, we’re gonna bring utopia. No, there might be an apocalypse. We’re gonna break this.” It’s the same people. Two sides of the same coin. And, you know, I’m only seeing this growing after OpenAI. OpenAI wasn’t effective altruist enough for a set of people. They left and founded Anthropic. Anthropic got hundreds of millions of dollars from TESCREAL billionaires; most of their money came from Sam Bankman-Fried, who was basically convinced to earn his money by the Center for Effective Altruism, with this “earn to give” thing where you earn as much money as possible and give it away to effective altruist causes. And of course, his cause was stopping the AGI apocalypse, or bringing the AGI utopia. And so then he gives all this money to Anthropic. And now you have all of these organizations that are incredibly influential, in the mainstream. They are no longer fringe like they were 10 years ago. And that’s why we’re here today talking about them.

Émile P. Torres: Yeah, maybe I’ll just add something real quick to that, which is that, you know, years ago, when I was really active in this community, I remember having conversations with people about how in the heck do we get people in power to pay attention to AI, in particular superintelligence. And it was just such a struggle to convince individuals like, you know, Geoffrey Hinton, for example, Yoshua Bengio, and so on. How do we convince them that superintelligence is either going to result in a techno-utopian world, where we live forever, colonize space, and so on, or in complete annihilation? So there was a huge struggle, and it’s just amazing to witness over the past-

Timnit Gebru: It’s unfortunate. Especially with Yoshua, because he was not in that bundle. And I knew him. I had spoken to him for a long time, not as much now. His brother was my manager. And he was not in this whole existential risk thing; then all of a sudden, you know, we’re all trying to figure out what’s going on, because his brother has the complete opposite view. He’s definitely not in that crew. But Yoshua talked to Max Tegmark and, all of a sudden, he’s in full-blown Doomer mode. And this is why I think it’s a secular religion. I’m trying to understand what it is that makes scientists want to have that. Is it because they want to feel super important? So Cho, Kyunghyun Cho, who used to be Yoshua’s postdoc and is very influential in natural language processing and deep learning, recently came out and said, thankfully, that he’s very aware that, you know, ideologies like EA are the ones that are driving this whole existential risk and Doomer narrative. He said that there are many people in Silicon Valley who feel like they need to save the world, that it’s only them who can do it, and that this is a widespread kind of feeling. I’m glad he spoke up, and I think more researchers like him need to speak up. But it’s very unfortunate that back about 10 years ago, people like Yoshua were not taking people like Elon Musk seriously. And Geoff Hinton, I mean, his student Ilya is one of the founders of OpenAI and is nearly as full-on in this kind of bundle. So I’m not surprised that he said that. But, you know, to give you an example, a sense of how they minimize our current, present-day concerns in favor of this abstract representation of the apocalypse that supposedly everybody should be concerned about: Geoff Hinton was asked on CNN about my concerns about language models, because I got fired for a number of my concerns. Meredith Whittaker was pushed out because she was talking about Google’s use of AI for the military. He said that my concerns were minuscule compared to his. This is the way they get to dismiss our present-day concerns while actually helping bring them about through their involvement in these various companies that are centralizing power and creating products that marginalize communities.

Émile P. Torres: Yeah. So thanks for that, Timnit. Should we maybe try to answer a few questions? So maybe I’ll read one out, but is the most recent question good for you, Timnit?

Timnit Gebru: Yeah, sure.

Émile P. Torres: So okay, I’ll read it out. Question for the speakers. Where do researchers like Geoffrey Hinton fall? I very much agree that people like Elon Musk in OpenAI have been extremely inconsistent.

Timnit Gebru: So I can answer a little bit of that question. Personally, when you look at the way in which we’ve described the TESCREAL bundle, and the fact that the AGI utopia and apocalypse are two sides of the same coin, to me, Elon Musk has been consistent. Because his position is always: whenever he feels like he cannot control a company that’s purporting to create AGI, he panics and says, “We’re going to have an apocalypse.” That’s what happened in 2013 or 2014, when DeepMind was acquired by Google. That’s what happened when OpenAI was getting tons of money from Microsoft. And that’s what happened just now, when he signed and publicized the letter from the Future of Life Institute saying that we need to pause AI. Then the next day, he announces his own thing. This is exactly what he did back in 2015, too. He complained, and then the next day he announced his own thing. So I think he’s been super consistent. People like Geoff Hinton hadn’t been in this bundle, but their students… so what happened is the merger between the deep learning crew, which wasn’t necessarily in this bundle, like Yoshua and Geoffrey Hinton and all that, people who have been around for decades, and companies like DeepMind and OpenAI. You now have the merger between deep learning and machine learning researchers and people in the TESCREAL bundle. And so what we’re seeing with people like Geoff Hinton is that his student, Ilya Sutskever, was a cofounder of OpenAI, and now, you know, he’s in that bundle. And so Geoff Hinton is going around… but if you look at his talks and arguments, it’s so sad. A lot of women especially have been talking about how much of what he says in this area makes no sense. So yeah, that is kind of my point of view on the machine learning side.

Émile P. Torres: Alright. So, next question. I’ll take one quickly from Peter, who asks, “What do you see as the flaw in the longtermist reasoning? Because most of the philosophical counters to longtermism seem to imply antinatalism.” So antinatalism is this view that, there are different versions of it, but one is that it’s wrong to have children, or that birth has a negative value, something of that sort.

Timnit Gebru: Why do we need both extremes? This is what I don’t understand.

Émile P. Torres: Yeah, this is exactly what I was going to say. I mean, first of all, the flaws with longtermism, that would be a whole hourlong talk. So maybe I could just direct you to a forthcoming book chapter I have, which is nice and short and to the point and, I think, provides a novel argument for why the longtermist view is pretty fundamentally problematic. It’s called “Consciousness, Colonization and Longtermism.” I put it up on my website. The other thing is, antinatalism is not the alternative; the alternatives do not imply antinatalism. I’ve mentioned before, in writing and on podcasts and so on, that long-term thinking is not the same as longtermism. You can be an advocate, a passionate advocate, for long-term thinking, as I am, and not be a longtermist. You can decline to believe that we have this kind of moral obligation to go out and colonize and plunder the cosmos, our so-called cosmic endowment of negative entropy, or negentropy, and then create, you know, the maximum number of people in the future in order to maximize value. That’s accepted even on a moderate longtermist view, and it is very radical. And so you can reject that and still say: I really care about future generations. I care about their well-being; hence, I care about climate change, I care about nuclear waste and how that’s stored, and so on and so on. So I would take issue with the way the question itself is couched.

Timnit Gebru: Yeah. And why does the counter to longtermism have to come only from Western philosophy, right? There are many different groups of people who have practiced long-term thinking, and their idea of it is safeguarding nature, working together with nature and thinking about future generations. There are so many examples of this that don’t have to come from European thought. So, you know, we didn’t need longtermism, and now we have it. And now we’re wasting our time trying to get rid of it.

Émile P. Torres: Let me just add real fast, now that I have finished this big book on the history of thinking about human extinction in the West, because basically, I was part of this TESCREAL bundle and I was like, oh, what’s the history? So that’s what the book ended up being. Now that I’ve done that, I’m just more convinced than ever, that the Western approach to thinking about these issues is impoverished and flawed in certain ways that haven’t really even been properly identified, articulated and so on. And so for me, that book project is an inflection point where I am just so unconvinced by the whole Western view and feel like it’s just problematic. Most of my work at this point is like trying to understand things from indigenous perspectives and you know the perspective that-

Timnit Gebru: How did you get out of longtermism? I know that’s probably a conversation for another day, but I’m so curious. I think, with all of our collaborations, I never asked that question like, how were you… How did you get in it? And how did you get out of it? And maybe we can answer an audience question after that. But if you have a short spiel about that… because I think that would be helpful in trying to figure out how to get people out of it.

Émile P. Torres: Yeah, I mean, there are really three issues. So I’ll go over them in insufficient detail. One is the most embarrassing, which is that I started to read and listen to philosophers and historians and so on, scholars in general, who weren’t white men. And it was just like, wow, okay, there’s this whole other perspective, this whole other paradigm, this whole other way of thinking about these issues than the one that resulted in the techno-utopian vision at the heart of the TESCREAL bundle, which I was somewhat enthusiastic about. It rendered that vision just patently impoverished. So that was one of the issues. The other was just sort of studying population ethics and realizing that the philosophical arguments that underlie longtermism are not nearly as strong as one might hope, especially if longtermists are going out and shaping UN policy and the decisions of tech billionaires. And the other one was just reading about the history of utopian movements that became violent, and realizing that, okay, a lot of these movements combined two elements: a utopian vision of the future, and a kind of broadly utilitarian mode of moral reasoning. When you put those together, then if the ends justify the means, and if the ends are utopia, then what means are off the table? So that was the other sort of epiphany I had: “Wow, longtermism could actually be really dangerous.” It could recapitulate the same kind of violence, extreme actions, that we witnessed throughout the 20th century with a lot of utopian movements.

Timnit Gebru: And they explicitly say that some of those tragedies are just a blip, right? They’re not as bad as, like, the tragedy of not having the utopia that they think we all are destined to have.

Émile P. Torres: This is the galaxy brain part. When you take a truly cosmic perspective on things, even the worst atrocities or the worst disasters of the 20th century, World War II, the 1918 Spanish flu and so on, those just are “mere ripples on the surface of the great sea of life,” to quote Nick Bostrom. So there’s a kind of… this grand cosmic perspective kind of inclines people to adopt this view, to minimize or trivialize anything that is sub-existential, anything that doesn’t directly threaten-

Timnit Gebru: There’s a good question here, and there are multiple of them. Sharon has two questions, which I’ll lump into one. One is about the degree to which ethics is being emphasized and factored into the data collection and cleaning processes required by machine learning systems. And, you know, there’s a vast underclass that has emerged, tasked with feeding data into these systems. What are your thoughts on this? And how does it play into your own research or work? Well, for me, personally, my institute has worked on the exploited workers behind AI systems. And what’s really interesting is that while you have the TESCREAL organizations like OpenAI talking… and you can just go read what Sam Altman writes and what Ilya and the rest of them write… while they’re talking about how utopia is around the corner, and how they have announced this huge AGI alignment group and they’re gonna save us, they’re simultaneously disenfranchising a lot of people. They have a bunch of people that they have hired. Karen Hao just had a great article recently in The Wall Street Journal about the Kenyan workers who were filtering out the outputs of ChatGPT. And one of them was saying how just five months of working on this, the mental state that he was in afterwards, made him lose his entire family. Just five months, right? So that’s what’s going on. So as they’re talking about how AGI is around the corner, and how they’re about to create this superintelligent being that needs to be regulated because it’s more powerful than everything we’ve ever thought of, they’re very intentionally obfuscating the actual present-day harm that they are causing by stealing data from people like creatives. And it makes total sense to me that they’re thinking about just automating away human artists, right? Because that’s just, like, the non-good part about being human for them, the part that they want to transcend. But also it helps them make a lot of money. So they’re stealing data. They’re exploiting a lot of workers and traumatizing them in this process. However, if you take this cosmic view, like Émile was saying, these are just blips on the way to utopia, so it’s fine. It’s okay for them to do this on the way to the utopia that we’re all going to have if we get the AGI that’s going to save humanity.

Émile P. Torres: Yeah, so basically, I think longtermists would say that, okay, some of these things are bad. But again, there’s an ambiguity there. They’re bad in an absolute sense. But relatively speaking, they really just are… I mean, the 1918 Spanish flu killed millions and millions of people. And that is just a mere ripple. It’s just a tiny little blip in the grand scheme. So all of the harms now, it’s not that we should completely dismiss them. But don’t worry too much about them, because there are much bigger fish to fry, like getting utopia right by ensuring that the AGI we create is properly value-aligned, that it does what we say. So, for example, when we say “cure aging,” it does that: it takes about a minute to think about it, and it cures aging. Colonize space. You know, that’s what matters just so much more, because there are astronomical amounts of value in the future. And the loss of that value is a much greater tragedy than whatever harms could possibly happen right now to current people.

Timnit Gebru: So Michael asks, if pausing AI research is something we should be skeptical of, what sorts of policies should we support to prevent the immediate harms posed by AI systems? That’s a great question, because when we saw this “pause AI” letter, we had to come up with a response. So I’ll link to it. But in our response, we said that we need to consider things like how the information ecosystem is being polluted by synthetic text coming out of things like large language models. We need to consider labor and what’s happening to all the exploited workers and all the people whose labor these companies are trying to devalue and displace. We need to think about all of the harmful ways in which AI is being used, whether it is at the border to disenfranchise refugees, or, you know, bias in face recognition, people being falsely accused of crimes based on being misidentified by face recognition, etc. So, first, I think we need to address the labor issue and the data issue. Because this is what they do, right? When they’re talking about this large cosmic, whatever, galaxy thing, you think that there isn’t mundane, day-to-day stuff that they’re doing, like any normal corporation, that can be regulated by normal agencies that have jurisdiction. So we can make sure that we can analyze the data that they’re using to train these systems and make sure that they have to be transparent about it, as in, you know, prove to us that you’re not using people’s stolen data. For instance, make it opt in, not opt out. And also make it difficult for them to exploit labor like they are. That’s just one example. But just to be brief, I will post our one-page response to that “pause AI” letter in the chat, so maybe you can see it in the comments or something like that.

Émile P. Torres: So we’re more or less out of time. But one of the harms that also doesn’t get enough attention is, on the one hand, that the release of ChatGPT, just releasing it into society, sort of upended systems within a lot of universities, because suddenly students were able to cheat. And it was really difficult to… I knew multiple professors who had students who turned in papers that were actually authored by ChatGPT. But the flip side of that is that there are also some students who have been accused of plagiarizing, meaning using ChatGPT, who actually didn’t. And Timnit, you were just tweeting the other day about a student.

Timnit Gebru: And this is kind of… this cosmic view that we’re talking about allows these companies to deceive people about their capabilities. So, for example, if OpenAI makes you believe that they’ve created this superintelligent thing, then you’re going to think that, and then you’re going to use it and many of their systems. Similarly, if they deceive you into thinking that they’ve created a detector that detects, with high accuracy, whether an output is from ChatGPT or not, you’re going to use it. So what’s happening is that people have been using these kinds of systems to falsely accuse students of not creating original work. Then OpenAI quietly deprecated their detector, and it’s really interesting how loud they are about the supposed capabilities of their systems and how quiet they were about, you know, removing this detector. So, I think, for me, my message to people would be: don’t buy into this superintelligence hype. Keep your eye on the present-day dangers of these systems, which are based on very old ideas of imperialism, colonization, centralization of power and maximizing profit, and not on safeguarding human welfare. And that’s not a futuristic problem; that’s an old problem that still exists today.

Émile P. Torres: So that ties into… maybe we’ll take one last question. We’re so sorry to everybody who asked a question we didn’t get to. Genuine apologies for that. Okay, so for those who may not have the tech background, what conversations do you think must happen from below? Especially as this targets marginalized communities, Global South, class, and so on? Timnit, your thoughts on that? I can say a few things but-

Timnit Gebru: I can say something briefly and then I’m curious to hear your thoughts. Well, you know, to know the harms, you don’t have to have a tech background. So that’s a good thing to remember, right? When something is harmful, you don’t have to know the ins and outs of how it works. And often the people who do know the issues are people with lived experience of being algorithmically surveilled, or of losing their jobs, or of being accused of something they didn’t do. The student who emailed us, who was falsely accused of writing an essay, I mean, plagiarizing an essay, didn’t need to know anything about how it worked to know that this was an injustice. So I think that’s the first point. The first point is that people need to know that they need to be part of the conversation and that they don’t need to know how it works. There’s a concerted effort to mislead you as to the capabilities of current AI systems. The second point, to me, is that we should be very skeptical of companies that claim to be building something all-knowing, that say, “Oh my God, this all-knowing thing needs to be regulated,” and then complain when it is. That’s what OpenAI did. They went to the U.S. Congress and said that there needs to be regulation and that they’re scared. Then the EU regulates it and they’re like, “Oh, we might have to pull out of the EU.” So just think of it as entities using certain systems, and ask whether those entities are doing the right thing and whether those systems are harmful or not; there’s really nothing new about this new set of technologies that can be used to disenfranchise people. As much as possible, I highly recommend that people, if they are in tech or are thinking about policy, invest in small local organizations that don’t have to depend on these large multinational corporations. And think about how the fuel for this exploitation is data and labor: where that comes from, how people can be adequately compensated for their labor and their data, and how data shouldn’t be taken without people’s consent.

Émile P. Torres: The only thing I would add to that is… tying this back to the central thrust of this whole conversation… just, I think, being aware of some of the ideologies that are shaping the rhetoric, the goals and so on of these leading AI companies, and sort of fitting the pieces together and understanding why it is that there’s this race to create AGI. Again, you know, these ideologies that fit within the TESCREAL bundle, if not for the fact that they are immensely influential, that they are shaping some of the most powerful individuals in the tech world, from Elon Musk to Sam Altman and so on, perhaps a lot of this would be a reason to chuckle. But it is hugely influential. So I think the first step in figuring out a good way to combat the rise of these ideologies is at least just understanding what they are, how they fit together and the ways in which they’re shaping the world we live in today.

Timnit Gebru: I was gonna say, Nitasha Tiku has a great article that just came out in The Washington Post that details the amount of money that’s going into this kind of AI doomerism on the Stanford campus from the effective altruists. So this is just one angle, but I think it’s good to know how much money and influence is going into this.

Émile P. Torres: Alright, so thanks for having us. I think Zuade might come back in a moment.

Zuade Kaufman: I just wanted to thank you. That was just so intriguing and important. And thank you for all your work and for being part of Truthdig.

Émile P. Torres: Thanks for hosting us.

Zuade Kaufman: Yeah, just keep the information rolling. And I know you also provided some links in the chat that we’ll share with our readership, whatever further readings you think they should be doing and, of course, buying your book. Thank you so much.
