It’s been a no good, very bad year for Effective Altruism, or “EA” for short, the cultural movement that aims to use “evidence and reason” to figure out the best ways to do the “most good.”

One year ago, the EA community was flying high. Its poster boy, a philosopher named William MacAskill, had just published his book “What We Owe the Future,” which makes the case that we should be far more concerned about humanity’s long-term future — thousands, millions, even billions of years from now — than we ordinarily are. Effective Altruists call this “longtermism,” an idea built around a vision of the future in which we reengineer humanity, colonize space, plunder the vast resources of the cosmos, and ultimately maximize the total amount of “value” in the universe by creating huge numbers of “digital people” living “happy” lives in giant computer simulations.

If this sounds bizarre and potentially dangerous, that’s because it is. Yet MacAskill’s book, longtermism and the EA movement that he co-founded all received mostly favorable coverage from leading news outlets like The New Yorker, The Guardian, and Time magazine. MacAskill himself was a guest on “The Daily Show” with Trevor Noah, who was impressed with MacAskill’s claim that he gives away more than 50% of his income to EA-approved charities.

This was EA’s big opportunity to convince the public to join the movement and embrace the longtermist “ethic.” And boy did it backfire, due to a series of scandals that, as of this writing, starts and ends with a certain crypto heavyweight named Sam Bankman-Fried, whose criminal trial for perpetrating one of the largest financial frauds in U.S. history recently concluded with seven guilty verdicts. 

The Bankman-Fried debacle was just one of many controversies to have embroiled the EA movement over the past year and a half. In fact, there were so many that it’s difficult to keep track. And so we run the risk of the scandals being buried in the collective memory and forgotten.

This is a strategy employed — whether consciously or not — by people like Elon Musk and Donald Trump. To modify a line often attributed to Joseph Stalin, one scandal is a tragedy; a million scandals are a statistic — and statistics don’t have the same psychological force or impact that tragedies do.  

For example, do you recall that a flight attendant accused Musk of “exposing his erect penis to her” and of offering “to buy her a horse in exchange for an erotic massage”? Or that Musk’s company SpaceX then paid her $250,000 to stay quiet about this incident? Probably not, even though this was all over the news in early 2022. The reason is that there’s been so much bad press surrounding Musk over the past year that most of us have lost track of each individual misdeed. In effect, these scandals start to cancel each other out, which is why it’s sometimes better, from a public relations perspective, to have a thousand big scandals than just one glaring transgression.

I thought it might be useful to review some of the more notable incidents involving the EA community over the last 18 months. Compiling these into a single article underscores that these are not one-off controversies afflicting the movement; rather, they evince a pattern of behaviors and proclivities that should make reasonable people more than a little worried that the EA worldview has become ubiquitous within Silicon Valley, is driving the current race to build artificial general intelligence, and is beginning to infiltrate major governing bodies like the U.K. government and even the United Nations.

We begin with Bankman-Fried, though not where one might expect — namely, the catastrophic implosion of his cryptocurrency platform FTX in November 2022. Instead, let’s rewind to April of that year, when Bankman-Fried was interviewed on the “Odd Lots” podcast. During the conversation, he tried to explain how DeFi works, where “DeFi” stands for “decentralized finance,” defined by Coinbase as “an umbrella term for peer-to-peer financial services on public blockchains, primarily Ethereum.” (Ethereum was co-created in 2013 by Vitalik Buterin, a recipient of the Thiel Fellowship who also made his fortune through crypto and is now one of the major funders of EA-longtermist organizations.)

According to Bankman-Fried’s account, DeFi — which he invested and traded in — is basically “one of the largest Ponzi schemes in history.” The following month, a journalist from the Financial Times asked Bankman-Fried “what he thinks of critics who liken crypto to a Ponzi scheme,” to which he replied: “By number of Ponzi schemes there are way more in crypto, kinda per capita, than in other places.” He added that, “by size of actual Ponzis, I’m not sure that it is particularly unusual. It’s just like a ton of extremely small ones.”

What’s scandalous here is that no one in the EA leadership, including MacAskill, cared about these remarks. No one questioned the ethics of Bankman-Fried’s crypto enterprise bankrolling major EA-longtermist projects and organizations. Indeed, the leaders of EA were more than happy to take his money — and there was a whole lot of it: an estimated $46 billion in committed funding before FTX collapsed. After that happened, these very same EA leaders turned around and admonished Bankman-Fried for compromising his moral integrity and failing to take “common-sense moral constraints” seriously when he chose to defraud his customers. Yet none of these people had a problem with billions of dollars in funding coming from a space that Bankman-Fried himself described as full of Ponzi schemes.

Worse, “sources say that MacAskill [was] repeatedly told that Bankman-Fried was untrustworthy, had inappropriate sexual relationships with subordinates, refused to implement standard business practices and had been caught lying during his first months running Alameda, a crypto firm that was seeded by EA investors, staffed by EAs and dedicated to making money that could be donated to EA causes,” according to Time magazine. For all of their post-FTX talk of integrity and commonsense morality, the EA leadership consistently demonstrated that the ends (tens of billions of dollars for EA) can justify the means (patently unacceptable behavior in the Ponzi-esque space of crypto).

This brings us to FTX itself, which collapsed in early November 2022 after the equivalent of a “bank run” from customers, causing Bankman-Fried to lose most of his fortune almost overnight. Many people are, by now, familiar with the basics of what happened, so we don’t need to go into detail here. Suffice it to say that Bankman-Fried has now been found guilty of all seven criminal fraud counts brought against him, after his closest allies at FTX flipped and testified against him in court. He faces up to 115 years behind bars, and is scheduled to be sentenced next March.

Bankman-Fried was probably the most lionized advocate of EA-longtermism before his downfall, next to his moral “adviser,” William MacAskill. In fact, Bankman-Fried ended up going into crypto because of a lunch he had with MacAskill in 2012. Unsure about which career to pursue, he was persuaded to “earn to give,” whereby one aims to land the most lucrative job possible in order to donate more to EA-approved charities, even if this means working for what MacAskill describes as an “immoral organization” — like a petrochemical company. To quote a fawning article about Bankman-Fried from 2022, the idea behind “earning to give” is to “get filthy rich, for charity’s sake.” If it weren’t for that conversation with MacAskill more than a decade ago, Bankman-Fried probably wouldn’t be in prison today.

Yet the FTX fiasco is just the most spectacular example of dishonesty and malfeasance since 2022. There were plenty more instances that, if not for FTX, would probably have received more unwanted attention than they did. For example, toward the end of 2022, I became aware that Effective Ventures, a federation of EA organizations that includes the Centre for Effective Altruism and 80,000 Hours — both co-founded by MacAskill — had purchased a “palatial estate” outside of Oxford called Wytham Abbey, which was built in 1480. Effective Ventures spent a whopping $18.5 million on it in 2022. This is a phenomenal amount of money — especially for a community that repeatedly boasts about the modest lifestyles and generous giving habits of its members.

Most rank-and-file EAs were kept in the dark about this purchase, and once it was made public, many were dismayed. Two days after I tweeted a picture of the castle-like edifice, someone on the EA Forum — the main online hangout of the EA community — posted: “Yesterday morning I woke up and saw this tweet by Émile Torres … I was shocked, angry, and upset at first. Especially since it appears that the estate was for sale last year for 15 million pounds.”

In response, the EA leadership insisted that purchasing Wytham Abbey for millions of pounds was entirely justified. Why? Because if the sumptuous estate could attract “the next Sam Bankman-Fried” — as one journalist wrote in The New Yorker — the investment would be worthwhile. After all, another “Bankman-Fried” means billions more to fund EA-aligned “charitable” projects.

This gestures at the problem of treating ethics as a branch of economics, as EAs often do. When cost-benefit analyses and ends-justify-the-means reasoning determine the morally right course of action, one can always get the desired result by wiggling the numbers. Is spending money on a palace right or wrong? If you want the answer to be “Yes, buy it!,” it’s not hard to find numbers that, plugged into the right equation, make it work. Let’s say there’s a small probability that an ambitious young person visits Wytham Abbey and becomes inspired to “earn to give.” Let’s say that they go on to become the first trillionaire — as some speculated Bankman-Fried would become — and then donate 99% of that fortune to promoting the EA ideology and building its community. Multiplying these numbers together, it’s entirely possible that the resulting “expected value” makes the purchase perfectly reasonable. So long as moral integrity and commonsense morality don’t get too much in the way, the economists of ethics can “justify” virtually any expense.
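To make the arithmetic concrete, here is a minimal sketch of the kind of expected-value calculation being described. The probability of producing a trillionaire donor, the size of the future fortune and the donation rate are all hypothetical placeholders chosen for illustration, not figures from any actual EA analysis; only the rough purchase price comes from the reporting above.

```python
# Illustrative only: how a small change to an unknowable input flips the verdict.
purchase_price = 18_500_000      # roughly the reported cost of the estate
future_fortune = 1e12            # hypothetical: a visitor becomes a trillionaire
donation_rate = 0.99             # hypothetical: donates 99% of it to EA causes

# Two guesses at the probability that a visit actually produces that donor.
for p_inspired in (1e-5, 1e-4):
    expected_donations = p_inspired * future_fortune * donation_rate
    verdict = "justified" if expected_donations > purchase_price else "not justified"
    print(f"p = {p_inspired:.0e}: expected value ${expected_donations:,.0f} -> {verdict}")
```

Nudging the unknowable probability by a single order of magnitude flips the answer from “not justified” to “justified,” which is precisely the point: the inputs are soft enough that almost any desired conclusion can be engineered.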

This is presumably why MacAskill celebrated the release of his book at an “ultra-luxurious vegan restaurant where the tasting menu runs $438 per person with tip, before tax.” It’s also likely behind Bankman-Fried’s decision to fly on private jets, buy hundreds of millions of dollars’ worth of Bahamian real estate, and purchase a Bahamian mansion for his parents at a cost of $16.4 million, all of which became public knowledge only after FTX collapsed. Sure, this money could have been spent feeding people in impoverished countries, but if such luxuries help to further the goals of Bankman-Fried’s companies, that means more money for the longtermist projects that Bankman-Fried cared about, such as mitigating so-called “existential risks.”

Things get even more deceitful and squalid than this. On the one hand, leading EAs were spending large sums of money on Oxfordshire palaces, flights on private jets, mansions in the Bahamas, and so on, while on the other, the image they crafted for the public suggested something altogether different. When MacAskill appeared on “The Daily Show,” he boasted, as noted earlier, that he gives away more than 50% of his income, but what he didn’t say is that he also had direct access to literally millions of dollars just to promote his personal book project — again, because there’s always a way to manipulate the numbers so that these expenses appear “justifiable.”

In late 2022, someone from a PR firm that MacAskill hired to promote his book contacted me. The source revealed, on the condition of anonymity, that MacAskill was paying that particular firm (MacAskill apparently hired more than one) a whopping $12,000 per month for their services, and had floated a budget ceiling of $10 million in promotional funds — much of it from Dustin Moskovitz, the co-founder of Facebook. That’s an unheard-of amount of money for a book, and indeed the PR firm itself had no idea how to spend that much dough.

The point is that boasting about donating 50% of your income is duplicitous — to say the least — when you’ve got super-wealthy friends in high places who are willing to hand you millions of dollars to boost the sales of your book. Indeed, if you’re wondering how MacAskill managed to get so much media coverage for “What We Owe the Future” — resulting in it becoming a bestseller — this is how: so far as I can tell, the book was a commercial success because of money rather than merit.

Meanwhile, in December of 2022, The New Yorker broke the story that EA leaders knew all about Bankman-Fried’s lavish lifestyle but chose to perpetuate the myth that he lived a humble life, toiling away at his crypto businesses for the greater good of humanity. As Gideon Lewis-Kraus writes:

The story commonly told about Bankman-Fried was that he drove a beat-up Toyota Corolla, slept on a beanbag, and had nine roommates. MacAskill repeated this fable to me, characterizing it as evidence of Bankman-Fried’s profound commitment to the cause. What he did not mention, and what came out only in the last few weeks, is that Bankman-Fried and his roommates were living in a forty-million-dollar penthouse in a gated community in the Bahamas—part of a total local property portfolio worth an estimated three hundred million dollars. His parents, professors at Stanford Law, owned a vacation condominium worth millions of dollars.

He continues:

E.A. leadership ratified a mythology about Bankman-Fried that was simply not the case. One senior member of the community told me that the peculiar contradictions of Bankman-Fried’s life style were widely known but somehow unexamined: it was true that he drove a beat-up Corolla, but it was also true, if underemphasized, that he enjoyed a sumptuary existence—not only the lavish penthouse but the use of such appurtenances as a private jet.

In other words, the EA leaders who knew Bankman-Fried were, basically, lying about his lifestyle by omission, because they saw these lies as useful — and therefore “justifiable” — to advance their power-seeking aims. If people knew the truth about Bankman-Fried, they’d be far less impressed by the supposed commitment to “do the most good,” and far less convinced that EAs really mean it.

The scandals have kept on coming. In December of last year, I happened across an old email by Nick Bostrom, the co-founder of longtermism and one of the most influential figures within EA. In the email, composed in 1996, Bostrom declared that “Blacks are more stupid than whites,” adding that “I like that sentence and think it is true … I think it is probable that black people have a lower average IQ than mankind in general,” though it would be unwise to say this publicly because people “would think that I were a ‘racist.’” They would interpret his belief as — quoting Bostrom — “I hate those bloody [N-word redacted].”

My jaw was on the floor after reading this email, but I can’t say that I was surprised. Bostrom is an advocate of what philosophers call “liberal eugenics.” He has explicitly argued that one type of existential risk involves less “intellectually talented” people outbreeding their more “intellectually talented” peers, thus lowering the average IQ of humanity as a whole (which happens to be the premise of the 2006 absurdist comedy “Idiocracy”). He writes: “Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (‘lover of many offspring’).” Bostrom also co-authored a paper in which he approvingly cites the work of Charles Murray, co-author of “The Bell Curve” and a leading champion of race pseudoscience.

My first instinct was to verify that the email was, in fact, authentic. To do this, I wrote Bostrom’s colleague at Oxford’s Future of Humanity Institute, Anders Sandberg, who quickly alerted Bostrom that I had found the email. Bostrom then wrote a sloppily put-together “apology” in which he said that he deeply regretted using the N-word, but didn’t walk back his claims about race and “intelligence.” This was publicly shared on Twitter/X by Sandberg, who attempted to excuse Bostrom’s language by arguing that “the email has become significantly more offensive in the current cultural context: levels of offensiveness change as cultural attitudes change (sometimes increasing, often decreasing). This causes problems when old writings are interpreted by current standards.” Of course, the N-word was completely unacceptable in 1996 — everyone, including Bostrom, would have known this. As Dr. Timnit Gebru wrote about the snafu:

I don’t know what’s worse. The initial email, Bostrom’s “statement” about it, or [Sandberg’s Twitter] thread. I’m gonna go with the latter 2 because that’s what they came up with in preparation for publicity. Their audacity never ceases to amaze me no matter how many times I see it.

As this suggests, Bostrom’s email and “apology” — described by many as a “non-apology” — triggered a significant backlash, with outlets like Motherboard, The Daily Beast and The Daily Mail covering it. Oxford University students, such as Kwabena Osei, decried Bostrom’s remarks, and Oxford launched an investigation (which ultimately did nothing to penalize Bostrom or his Oxford-based institute). Some people on the EA Forum came to Bostrom’s defense, with one arguing that the apology and its aftermath “(slightly) raised my opinion of Nick Bostrom” because it showed that “Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.”

This brings us up to January of this year, meaning that nearly everything discussed above unfolded between November 2022 and January 2023 — a whirlwind of scandals and embarrassing stumbles that reversed much of the goodwill that the EA movement had generated after MacAskill’s media tour. In fact, by the middle of November 2022, MacAskill himself had virtually disappeared from the public eye. After having his picture all over major media outlets in articles about EA, he posted a Twitter thread about Bankman-Fried’s corruption and then promptly vanished for six months, only reappearing on social media in May of 2023.

Perhaps this was a smart move on his part, because the bad publicity didn’t stop with FTX, Wytham Abbey, the revelations that the EA leadership had lied about Bankman-Fried’s lifestyle, or Bostrom’s racist email. On Jan. 30, 2023, an academic named Carla Cremer, who once considered herself to be an EA, published an article in Vox titled “How Effective Altruists Ignored Risk.”

The most shocking revelation in the article involved a document from the Centre for Effective Altruism (CEA), co-founded by MacAskill, that was leaked to Cremer in 2019. It discussed how “some people in leadership positions were testing a new measure of value to apply to people: a metric called PELTIV, which stood for ‘Potential Expected Long-Term Instrumental Value.’” CEA was planning on using it “to score attendees of EA conferences, to generate a ‘database for tracking leads’ and identifying individuals who were likely to develop high ‘dedication’ to EA.”

If this sounds cultish, strap in, because part of the PELTIV assessment involved subtracting points if an individual’s IQ was only 100. According to Cremer, PELTIV “points could only be earned above an IQ of 120.” EAs were also given a low PELTIV score if they “worked to reduce global poverty or mitigate climate change, while the highest value was assigned to those who directly worked for EA organizations or on artificial intelligence.”
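The actual PELTIV formula has never been made public. What follows is a purely hypothetical sketch, written only to illustrate the properties Cremer describes — points earned only above an IQ of 120, deductions at an IQ of 100, and weighting by cause area — and every threshold, weight and name in it is invented for illustration.

```python
# Hypothetical sketch of a PELTIV-style score. The real formula is not public;
# all numbers and weights below are invented to mirror the reported properties.

CAUSE_WEIGHTS = {
    "ai_safety": 1.0,        # reportedly assigned the highest value
    "ea_org_staff": 1.0,
    "global_poverty": 0.1,   # reportedly scored low
    "climate_change": 0.1,
}

def peltiv_score(iq: int, cause_area: str) -> float:
    # Points are only "earned" above an IQ of 120; an IQ of 100 loses points.
    iq_points = iq - 120
    return iq_points * CAUSE_WEIGHTS.get(cause_area, 0.5)

print(peltiv_score(140, "ai_safety"))       # 20.0
print(peltiv_score(100, "global_poverty"))  # -2.0, i.e. points subtracted
```

Even in this toy form, the structure makes the objection plain: a person’s “value” reduces to an IQ offset multiplied by how useful their work is to the movement.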

This is exactly the sort of thing that cults do: secretly rank members according to some made-up internal metric. And the use of IQ to determine an EA’s score — i.e., their value to the EA community — harks back to the 20th-century eugenics movement, which employed IQ in a similar manner.

Another common feature of cults is the suppression of criticism that targets their core beliefs, charismatic leaders and community norms, which EA also has a history of doing. To be clear, EAs describe themselves as being open to critique. They value “epistemic humility,” and are always looking for new “crucial considerations,” a technical term introduced by Bostrom to mean “a consideration that warrants a major reassessment of a cause or intervention.”

In reality, though, critics who target fundamental aspects of the EA worldview run the risk of being ostracized by the community, which is why a group of around 10 EAs posted a comprehensive critique of the movement on the EA Forum without using their real names. It was, in other words, published anonymously because, as the authors themselves put it, speaking out against EA — even as an advocate of EA — poses a “significant risk … to their careers, access to EA spaces, and likelihood of ever getting funded again.” They conclude: “We are anonymous for a reason.”

Although I don’t know for sure, my guess is that some of these authors looked to my own experience as an example. I was once an EA, until I began to criticize the community many years ago, after which it became clear that I was no longer welcome. Even worse, since I began to regularly publish criticisms of the EA movement — though I would now group EA under the acronym “TESCREAL,” which I’ve discussed in previous Truthdig articles — I’ve received numerous threats of physical violence from the EA community. On two occasions, I was sent messages from anonymous online accounts saying — to quote one — “better be careful or an EA superhero will break your kneecaps.” Numerous anonymous emails with threatening language appeared in my inbox, and one from last June referenced a short film about a murder-suicide, with the sender saying they hoped it would take something less extreme for me to realize my error in criticizing the movement.

Incidents like these make EA look like a bona fide cult, and in fact, someone from the San Francisco Bay Area, who used to be prominent in the movement, told me last year that the community had become a “full grown apocalypse cult” due to the belief, widespread among EAs, that we are on the verge of creating an artificial general intelligence (AGI) that will kill everyone on Earth. This is, incidentally, why a hugely influential figure among EAs named Eliezer Yudkowsky recently argued in Time magazine that governments should engage in military strikes, if necessary, to prevent AGI from being built in the near future, even at the risk of triggering a thermonuclear war that kills nearly everyone on Earth.

As Dr. Sarah Taber, who grew up in a cult, wrote after coming across the EA Forum article by the 10 anonymous EAs: “Reading through it, all I can think is ‘Ohhhh yeah. I remember this stage of trying to break out of a cult.’”

The EA community has long been accused of hostility toward women. In a Time article published last February about the “toxic culture of sexual harassment and abuse” within EA, the journalist Charlotte Alter reported that “effective altruism’s overwhelming maleness, its professional incestuousness, its subculture of polyamory and its overlap with tech-bro dominated ‘rationalist’ groups have combined to create an environment in which sexual misconduct can be tolerated, excused, or rationalized away.”

One woman who spoke with Alter “recalled being ‘groomed’ by a powerful man nearly twice her age who argued that ‘pedophilic relationships’ were both perfectly natural and highly educational.” Another recounted “a much older EA recruit[ing] her to join his polyamorous relationship while she was still in college.” Several others “described EA as having a ‘cult-like’ dynamic,” and complained that “the way their allegations were received by the broader EA community was as upsetting as the original misconduct itself.”

This is obviously horrifying, and it’s not clear that anything significant has changed in the community since the Time article. Still other revelations this year have further tarnished the public image of EA. For example, last July, Quartz reviewed “court filings in a federal bankruptcy court in Delaware,” which “included a memo crafted by an FTX Foundation official and Sam Bankman-Fried’s brother Gabriel Bankman-Fried.” 

It detailed a plan — quoting the court filings — “to purchase the sovereign nation of Nauru in order to construct a ‘bunker/shelter’ that would be used for some event where 50%-99.99% of people die,” to “ensure that most EAs survive.” In other words, the FTX-hatched plan was to buy Nauru and build a bunker specifically to house EAs, in case almost everyone else on Earth perishes in a global catastrophe, so that these EAs could later reemerge and rebuild civilization. It’s narcissistic derangement, and comical, not least because the island of Nauru will become uninhabitable by the end of the century thanks to climate change.

The EA movement cares a lot about PR and marketing. Its leaders put a great deal of thought into shaping its public image, and for a time, the big PR push seemed to be working. But it didn’t take long for the lies, chicanery and scandals to catch up, damaging the “brand name” of EA and longtermism, perhaps irreparably.
