The following story is co-published with Freddie deBoer’s Substack.

This is the second time I’ve written an updated version of a previous post; this one has been redone from scratch.

If you read this column you’ve very likely already heard of effective altruism, or EA, the philosophy/community concerned with performing charitable acts in a way that creates the most positive good for the most people. If not, here’s an explainer/endorsement from Dylan Matthews that’s a good introduction. If you do know what EA is, you probably also know that it’s lost a considerable amount of shine in the past year, going from a media darling to the recipient of a tremendous amount of skepticism. Matthews himself would, only four months later, write a piece expressing some regrets; he did so because the spectacular implosion of the recently convicted Sam Bankman-Fried, previously an EA philosopher king, caused many people to turn a harsh eye on that world. I don’t think, though, that the machinations of Bankman-Fried or the cultish excesses revealed in his wake are necessary to question effective altruism. I think that EA is functionally a branding exercise that masquerades as an ethical project, and an ethical project that does not require the affected weirdness that made it such a branding success. While a lot of its specific aspects are salutary, none of them require anything like the effective altruist framework to defend them; the framework seems to exist mostly to create a social world, enable grift, and provide the opportunity for a few people to become internet celebrities. It’s not that EA produces nothing good. It’s that we don’t need EA to produce it.

Very often, EA advocates emphasize the common-sensical nature of their project. If we’re going to do charity, they say, let’s do it well; if we’re going to spend money to do good, let’s spend it effectively; if we’re going to try and fix the world, let’s think carefully about how best to do that. The Centre for Effective Altruism defines their project as

both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.

This project matters because, while many attempts to do good fail, some are enormously effective. For instance, some charities help 100 or even 1,000 times as many people as others, when given the same amount of resources.

This means that by thinking carefully about the best ways to help, we can do far more to tackle the world’s biggest problems.

Who could argue with that! But this summary also invites perhaps the most powerful critique: who could argue with that? That is to say, this sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all. The immediate response to such a definition, if you’re not particularly impressionable or invested in your status within certain obscure internet communities, should be to point out that this is an utterly banal set of goals shared by literally everyone who sincerely tries to act charitably. You can say that a lot of people engage in ostensibly charitable actions as a con or a scam, but EA is not immune to that, and anyway that’s a separate matter entirely. Defined in these broad terms, effective altruism is no more a meaningful philosophy than “do politics good” is a political platform or “be a good person” is a moral system. In the piece linked above, Matthews says that “what’s distinctive about EA is that… its whole purpose is to shine light on important problems and solutions in the world that are being neglected.” But that isn’t distinctive at all! Every do-gooder I have ever known has thought of themselves as shining a light on problems that are neglected. So what?


Sufficiently confused, you naturally turn to the specifics, which are the actual program. But quickly you discover that those specifics are a series of tendentious perspectives on old questions, frequently expressed in needlessly abstruse vocabulary and often derived from questionable philosophical reasoning that seems to delight in obscurity and novelty; the simplicity of the project’s overall goal is matched with a notoriously obscure (indeed, obscurantist) set of approaches to pursuing it. This is why EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor, and why researching EA leads you into debates about how sentient termites are. In the past, I’ve pointed to the EA argument, which I assure you sincerely exists, that we should push all carnivorous species in the wild into extinction in order to reduce the negative utility caused by the death of prey animals. (This would seem to require a belief that dying of disease and starvation is preferable, for prey animals, to dying from predation, but ah well.) I pick this, obviously, because it’s an idea that most people find self-evidently ludicrous; defenders of EA, in turn, criticize me for picking on it for that same reason. But such examples are essential because they demonstrate the problem with hitching a moral program to a social and intellectual culture that will inevitably reward the more extreme expressions of that culture. It’s not nut-picking if your entire project amounts to a machine for attracting nuts.

If you’d like a more widely held EA belief that amounts to angels dancing on the head of a pin, consider effective altruism’s turn to an obsessive focus on “longtermism,” in theory an embrace of future lives over present ones and in practice a fixation on the potential dangers of apocalyptic artificial intelligence. Even some within the world of effective altruism have grown concerned over the community’s fixation on long-term risk, to the detriment of actually existing human beings. Once you cast the horizon sufficiently far into an imagined future, you end up in all sorts of wacky places. You start out with a bunch of guys who say that we should defund public libraries in order to buy mosquito nets, to whom you can rationally object that such thinking implies a very reductive view of human flourishing; in other words, you retain the ability to have a meaningful argument about values. And then somehow those guys move on to muttering about Roko’s basilisk, and if you debate them, you’re wasting your time in nerd fantasy land.

The problem, then, is that EA is always sold as a very pure and fundamentally straightforward project but collapses into obscurity and creepy tangents when substance is demanded. Even if every one of the stances advanced by effective altruists is correct, there’s an inherent disjunction between the supposed purity of its regal project and the actual grab bag of interests and obsessions it consists of in practice. “Let’s be effective in our altruism,” “let’s pursue charitable ends efficiently,” “let’s do good well” – however you want to phrase it, that’s not really an intellectual or political or moral project, because no one could object to it. There is no content there. It’s not meaningful enough to be a philosophy. The idea that private ownership of industry and free markets are the best way to achieve the most good for the most people is a meaningful basis for a school of philosophy, whether right or wrong, just as the idea that public ownership of industry and distribution of goods according to need best serve human flourishing is a meaningful basis for a moral philosophy, right or wrong. They are debatable and thus amount to affirmative political and moral philosophical systems. Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do. This is why I say that effective altruism is a shell game. That which is commendable isn’t particular to EA, and that which is particular to EA isn’t commendable.


Ultimately, EA most often functions as a Trojan horse for utilitarianism, a hoary old moral philosophy that has received sustained and damning criticism for centuries. Obviously, you can find far more robust critiques of utilitarianism than I can offer here. Still, utilitarianism has always been vulnerable to simple hypotheticals that demonstrate its moral failure. Utilitarianism insists that I give my bread to feed two starving children who are strangers to me instead of my own starving child, which offends our sense of personal commitment; utilitarianism insists that turning in the janitor who raped a woman in a vegetative state is immoral, which offends our sense of bodily autonomy even in the absence of consciousness; utilitarianism insists that it’s your moral duty to lie in court against a man who’s innocent of the charges if doing so stops a destructive riot, which offends our sense of individual rights and justice. Of course, many utilitarians will try to wriggle their way out of such conclusions, often with reference to concepts like “rule utilitarianism,” which give up all of the flexibility and simplicity that make utilitarianism attractive in the first place. In my experience, utilitarians also have an annoying habit of handwaving away simple examples of how their philosophy results in repugnant ends, asking that we “get serious” and focus on real issues. But there is no value to a moral acrostic that we cannot actually apply to perfectly plausible hypothetical scenarios. This is why Peter Singer, as repulsive as he can be, is in some sense admirable; he’s willing to take the philosophy to its natural ends.

Of course, effective altruism and utilitarianism also share a denominator problem – you can’t achieve consensus about means if you don’t have consensus about ends, that is, what actually represents the most good for the most people. The entirety of moral philosophy exists because no one has ever come close to resolving that question. And utilitarianism breaks down as soon as you recognize that our ability to predict which actions will generate the most happiness is profoundly limited in a world of chaos and chance. So even if we handed over the keys to the definition of effectiveness to the EA people, we would still be stuck having elementary moral arguments about what we want to do effectively. As the philosopher James W. Lenman writes,

One, I think, fatal, problem is that a theory that tells us to perform at any given time “that action, which will cause more good to exist in the Universe than any possible alternative” is a theory that fails spectacularly to do what we want an ethical theory to do: offer some practical guidance in life. The Universe is just way, way too big, the future ramifications of at least many of your actions way too vast for us to have even the faintest idea what actions will cause more good to exist than any other, not just proximally but in the very very long term, from now to the heat death of the Universe.

Utilitarianism is often criticised for demanding too much of us, imperiously robbing us of any autonomy by seeking to control and direct every aspect of our lives. Really it has the opposite problem. It demands nothing of us. Entirely clueless as we are about the long-term consequences of our actions, any choice we make makes as much sense as any other. Utilitarianism is a fast track to nihilism.

If you’d instead like an argument about utilitarianism that uses the kind of abstraction and quantification common to effective altruism, this piece by Sam Atis does a good job. In short, the imperative to increase net utility will inevitably lead us to approve of risks that will sooner or later extinguish all utility. Such a hypothetical might seem a little too fanciful to be worth debating, but it is in precisely that intellectual world that EA lives, so it’s an appropriate critique.
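To see the shape of the argument, here is a generic version of the expected-value trap (my numbers, chosen for illustration; not necessarily the ones Atis uses). Suppose we are offered a gamble that doubles the world’s total utility U with probability 0.51 and annihilates it with probability 0.49. A strict expected-utility maximizer must accept, since 0.51 × 2U + 0.49 × 0 = 1.02U > U, and must accept again every time the gamble recurs. But the probability of surviving n such gambles is 0.51^n, which falls below 1% by the seventh round and approaches zero as n grows. At every step the arithmetic endorses a course that all but guarantees the extinction of everything the arithmetic was supposed to maximize.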

Is there anything to salvage from effective altruism? Sure, there are a number of specific insights and recommendations that we would be wise to draw from. A lot of people working under the penumbra of EA are doing very good work indeed. But this is sort of the dilemma for many EA advocates: if we are inspired by the people doing the best work, we’ll simply be making a number of fairly mundane policy recommendations, all of which are also recommended by people who have nothing to do with effective altruism. There’s nothing particularly revolutionary about it, and thus nothing particularly attention-grabbing. And if that’s the case, you’re unlikely to find yourself in the position that Sam Bankman-Fried was in, grooving along on Caribbean islands with a harem of weirdos, plugged in with deep philosophy types, telling everyone that you’re saving the world. EA has produced a number of celebrities, at least celebrities within that world, to the point where it seems fair to say that a lot of people join the community out of the desire to become one of those celebrities. But what’s necessary to become one is almost entirely contrary to what it takes to actually do the boring work of creating good in the world. This, more than anything else, is why SBF proved so damning to EA. Any movement can be hijacked by self-dealing grifters. But effective altruism’s basic recruiting strategy is tailor-made for producing them.


The difficult question for EA proponents is why a philosophy dedicated to the mundane work of making the world a better place produces so many adherents who appear viscerally uninterested in the mundane. Core to the appeal of effective altruism is the attitude that its adherents don’t just have an ethical or moral system but have somehow pierced the veil of human ignorance and live on a higher moral plane than the rest of us. Even generally sympathetic observers like The Atlantic’s Derek Thompson have noted the cult-like aspects of the philosophy. What strikes me as most cultish about EA is simply the galactic smugness with which many of its devotees carry themselves. It’s not a coincidence that these people bought a castle; that’s less a statement about their extravagance, although there’s that, and more a matter of their self-image as world-historical figures.

Of course, if you’re right and doing good in the world, being smug would be small potatoes. It’s just hard to say whether EA people are really doing good in the world. For one thing, you have to consider the constant drip of misappropriation of money, like EA guru Will MacAskill spending $10 million on promotion for his book. (That could buy an awful lot of mosquito nets.) Questions of spending priorities are a constant in any charitable endeavor, but they are especially acute for a philosophy that’s all about prioritizing resources. The good news is that the people doing the mundane stuff are going to keep on doing it, and I applaud their efforts; I will, however, continue to oppose the tendentious insistence that any charitable dollars spent on the arts, culture, and beauty are misspent. But that’s the whole point, right? You can keep the malaria prevention and the clean drinking water and abandon all of the folk religion that inspired them. If you can get to doing good charitable work without the off-putting, grift-attracting philosophy, of what use is the philosophy? Why bother with the set dressing? Public commentators like Scott Alexander and Matt Yglesias have complained that the Bankman-Fried affair has resulted in an overly harsh backlash against EA. The question I would ask of them is: why not just keep the actual charitable stuff you like and jettison all the nonsense that took effective altruism in that regrettable direction?
