The idea is deceptively simple: effective altruism. Altruism means trying to do good; effectiveness means that it actually works. In the early 2010s, this idea gave birth to a movement that I was part of and helped to launch. I had hoped that EA would change the world for the better, and I was proud to have contributed.
Last week, the most prominent Effective Altruist, Sam Bankman-Fried, was sentenced to 25 years in prison for masterminding one of the largest financial frauds in US history. Many months ago, the other visible face of EA, Will MacAskill, stepped down from leadership roles and retreated from public view. Following the loss of its leaders, EA has been in disarray, except for those few elements engaged in AI politics.
How did we get from there to here? And what happens next for EA? Most of my friends, some of whom were EAs, believe that EA is now in terminal decline. This judgment is fair; recovering from collapse is in many ways harder than founding something new. There are no inspiring new EA leaders or bold new EA initiatives. Most of its energy is spent. Young people who want to change the world are looking elsewhere, like the Progress movement or Effective Accelerationism.
Yet something remains. Other friends of mine, including some who were leaders in the movement for years, have left on paper but not in spirit. Not fully, at least. There is something about the idea, the original dream, that remains. To understand that, and to think about what should happen next, it will help to review the history, at least as I experienced it.
The Movement
1. Helping it grow
Sometime around 2009, Will MacAskill and Toby Ord founded Effective Altruism. It was at first a community, though it had aspirations to be a movement. I met early EAs like Mark Lee and Nick Beckstead at Rutgers, where we were all graduate students in philosophy.
Mark and Nick had (with a third classmate, Tim Campbell) co-founded the Rutgers chapter of Giving What We Can. In early 2011, I founded Leverage, and Mark became an early volunteer. Many of us believed that it was possible to have large, positive impacts on the world and we found a vast space of possible avenues to explore.
The following year, I helped Mark launch the first Effective Altruist student group network: The High Impact NetworK, or THINK. We encountered friction with the Oxford-based EAs over our inclusion of less prestigious schools, but from our perspective, that didn’t matter. Good people could be anywhere and we wanted to find them.
My focus was primarily on research and growing my team, but Effective Altruism seemed like a powerful potential force, and one that would do good by default, so over the next few years I helped to get it off the ground. I considered myself an EA, as did some of the people I worked with, and we all thought a bigger and better EA might be truly impactful.
The largest step we took was running the first major EA conference, EA Summit 2013. This was an event with sixty people, all packed into a single-family home in the Oakland hills for a week. We had big names there, like Holden Karnofsky and Eliezer Yudkowsky. Peter Thiel gave a talk; we beamed in Peter Singer and Jaan Tallinn. People loved it; the energy is visible in the video we made.
At this point, the movement was a loose coalition of individuals and groups, each trying to figure out how to do good in the most effective way. Some people focused on large-scale risks, others focused on movement growth. There were prominent cause areas, like global poverty and superintelligent AI, but none of us had real experience or resources so there was plenty to work together on.
Leverage ran the Summit again in 2014, with a week-long leadership retreat followed by a weekend conference with 180 people. We were able to convert the large increase in attendance from 2013 to 2014 into a (true) narrative of movement growth, and it started to look like our job (or at least, my job) was done. “Replaceability” was a key concept in early EA, and it now seemed someone else could take over.
Fortunately, there was now a crew ready to work on EA movement building full-time: Kerry Vaughan and Tyler Alterman, co-founders of what would become the EA Outreach (EAO) team at the Centre for Effective Altruism (CEA). We had a good working relationship with both of them, so in 2015 we handed off the event series. Kerry ran three conferences at the same time under the new name “EA Global,” with over 500 total participants. EA had arrived.
2. On the sidelines
For the next several years, I had the chance to watch EA institutionalize. Money arrived. PR was cleaned up. The rate of new ideas slowed to a trickle. Bit by bit, the movement became monolithic, power consolidating under an “EA elite” centered around Will MacAskill and CEA’s main funder, Open Philanthropy.
I was involved intermittently during this time, mostly helping Kerry fight to keep movement building alive. He was able to keep the EAO team independent for a little while, but was eventually out-maneuvered. Tyler was let go in 2016. Kerry himself was forced out in 2019 — though he is probably better equipped to tell this part of the story himself.
The main fault line, from what I could tell, was between the movement builders and the EA “elites.” The movement builders had a “big tent” approach, trying to bring in people interested in all sorts of cause areas. The “elites” were worrying more and more about AI and being in positions of influence if and when AI arrived.
Unsurprisingly, the EA “elites” won out. The movement builders were a bit too naive and also spent less time defending their positions. The “elites” focused more on relations with funders and signs of legitimacy, which let them edge out the movement builders over time.
As power consolidated, the movement took on a now-familiar form, with a small number of highly correlated funders, few new leadership positions, and residual movement growth from past momentum rather than rapid growth from ambitious design. I had thought that was the end of EA, in spirit if not in concrete fact, done in by the conservatism of its leaders.
As it turned out, I was wrong. Behind the scenes, EA leadership had been burned by its own risk-aversion, and was now helping to incubate a new opportunity… in the form of Sam Bankman-Fried and FTX.
3. Watching it fall
EA burst into public view in August 2022, with a TIME magazine cover for EA and a New Yorker profile of Will MacAskill. The coverage accompanied the release of MacAskill’s second book and was the first time EA had drawn real public attention. I thought of it as EA’s public debut.
The success of Sam Bankman-Fried’s cryptocurrency exchange FTX was, in my mind at least, closely related. Bankman-Fried had started donating large sums of money to political candidates, and while the public was familiar with crypto, it hadn’t really heard of EA. Public spotlighting of EA was a natural consequence.
The rest of the story is well-known. A competitor (Changpeng Zhao of Binance) did a head-fake, there was a bank run on FTX, and suddenly everyone found out that FTX was insolvent and Bankman-Fried had stolen billions of dollars. Much of the money FTX had donated to EA causes was clawed back, the EA movement went into a tailspin, and Bankman-Fried went to jail.
I had only met Bankman-Fried once. He was visiting Leverage and was abrasive, as per his reputation. I knew a disaster was in the making early in 2021, since a friend reported (second-hand) that FTX was front-running trades. But I did not expect things to collapse so quickly.
After the arrest, there was an opportunity for EA leadership to take a moral stand: promise restitution to victims, or even defend Bankman-Fried, if that’s what the spreadsheets said to do. Instead, the leaders were silent, no doubt on the advice of lawyers, defending themselves rather than doing anything that looked like effective altruism.
After the collapse, power consolidated further. Some key leaders left or were expelled. From within the community, there are no doubt signs of life, but from the outside it is easy to see that EA’s growth prospects have been destroyed. Some EAs continue to focus on politics and AI, so that remains an open thread. EA as a movement, however, is over.
4. Passing judgment
Even though I had left the movement by 2017, I helped to launch it and bear some of the responsibility for what followed. Without a careful study, it would be hard to say how much I contributed, since there were many other people who helped launch and build the movement, and as each person exercises agency, the total share of causal contribution adjusts.
Much of the responsibility for the collapse falls to Bankman-Fried, as the key and counterfactually relevant instigator. My guess is that much less falls to MacAskill, who I suspect was more a victim than people realize. Again, a full accounting is difficult, though such a thing would be worthwhile and falls exactly within the purview of what EA says it is about.
More clear than the facts of total responsibility are the facts of failure. When we handed off responsibility for EA movement growth, we did not adequately anticipate the consequences. There was thus a failure there. I also failed to understand the internal politics of EA; I thought the marginal contribution from movement growth to alleviating global poverty would offset the negatives of political infighting, but now I am unsure.
Others will have to speak for themselves. In my view, the Bankman-Fried debacle has shown the complete failure of EA leadership, and perhaps more tellingly, the failure of the community. To my knowledge, there have been no protests or rebellions, or even much quiet quitting, just leaders still being incapable of leadership and people trying to follow them anyway.
That could be that, with EA consigned to the dustbin, alongside the many other movements that capsized and never came back. But something about EA lives on. I can tell this because, even now, I have friends who think about it and who occasionally text me about the “real problem” with EA, even after we all know that it’s long been over.
I would propose that what lives on is a combination of two things. One is something good current and former EAs understand. The other is something many of us have not yet understood, and relates to the core idea of effective altruism.
The Idea
1. Knowing what “EA” is
Things are often named for what they are or aspire to be. EA is like this. There is altruism, or “trying to do good.” There is effectiveness, which means “succeeding.” Together, these mean trying to do good and succeeding or, more simply, doing good.
This is the positive thing that EAs understand. Not everything needs to be self-interested. Not everything is about getting a job or earning a higher salary. There are people suffering in the world, people we haven’t met, people who maybe haven’t even been born, and it is possible to care for those people. Not everyone needs to care. But some of us do, and that’s good.
Caring, however, is not enough. At least, not for us. If people starve somewhere, or die of malaria, or if people die in the future because we didn’t take responsible actions now, those events are real and our caring did not avert them. We want our caring to take the form of action that matters in reality, making the world better, saving people who would have died, making the future good for everyone who will live in it.
That’s effective altruism, the positive sentiment. Not everyone will have the same formulation or the same affect, but the point is the same, the direction is the same. It’s not captured by traditional charity, which does not focus as squarely on effectiveness, or regular jobs, which don’t always focus directly on helping people.
There are versions of this idea that focus on comparisons, or calculations, or preventing suffering, or the astronomical number of people who might live in the future. The original idea of EA was broad enough to include all of these, which is why we could all rally around the simple desire that charities’ programs produce their intended effects, that attempts to help people actually work in reality.
That is the part of EA we all have understood. Now let’s consider the part we haven’t understood. At least, I hadn’t understood this until a few days ago, while walking around a lake near my home and thinking about economics.
2. Learning what “EA” isn’t
Names are often given to distinguish a thing from what it is not. If you find an organization called “Real America,” you can be sure they are trying to distinguish themselves from things that are fake, things that aren’t America, and most of all, things that are pretending to be America but in their view are not.
EA is like this as well. The name “Effective Altruism” seeks to distinguish EA from things that are ineffective, things that are self-interested, and most of all, ineffective attempts at altruism. As people will recall, at the beginning of the movement EA spent a lot of time critiquing and distinguishing itself from traditional charity.
The word “effective,” however, hides the actual target. EA is a movement of ideas, but almost no one is in favor of ineffective things, at least in principle. So EA is not defined against advocates of ineffectiveness; that would be building a straw man into the name of the movement.
Rather, the “Effective” in “Effective Altruism” refers to a particular stance on evidence. EAs don’t just want activities to be effective, they want evidence for that effectiveness to be available now, or at least soon. This is not the same as wanting the positive impacts to occur soon; many EAs are happy with effects that occur billions of years from now. The short timelines are for evidence, not impact.
Impatience for evidence explains why EAs don’t typically focus on the arts or social justice. It’s not because people know that those causes are ineffective or that a given program doesn’t work. Rather, it is because, even if a given arts or social-justice activity benefits the world, EAs don’t expect to get clear enough evidence of that anytime soon.
Just as there is some complexity around the “E” in “Effective Altruism,” there is also some around the “A.” At the most basic level, EA’s “altruism” means that it occurs in the non-profit sector rather than the for-profit. This fits with the positive understanding of EA, where for-profit ventures may be self-interested but non-profit activities aim at positive impact.
One might ask, however, why EAs did not broadly adopt for-profit activities as a way of doing good. One can imagine people starting businesses solely or primarily for the sake of benefiting customers, providing value by selling useful products and services at a reasonable price, and growing the business to increase the benefit provided. This is a natural idea, but not an EA one. Why not?
The answer again is evidence. Non-profit activities are more easily identified as actually altruistic. With a for-profit activity, one might really be in it for oneself, unless one makes money and then donates it to an effective charity. EA is thus not merely about effectiveness in one’s altruism; it is about evidence, both for the effectiveness and for the altruism, as anyone who has interacted with EA will know.
3. Re-understanding EA
Bringing the EA focus on evidence into view makes it possible to understand EA in a new way. Certainly we like to have evidence. Evidence of effectiveness helps us to make reasonable decisions. Without evidence, it is hard to know what to do. Yet EA also has a totalizing tendency; those who join often want to do everything on the basis of evidence, from choice of charity down to choice of diet and sleep patterns.
Can everything be done on the basis of evidence? That was always my hope, and my main disagreements with EAs were about the feasibility of revolutionary advances in the social sciences that would expand the range of evidence-based judgments. I thought such advances were possible; without them, it was clear that only a very narrow range of actions could be based on the sort of evidence EAs wanted.
The full implications of demanding high-quality evidence now or soon can be seen by thinking about for- and non-profit activities from a different perspective. As I learned from a friend of mine, for-profit ventures are meant to be self-sustaining, while non-profit ones are meant to be self-extinguishing. This was something I had not fully considered when I had been an EA.
Aiming to self-extinguish is actually one of the hallmarks of the EA mentality. The alternative for a non-profit is often appalling. One can imagine a homelessness charity, following local incentives, failing to solve the problems of homelessness so it can continue to raise money for itself and continue to fail. This is anathema to EA; the solution is for the charity to solve the problem and dissolve itself.
Rather than thinking about EA as simply trying to do good in a way that works, we can think about it as trying to engage in activities that are effective, good, and self-extinguishing. Why self-extinguishing? Any activity that is not self-extinguishing becomes very hard to assess, and any activity that is hard to assess won’t naturally be self-extinguishing. The feature is co-extensive with the core focus of EA.
There are many framings that this new understanding of EA permits. A negative framing is that EAs are scavengers, finding valuable pools of money that are not being deployed and breaking them down. A positive framing is that they are bounty hunters, bound by a code, waiting for the next opportunity to do good. The truth is almost certainly somewhere in the middle.
4. Value outside of EA
Once EA is understood as focusing on finite, self-extinguishing acts, we can more easily understand its proper role in the universe. Should EA be everything? Only if all activities should be self-extinguishing. Should EA be nothing? Only if all good activities should be self-sustaining. The answer is clearly neither of these. There is a role for good, self-extinguishing acts, albeit a limited one.
The limits of such acts, and by extension the limits of EA, can be seen by a comparison with ecology. In a biological ecosystem, the entities that sustain themselves are organisms. They are life. Some activities in an ecosystem are the activities of life, even most of them. But not all. There is also a need for decomposition, time-limited acts where the detritus of the environment is broken down and recycled.
The same can be true for for- and non-profit ventures. For-profit ventures are self-sustaining. They are thus equivalent to life in this analogy. Non-profit ventures are self-extinguishing. Problems are identified and then solved, and ideally never arise again. They are thus primarily negatively motivated, removing one-time barriers to the natural progression of a life made of things that are self-sustaining.
Can all problems be solved by for-profit entities? EAs are not economic ideologues and should not become such. There are obviously real opportunities for good that are not (or not yet) part of a self-sustaining economic process. Providing insecticide-treated bed nets to prevent malaria is one of these; one hopes to do it only until a self-sustaining process can take over.
EA can thus be thought of as a group of good people, looking for time-limited opportunities to help, making the world better without necessarily contributing to their own prosperity. This is the meaning of altruism. Whether EA in practice lives up to this ideal, or whether it dissolves, falls to the people who still bear its label.
To my mind, this captures what was good about EA but also explains its limits. It captures as well the spirit with which I originally founded Leverage, which was to do good and then move on once our work was complete. It also fits with how I have changed, having learned that self-sustaining things are necessary and can be of great value, and that they too can be part of how I contribute to the world.
The Dream
From this vantage point, it is possible to understand more of what happened with EA, how it changed over time, and why it ended up where it did. If one can only think in terms of a certain type of evidence, if one’s only options are self-extinguishing, the world narrows to an incredible degree. Behavior that is patently irrational may come to seem rational from the inside.
From my perspective, EA has been contracting for years. It was broadest in the beginning, when there were many groups and many ideas and none of us had figured anything out. 2015 was the last really open year. After that, the variety of people and ideas went down, the range of conversations shrank, and the free space within the movement dried up.
What would it look like, to see the world from a more and more narrow perspective? One can imagine a die-hard EA, ready to bite all the bullets in a perverse display of misunderstood virtue, thinking through the Bankman-Fried debacle:
“25 years in prison at 0.2 DALYs per year, so that’s 5 DALYs lost. The $8 billion is being given back, so that nets to zero. Distraction in the media, damage to crypto, those are hard to quantify, so let’s assume the cost is zero. The donations to EA causes were clawed back, but Anthropic kept $500m, and it’s aligned with Open Phil. So that’s $500m total, or $100 million per DALY, which is pretty good.”
But that’s obviously insane. The mistake, apart from the heartlessness, is in rounding the hard-to-quantify things, which would include damage to the EA movement, to zero. The original dream of EA was not to fit the world into a spreadsheet, discarding everything that didn’t fit, but to use reason and evidence to better understand how to do good.
That dream lives on, though it’s up to EAs to decide how much of it shows up in EA. For myself, I expect to continue doing some activities that are self-extinguishing and good, alongside activities that are self-sustaining and as good as I can make them. I imagine my effectiveness in altruism then joining my other pursuits — pursuits that, as I have argued, bear a striking resemblance to life.
Written for some friends of mine on Easter.