Crary avoids explaining her arguments against Effective Altruism

in an article about Effective Altruism being morally “bankrupt”

David Piepgrass
16 min read · Sep 13, 2021

Effective Altruism is a movement dedicated to improving the world through hard work, large donations, and analysis meant to make that work and those donations as effective as we know how. It involves, among other things, comparing very different courses of action and trying to decide which is better: for example, “funding deworming to improve quality of life in countries where intestinal worms are a big problem” versus “an educational campaign in America to teach people about the Approval voting system” versus “an organization that researches the risk that future smarter-than-human AIs could pose to humanity”. Such analysis is difficult, and EAs may reasonably disagree about which course of action is better, so in practice EAs fund a wide variety of things to various degrees.
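To make this concrete, here is a minimal sketch of the kind of back-of-the-envelope comparison I mean. The program and every number in it are made up by me for illustration; real analyses (GiveWell’s, for instance) are far more detailed and argue over every input.

```python
# A toy comparison of hypothetical interventions, each rated by a rough
# "units of good per dollar" estimate. All figures are invented.
interventions = [
    # (name, estimated cost in dollars, rough estimate of "good done",
    #  in whatever unit you believe lets you compare them at all)
    ("Deworming program",        1_000_000, 5_000),
    ("Approval-voting campaign", 1_000_000, 2_000),
    ("AI-risk research",         1_000_000, 8_000),
]

for name, cost, good in interventions:
    print(f"{name}: ~{good / cost:.4f} units of good per dollar")

# The point isn't these particular numbers (I invented them); it's that
# writing numbers down forces you to argue about them explicitly, which is
# a large part of what EAs actually spend their time doing.
```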

I reviewed the first 55% of Alice Crary’s critique, “Against ‘Effective Altruism’”, which took me a few hours, so I really need to stop there. Still, my responses to that first 55% should give you a flavor of the kind of thing she is saying. Who is she? I don’t know; evidently some kind of philosopher. (Who am I? I don’t know, some random EA guy.)

As an EA, I am interested in reasonable criticisms of the EA movement, but this one isn’t really doing it for me. It’s not a good sign when a critique begins, not with any actual criticisms, but with lots of innuendo that “something is very wrong with this”.

EA has been the target of a fair bit of grumbling, and even some mockery, from activists and critics on the left, who associate consequentialism with depoliticising tendencies of welfarism.

Actually, wait, depoliticizing doesn’t sound bad? Usually “politicizing” is what people complain about.

EA has mostly gotten a pass, with many detractors concluding that, however misguided, its efforts to get bankers, tech entrepreneurs and the like to give away their money cost-effectively does no serious harm. This stance is no longer tenable.

So what does this mean? That convincing people to donate to charity is “harmless” unless you convince them to donate too much, at which point … it’s bad? Or maybe she’s saying that traditional donations, like giving $10 million to a hospital so that they name the oncology wing after you, is good, but buying anti-malaria nets instead, or doing something to discourage bio-weapons programs, would be bad because that’s what EAs are supporting?

The growth of EA has been explosive, with some affiliated organisations, such as Open Philanthropy, now recommending grants amounting to hundreds of millions of dollars annually. […] One result of the windfall is that EA-guided ratings groups serve as king-makers, raising up pro-animal organisations deemed ‘effective’ by EA and denigrating and partly defunding many organisations deemed ‘ineffective’, while pressuring others to artificially shift their missions in order to conform to operative metrics of ‘effectiveness’ and secure funding. This has led to objections from animal advocates (often muted due to fear of alienating EA-admiring funders).

Well, fewer than ten thousand people have signed the “Giving What We Can” pledge, but I guess the growth is fast in annual-percentage terms, and many people surely donate without signing it.

The scare-quotes seem to argue that “effective” giving actually is less effective than what people did before (perhaps worse than not giving at all?), but as long as the argument is implicit, she doesn’t have to explain it. I guess EAs have “denigrated” charities like “Scared Straight” on the basis that they have apparently done harm, but that’s what people tend to do with harmful things (it’s what the author is doing to EA right now). The word “defunding” means “not donating anymore”, right? Yes, that’s what people do if they think they’ve found a better use of money.

Edit: okay, yes, I glossed over the part that says “pressuring others to artificially shift their missions in order to conform to operative metrics of ‘effectiveness’”. But what does this mean? Does it mean “we feel pressured because they won’t give us money unless they think we are effective?” Is Crary saying that I have done something wrong if I choose charity A over charity B on the basis that charity A seems more effective? If so, she should come right out and say that, and explain why.

Yet champions of EA, whether or not concerned with the cause of animals, for the most part adopt the attitude that they have no serious critics and that sceptics ought to be content with their ongoing attempts to fine-tune their practice.

I knew there were critics, I just haven’t seen any that seemed reasonable enough to bother arguing with.

It is important to forcefully make the case that it owes its success primarily not to the — questionable — value of its moral theory but to its compatibility with political and economic institutions responsible for some of the very harms it addresses.

And which moral theory is not questionable? Deontology? Virtue ethics? I have heard commenters at ACX repeatedly reject utilitarianism on the basis of a thought experiment, but surely there are plenty of thought experiments that argue against deontology, virtue ethics, “follow the Bible”, or any other well-known system.

I suspect people are rejecting consequentialism/utilitarianism because they know about anti-utilitarian thought experiments but are unaware of (e.g.) anti-deontology thought experiments. Also, some people choose “I’ll do whatever I feel is right” … but that’s what consequentialists, deontologists, and virtue ethicists do too! Just with principles to guide them. (And sure, you save yourself from thought experiments if you have a “whatever I feel like” moral system, but only because your system is just a series of data points, like “I feel X is wrong and Y is right”, with no clear principles, specific to you and you alone, so philosophers won’t ever criticize it. But that doesn’t mean your personal system is the best one.)

Note also that if EA had been founded by ardent supporters of, say, deontology, there would be a bunch of critics saying it should be based on consequentialism; it is impossible to avoid criticism. There’s an Aesop fable about this, “The Man, the Boy, and the Donkey”.

reflection on EA reveals a straightforward example of moral corruption

But of course you won’t explain what you mean yet: you want the audience to stew on it for a while.

Some advocates argue that it is not necessary for effective altruists to be consequentialists. Others go further, claiming that EA is ‘independent of any theoretical commitments’. This last claim is false

We can steelman this as “in practice, EA steers people toward consequentialist giving”, and I agree. But consequentialism isn’t part of the GWWC pledge:

I hereby recognize that I can use part of my income to do a significant amount of good.

Since I can live well enough on a smaller income, I pledge that from now and for the rest of my life or until the day I retire, I shall give at least 10% of my income to whichever organisations can most effectively use it to improve the lives of others. I make this pledge freely, openly, and sincerely.

I interpret this as “most effectively according to my own opinion”. So Alice Crary could have attempted to convince individual EAs or EA organizations that they should disregard consequentialism and use some other basis for their decisions, but obviously that’s not her goal here.

self-avowed effective altruists have tended to [take] as their core value the sort of well-being capturable by the metrics of welfare economics

I don’t think that’s true. As an EA, I’m not familiar with the phrase “welfare economics”, but I certainly recognize that there are important aspects of well-being that are not captured by any metrics I am aware of, and I’m aware of this mainly because EAs often talk about it.

This abstract moral epistemology is one of the marks of a moral radicalism that, although sometimes criticised for the extent of its demands, gets celebrated by consequentialists. […] If we take well-being as a value, our ability to act so as to address suffering in any spot on earth places the same moral demands on us as does our ability to address the suffering of an unaccompanied toddler drowning in a shallow pond next to the road on which we’re walking. This radical twist…

The author seems to use “radical” to mean “unusual (and therefore bad)”. She’s trying to create negative feelings without making an actual argument.

Their abstract approach excludes any virtue-oriented view on which the rightness of actions is appropriately engaged responsiveness to circumstances, and this makes it seem more natural to account for rightness by looking to the value of actions’ consequences

Admittedly I know little about virtue ethics, but I don’t think “the rightness of actions is appropriately engaged responsiveness to circumstances” is a clear rule for deciding what is right or wrong. To me, a rule like the following is more clear: “to the extent that the consequences of our actions are predictable, the rightness of our actions depends on the (predictable) consequences of those actions compared to the (predictable) consequences of other actions we could have taken, according to our own judgement of which consequences are better. Also, this is a hard problem, so we should think really carefully about it.” Apparently Alice would disagree, but how is her opinion “righter” than mine, and also, just what is her opinion anyway?

But it is the idea that rightness is a matter of the value of quantifiable consequences, allowing for difficulties of juggling different classes of values, that makes it seem coherent to speak of single judgments about how to do the most good.

There is no single judgement; various EAs make different judgements about how to do the most good, and they debate those ideas. As to whether there may be, in principle, a single best action (which is impossible to determine with certainty in practice) according to a specific formulation of consequentialism, I think EAs would not have a consensus about that either. In short, this seems like a strawman.

EA’s god’s eye image of moral reflection constrains how we can conceive of ethical thought and practice, leaving no room for views intolerant of the idea that moral reflection proceeds from the standpoint of the universe

I haven’t heard EA-style consequentialism described as “moral reflection…from the standpoint of the universe” and I’m not sure how accurate such a description is, but I guess she’s saying that EAs “leave no room” for views “intolerant of the idea that moral reflection proceeds from the standpoint of the universe”. But assuming that’s true, I ask you: why should EAs provide “room” for people who are intolerant of common EA ideas? Like, if I welcome your intolerance, you’ll still be intolerant, right?

excluded are views — e.g. some Kantian constructivisms — that combine accounts of moral reflection as essentially perspectival with understandings of theoretical reflection as maximally abstract. Also excluded are views that combine accounts of moral reflection as essentially engaged with understandings of theoretical reflection on which such reflection likewise goes unregulated by an ideal of abstraction.

Really? How’s that?

EA’s Oxford-trained founders work in a philosophical tradition, indebted to classic empiricism, shaped by the assumption that subjective endowments have an essential tendency to obstruct our access to the world. Thinkers in this tradition often simply take it for granted that any genuine, objective aspects of the world are abstractly accessible. Acquaintance with local history suggests this posture is at least questionable.

I take this to mean “EA thinkers tend to be empiricists who think that people have cognitive biases and limited access to information, but at the same time they assume that the world has some objective properties that can be described theoretically”. And this is true, but I don’t understand her argument against this view.

To sideline this part of Anglophone philosophy is to overlook its most notable resources for criticising consequentialism

Sorry, but it’s hard not to sideline poorly-explained ideas that I find incomprehensible.

The most fully elaborated criticism of EA… is sometimes referred to as the institutional critique.

Don’t have time right now to read that. Is it a better critique than this one?

Advocates of the institutional critique […] decry the neglect, on the part of EA, of coordinated sets of actions directed at changing social structures that reliably cause suffering.

I think she’s saying “EA isn’t good at running political movements”. But

  1. EA orgs want to get better at that, e.g. consider this six-year-old article entitled “Effective altruists love systemic change”.
  2. EA orgs will eventually get better at that, and
  3. When they do get better at it, they will unavoidably receive much more criticism than they do today (donkey fable again).

EA’s metrics are best suited to detect the short term impact of particular actions, so its tendency to discount the impact of coordinated actions can be seen as reflecting ‘measurability bias’

But problems with metrics are discussed often in EA circles, and EAs also rely on intuition and surrogate measurements to fill in gaps where the thing we want to know has no accurate measurement. Hits-based giving is one example of EAs dealing with the severe uncertainties involved in finding the best interventions.

Another example is this popular post on the EA forum, “Growth and the case against randomista development”. If Alice is right, such examples shouldn’t exist or be popular.
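To illustrate the expected-value logic behind hits-based giving, here is a toy sketch. The probabilities and impact figures are invented by me; they don’t come from Crary, Open Philanthropy, or any real grant evaluation.

```python
# Hits-based giving, in caricature: a long shot can beat a safe bet in
# expectation even though it usually fails. All numbers are made up.

def expected_impact(probability_of_success: float, impact_if_success: float) -> float:
    """Expected impact = chance of success times impact if it succeeds."""
    return probability_of_success * impact_if_success

safe_bet  = expected_impact(0.95, 100)     # = 95.0
long_shot = expected_impact(0.02, 20_000)  # = 400.0

print(f"Safe bet expected impact:  {safe_bet}")
print(f"Long shot expected impact: {long_shot}")

# Under these made-up assumptions the long shot is ~4x better in expectation,
# despite failing 98% of the time. Whether real probabilities and impacts can
# be estimated well enough for this to be useful is exactly what EAs debate.
```

That, in caricature, is why a funder might back a portfolio of speculative projects while expecting most of them to fail.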

The institutional critique of EA can be brought to bear on Animal Charity Evaluator’s 2019 ratings. Animal Charity Evaluator’s favouring of welfare improvements in the conditions of farmed animals can be taken to reflect forms of (‘measurement’) bias in its metrics, which are best suited to detect the outcomes of simpler efforts with clear short term impacts. This orientation speaks for striving to change the methods of meat companies in ways that leave unquestioned the larger political context in which the companies operate. The result is that, despite its sincere pro-animal stance, Animal Charity Evaluator is at risk of strengthening an industrial agricultural system that reproduces horrific animal suffering on a massive scale.

Well, if you’ve got a more effective way to reduce animal suffering that only needs a few thousand people to accomplish, let’s hear it. (Does Crary even understand that EAs are all about effectiveness? It’s right there in the name.) But if your solution is something like “run a conventional political movement”, well, conventional political movements have been around for a long time and their track record leaves something to be desired. I mean, everybody has known about global warming since 1988, and 33 years later we still haven’t reached “peak oil”. In contrast, plant-based meat companies have been around for just a few years and are already displacing factory-farmed meat. (I know, EAs probably can’t take most of the credit for that, but we’re certainly big fans. I had two Beyond burgers last night; how about you, Alice?)

While replies to the institutional critique bring out that there is room to include collective actions among EA’s objects of assessment, […] they leave unexamined questions about whether it is confused to insist on causal effects as the standard for evaluating collective attempts to change the normative structure of society.

These attacks charge that it is morally and philosophically problematic to construe moral reflection as abstract.[18] […] Effective altruists who respond to the philosophical critique take Williams to be urging us to protect our integrity even at the cost of doing the wrong thing.[19]

“protect our integrity at the cost of doing the wrong thing”? That sounds like an oxymoron; surely you jest? Also, [19] is actually two references, the first of which is 36 pages long… I don’t have time to read it, but somehow I doubt that “protect our integrity at the cost of doing the wrong thing” is a fair summary.

they dismiss these critics’ gestures as without philosophical interest

I bet it’s more like “they argue that these critics are misguided and explain why”

It is not difficult to develop philosophical critics’ worries about a god’s eye morality so that they rise to the level of a devastating objection. All that is required is to combine worries about point-of-viewless moral reflection with views about values […] on which concepts of values determine neutrally unavailable worldly patterns.

Um… what?

Why should effective altruists take seriously an attack on a philosophical worldview that many of their colleagues take as an unquestioned starting point? The late twentieth and early twenty-first centuries witnessed significant philosophical assaults on abstract conceptions of reason, and there is a notable philosophical corpus in which the merits of these assaults get debated. Although it is by no means obvious that those who favour abstract views have better arguments, and although their interlocutors raise fundamental questions about these views’ tenability, abstract construals of reason have for more than half a century played an organising role in the discipline of philosophy […]. This suggests that the construals’ staying power is at least partly a function of ideological factors independent of their philosophical credentials. That — the fact that these conceptions of reason are manifestly open to contestation — is one reason why effective altruists should attend to a philosophical critique that depends for its force on rejecting abstract images of reason. A second reason for effective altruists to attend to the philosophical critique has to do with the seriousness of the moral charge it levels against them. It alleges nothing less than that their image of the moral enterprise is bankrupt and that moral assessments grounded in this image lack authority.

I think she’s saying “EAs should take my attack seriously because

  1. those who favour abstract views are biased by ideology and
  2. I am making a very serious charge that your moral enterprise is bankrupt.”

This means, apparently, that she has decided against explaining what in the world she was talking about in the previous paragraph.

Effective altruists invite us to regard the rightness of a social intervention as a function of its consequences, with the outcome involving the best states of affairs counting as doing most good. This strategy appears morally confused when considered in terms of the ethical stance of the philosophical critique. To adopt this stance is to see the weave of the world as endowed with values that reveal themselves only to a developed sensibility. To see things this way is to make room for an intuitively appealing conception of actions as right insofar as they exhibit just sensitivity to the worldly circumstances in question. This is consistent with allowing that right actions can have the end of promoting others’ happiness or flourishing. Here acting rightly includes acting, when circumstances call for it, in ways that aim at the well-being of others, and, with reference to this benevolent pursuit of others’ well-being, it makes sense to talk — in a manner that may seem to echo effective altruists — about good states of affairs. But it is important that, as Philippa Foot once put it, ‘we have found this end within morality, forming part of it, not standing outside it as a good state of affairs by which moral action in general is to be judged’.[24] Here right action also includes acting, when circumstantially appropriate, in ways that aim at ends — e.g. giving people what they are owed — that can conflict with the end of benevolence. Apt responsiveness to circumstances sometimes requires acting with an eye to others’ well-being and sometimes with an eye to other ends. In cases in which it is not right to attend to others’ well-being, it is incorrect to say that, because we haven’t thus attended, we achieve a morally worse result. Things only seem this way if we allow our understanding to be shaped by what now appears to be a confused understanding of morality. What we should say is that the result we wind up with is morally best. That is what it comes to to say that, within the context of the philosophical critique, there is no room for EA-style talk of ‘most good’.

In summary, “the philosophical critique” uses a non-consequentialist moral framework in which “there is no room for EA-style talk of ‘most good’”.

But if there is no “most good”, does that mean there cannot exist “more” or “less” effective ways to make the world a better place? Or is it rather that “most good” is in the eye of the beholder? If you interpret EAs as trying to move the world toward a state that they individually consider better from their own perspective, I think that’s an accurate interpretation. Again, the “standpoint of the universe” thing is misleading; EAs aren’t saying “there’s One Best Thing”, though coincidentally different EAs tend to agree in broad strokes about various goals and methods, as humans often do.

Yes, philosophically they tend to use consequentialist analysis, but look: Alice still hasn’t proposed an alternative. She’s done little more than point out that some philosophers disagree. We don’t need a big long essay to tell us that. We need a big long essay to explain the disagreement in plain language, using examples or case studies.

This critique alleges that EA’s claim to be doing the most good founders on a misunderstanding of the nature of morality…

Again, Crary says there is a critique but doesn’t care to say what it is.

The resulting composite critique presupposes, in line with the philosophical critique, that values are essentially woven into the texture of the social world and that EA’s Archimedean take on moral reflection deprives it of resources needed to describe — irreducibly normative — social circumstances.

I don’t think this tells us how to decide how to donate money or make the world better. Or is that an unworthy goal in the first place? Also, has anyone actually adopted this other philosophy? Or is it better to do nothing rather than to donate using a philosophy Crary regards as wrong?

The upshot of this new line of criticism is an update of the institutional critique, charging that EA cannot give accurate assessments of sets of actions because it forfeits capacities necessary for all social assessment. This means that the tendency of EA-affiliated organisations to wrongly prioritise evaluation of the proximate effects of particular actions is not a fixable methodological flaw.

The surface-level logic here is a non sequitur, and Crary hasn’t really tried to make a case that EAs prioritise wrongly, except to say “look, references! it’s in the references!”

It is often right to act in ways that aim to improve the welfare of others.

So, it’s often wrong too, even if it is effective?

But recognising the instances in which this is (or isn’t) right requires capacities for engaged social thought that EA disavows.

EAs do not disavow engaged social thought.

Further, when it comes to evaluating actions coordinated with an eye to social transformation, EA’s image of the moral enterprise is patently implausible. Such actions are efforts to restructure the normative organisation of society, and their relevant ‘effects’, far from obeying merely causal laws, are at home in the unpredictable realm of politics. Attempts to evaluate these efforts in EA’s terms are manifestly confused.

Look, we’ve all heard of the Law of Unintended Consequences, but knowing about it doesn’t mean we ignore it, nor does it prove we should abandon all hope of trying to predict the consequences of actions once they cross some invisible line into “politics”. Indeed, how is the debate we’re having now not already political? Also, what’s with “effects” being in scare quotes, and what’s the purpose of the word “merely” here?

So, I’ve passed the halfway point of the essay, and I have decided that she isn’t actually making an argument; she’s just gesturing vaguely in the direction of arguments that other people have made. To anyone who has time to investigate those: let me know what you find.

Update: other EAs have responded to a similar critique, also by Alice Crary, here and here. As the second link says, “I found the reasoning in this talk perhaps more compelling than any other broad external critique of EA that I have read before.” It appears that Crary first wrote a letter to someone urging them not to join Effective Altruism and then converted it into the current slightly-more-abstract-and-scholarly-sounding form with references.
