Written as part of the Giving What We Can September internship; status: draft
An open problem in the giving community is whether all of an individual’s donations should be directed towards a single charity, or split between multiple charities. This question is particularly relevant to the effective altruism movement, whose goal is to bring about the greatest positive impact, since the impact per dollar of different charities can differ by factors of tens, hundreds, or even thousands. Data from Disease Control Priorities in Developing Countries, 2nd Edition (DCP2), a comprehensive collection of articles on setting priorities in global health, estimate the cost-effectiveness of 108 interventions against different types of illness, in terms of disability-adjusted life years (DALYs) averted per $1,000, shown in the graph below. DALYs are a common measure in global health, representing the years of life lost to premature death, plus the number of years lived with a disability multiplied by a weighting of its severity.
These cost-effectiveness estimates range from 0.02 to 300 DALYs averted per $1,000, with a median of 5, meaning that the best intervention might produce 15,000 times the benefit of the worst, and 60 times the benefit of the median, so an effective altruist should focus on the interventions at the top of this range. The distribution is also heavily skewed, suggesting that the most cost-effective interventions are rare, so one should be careful when researching interventions and deciding how to split their giving between them.
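The ratios quoted above follow directly from the DCP2 estimates:

```python
# Cost-effectiveness figures from DCP2, in DALYs averted per $1,000
best, worst, median = 300.0, 0.02, 5.0

assert round(best / worst) == 15_000  # best vs. worst intervention
assert best / median == 60            # best vs. median intervention
```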
While an economist might argue that it is clear that a purely altruistic and rational small individual donor should always choose the action which maximises expected utility, there are a whole host of reasons of varying validity that one should consider when deciding whether to diversify their donations, whether psychological, social, economic, philosophical, or otherwise.
In this literature review of effective altruist writing and discussions, as well as academic research, we will look at the main reasons donors already give to multiple charities, explore whether these considerations are justified in light of trying to do the most good, and see if there are any other compelling reasons for diversifying or not. In doing so, we will assume that the donor is aligned with effective altruism, that is, they care about donating to the most effective charities, and doing the most good with the resources at their disposal. Finally, we will also look at how the size of the donation itself changes some of the kinds of questions we should ask.
- What are the main reasons why donors diversify their donations? How valid are these reasons?
- Are there other valid reasons for donation diversification?
- How does the size of a grant affect the question of whether donations should be diversified?
An important concept for cost-effectiveness is expected value: an average of the value, or “goodness”, of outcomes, weighted by the probabilities of those outcomes occurring. For example, suppose we are giving $100 and have a choice between charity A, which runs an intervention distributing deworming pills to children for $1 each with 90% effectiveness, and charity B, which does the same but whose pills cost $0.50 with 70% effectiveness. The expected number of children dewormed by giving to charity A is 100/1*0.9 = 90, whereas it is 100/0.5*0.7 = 140 for charity B, so all else equal we should donate to B, which maximizes the expected value.
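The calculation can be sketched as follows, using the hypothetical numbers from the example:

```python
def expected_dewormings(budget, price_per_pill, effectiveness):
    """Expected number of children successfully dewormed."""
    return budget / price_per_pill * effectiveness

charity_a = expected_dewormings(100, 1.00, 0.90)  # 100/1   * 0.9 = 90
charity_b = expected_dewormings(100, 0.50, 0.70)  # 100/0.5 * 0.7 = 140
assert charity_b > charity_a  # charity B maximizes expected value
```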
This is the argument most frequently given in favour of small donors giving to a single charity, that maximizing expected value follows mathematically from the goal of effective altruism of maximizing positive impact (we discuss how things change for large donors, such as foundations, later). In principle, there must exist a most cost-effective charity with the highest expected value, or at least one which we can estimate to the best of our knowledge at the time of donating, so Landsburg writes the following against diversifying, in his example between CARE and a cancer society:
Giving to either agency is a choice attached to a clear moral judgment. When you give $100 to CARE, you assert that CARE is worthier than the cancer society. Having made that judgment, you are morally bound to apply it to your next $100 donation. Giving $100 to the cancer society tomorrow means admitting that you were wrong to give $100 to CARE today.
You might protest that you diversify because you don’t know enough to make a firm judgment about where your money will do the most good. But that argument won’t fly. Your contribution to CARE says that in your best (though possibly flawed) judgment, and in view of the (admittedly incomplete) information at your disposal, CARE is worthier than the cancer society. If that’s your best judgment when you shell out your first $100, it should be your best judgment when you shell out your second $100.
That is, if one expects CARE to be the most effective charity when giving $100, they should also expect it to be the most effective charity when giving another $100 the day after, assuming they have learned no new information in the meantime, since $200 should be expected to accomplish exactly twice as much good as $100, an individual donation being unlikely to hit diminishing returns.
While convincing on the face of it, this argument does not take into account several important points: the quirks of human psychology and behaviour that make diversifying seem so compelling, some of which may even increase total impact in the long run; the social impact of our giving; the difficulty of making decisions under uncertainty and of explicitly estimating cost-effectiveness; and the challenges of coordinating with charities and as a community.
What are the main reasons why donors diversify their donations? How valid are these reasons?
Heuristics and Biases
Landsburg’s idea relies on expected utility theory, but there is a great deal of literature suggesting that actual human behaviour often departs from this model in systematic ways. Kahneman and Tversky demonstrated this when developing prospect theory in 1979, showing that people are loss averse: the utility gained from acquiring some object is less than the utility lost from giving it up.
One famous experiment in which this effect was displayed was conducted by Kahneman, Knetsch, and Thaler (1990). In a classroom setting, students were seated, and a decorated mug, worth about $5 at retail, was placed in front of one third of them (the “sellers”), along with a questionnaire stating that they now owned the mug, and could indicate whether they would like to sell it, and asked to name a price, between $0.50 and $9.50, in steps of $0.50. Some students (the “choosers”) who had not received a mug were also given a questionnaire, giving them the option of either receiving a mug or some amount of money, and they indicated their preferences between them, also ranging from $0.50 to $9.50.
The sellers would frame the experiment as a loss: they could retain the mug, or exchange it for a sum of money. The choosers, by contrast, thought of it as a gain: they began with nothing but would receive either a mug or some money, so they strictly stood to do better than they started. The results were striking: in two tests, the median valuations of the mug were $7.00 and $7.12 for the sellers, and $3.12 and $3.50 for the choosers. While the choices the two groups faced were equivalent, simply giving the sellers ownership of the mug made them value it twice as highly as the choosers, even though it is very unlikely they would have paid that much to purchase one had they not been granted ownership.
In Choices, Values and Frames, Kahneman and Tversky (1983) describe another experiment in which the majority of subjects were shown to be averse to risk, preferring option A, a sure gain of a smaller reward, $800, over option B, a choice with a higher expectation but a lower probability, 85%, of a greater reward, $1,000. The expected value of A is $800, since it is guaranteed, but the expected gain from B is 0.85*$1000 + 0.15*$0 = $850, so the decision runs counter to expected utility theory. This might imply that people are more likely to donate to a cause with a high probability of a small effect, over one with a lower probability of having a huge effect, which may not make sense for a person trying to maximize their impact.
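The two options can be compared with a small expected-value calculation:

```python
def expected_value(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

option_a = expected_value([(1.00, 800)])              # the sure $800
option_b = expected_value([(0.85, 1000), (0.15, 0)])  # the risky $1,000
assert option_b > option_a  # B has the higher expectation, yet most prefer A
```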
How are these findings relevant to donation diversification? Kuhn (2014) writes in “How many causes should you give to?”:
The feeling of higher stakes is basically because of loss aversion bias—it feels disproportionately worse to make one really bad choice (like donating everything to your top charity when the second-place one later turns out to be better) than to make a lot of slightly bad choices (like splitting your donation among your top five charities when your top charity was actually the best one). There’s just as much potential to make a bad decision when donating to multiple charities, but there’s less potential to feel bad about it.
People tend to donate periodically, such as every few months or once a year, rather than in a single lump sum, and their expectations and rankings of charities’ or interventions’ cost-effectiveness change over time as they learn new information. If people feel disproportionately worse when their estimates fall than better when they rise, this may affect their willingness to donate in future should they discover that a past donation was not as effective as they had hoped, so it may make sense for an effective altruist to donate to multiple top causes if they expect this to take a toll on them in the long run. Indeed, Snowden (2015) argues that all that matters is the possibility of hearing bad news, which seems higher for effective altruists, who often keep up with recommendations and changes in rankings by organisations like GiveWell, and with news in global health, than for the general population of givers:
In reality, [a donor] might never find out the actual impact of his donation. But the argument still holds if there is some risk that new evidence may come to light, suggesting that the charity to which he donated had less of a chance of doing good than previously thought. For example, a recent re-evaluation of a landmark study on mass deworming found that errors in the calculations meant that the positive spillover effects of mass deworming treatment on school attendance were no longer significant (Aiken, Davey, Hargreaves, & Hayes, 2015; Miguel & Kremer, 2004).
Snowden goes on to say that, for the reason above, if a set of charities are sufficiently close in their estimated impact, and one expects to continue giving in the future, it may be rational from a purely altruistic perspective to split, and a different decision theory, rather than straightforward expected utility maximization, may be more appropriate:
Nevertheless, as [the donor’s] past and future selves can be considered as holding separate sets of preferences, it is accurate to describe [the donor’s] past self as purely altruistic while being aware that, under certain conditions, his future self might fall short of this lofty ideal. His past self therefore adopts a sophisticated choice procedure to realise the outcomes which are in fact the object of his past self’s purely altruistic preferences.
A common reason for donating to multiple charities might be scope neglect, where the valuation of a problem is not proportional to its size; the best-known example was shown by Desvousges et al. (1992). In this study, three groups of subjects were told that varying numbers of migrating birds die each year by drowning in uncovered oil ponds, which could be prevented by covering the ponds with nets, and were asked how much they would be willing to pay to provide the nets. For 2,000, 20,000, and 200,000 birds, the mean responses were, respectively, $80, $78, and $88. Since each intervention achieves ten times as much good as the last, people should seemingly be willing to pay roughly ten times more each time.
Carson and Mitchell (1995) showed that the effect also occurred when valuing human lives, when by increasing the supposed risk associated with chlorinated drinking water by a factor of 600 from 0.004 to 2.43 annual deaths per 1000, people’s willingness to pay increased merely by a factor of 4, from $3.78 to $15.23. Another study found no effect when varying human lives saved by a factor of 10.
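The mismatch in the Carson and Mitchell figures can be made explicit:

```python
risk_low, risk_high = 0.004, 2.43  # annual deaths per 1,000 people
wtp_low, wtp_high = 3.78, 15.23    # mean willingness to pay, dollars

# The stated risk rose roughly 600-fold...
assert 600 < risk_high / risk_low < 610
# ...but willingness to pay rose only roughly 4-fold.
assert 4.0 < wtp_high / wtp_low < 4.1
```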
There are a few explanations for this phenomenon. The most widely accepted is that people tend to imagine just a single bird suffering and have difficulty visualizing large numbers, called “judgment by prototype”, which might cap how much one is willing to give to any particular cause, leading them to spread their charitable giving over multiple causes instead. Another contender is the “purchase of moral satisfaction”: a person’s willingness to pay is unrelated to the birds, and is instead a fact about their psychology, namely that they will spend just enough to create a “warm glow” feeling. If donating $100 to one charity feels as rewarding as giving $1,000 to it, people may spread their giving out to create multiple “warm glows”.
Finally, Harrison (1992) describes a “good cause dump” explanation, that people have a certain amount they are willing to give to a cause such as “the environment”, and any intervention in the area elicits this amount.
Overall, these seem like biases for effective altruists to try to avoid. If one knows that a cause is much more cost-effective than any other, they should be willing to give the majority, if not all, of their donations there, regardless of the number of “warm glows” felt. However, if multiple causes are sufficiently close together in cost-effectiveness, it may be worth diversifying if one expects the “warm glows” (or risk of “loss”) to lead to greater donations in the future, as Snowden (2015) argued. For the first case, Yudkowsky (2009) argues that people should “purchase fuzzies and utilons” (“warm glows” and trying to maximize utility) separately, since it is more efficient:
But the main lesson is that all three of these things—warm fuzzies, status, and expected utilons—can be bought far more efficiently when you buy separately, optimizing for only one thing at a time. Writing a check for $10,000,000 to a breast-cancer charity—while far more laudable than spending the same $10,000,000 on, I don’t know, parties or something—won’t give you the concentrated euphoria of being present in person when you turn a single human’s life around, probably not anywhere close. It won’t give you as much to talk about at parties as donating to something sexy like an X-Prize—maybe a short nod from the other rich. And if you threw away all concern for warm fuzzies and status, there are probably at least a thousand underserved existing charities that could produce orders of magnitude more utilons with ten million dollars. Trying to optimize for all three criteria in one go only ensures that none of them end up optimized very well—just vague pushes along all three dimensions.
“Variety is the spice of life” is often true when eating different foods throughout the day or watching movies, or when reducing risk in investments, but people seek variety even when they no longer have good reason to do so, especially when making simultaneous rather than sequential choices (Read et al. 1995, Fox et al. 2005). In one study by Fox et al. (2005), participants were given small amounts of money and asked where it should be donated: one group was asked how to split between local and international charities, and the other between five different charities (one international, four local). The median responses were 50% and 20% respectively, an exactly equal split between the available choices, which seems very unlikely to be optimal. In the context of giving, diversification bias could cause people to donate to multiple charities even when they could have accomplished much more good by giving to a single one. Plausibly, if this bias is due to the “warm glow” effect, and scope neglect fades such that a donor would feel a similar warm glow the next time they give to the same charity (this may not be the case), then people who want to combat this bias should commit to donating to one cause at a time, sequentially during the year, rather than doing all their donating in one go, such as at the end of the year.
In Heuristics and Biases in Charity, Baron and Szymanska (2010) break down the bias when applied to charity into three possible explanations: One is the desire to feel multiple “warm glows”, discussed in Scope Neglect, the second is the failure to distinguish between maximization and allocation (or discomfort in doing so), and the third is a tendency to think of donating to charities like investments.
For the second, Ubel et al. (1996) told their subjects:
All else equal, kidney transplants are more likely to succeed if they are transplanted into patients who ‘match’ the kidney donor. The number of successful transplants one year after transplant is eight out of ten for patients with a complete match and seven out of ten for patients with a partial match.
They were then asked how to maximize the success rate when allocating 100 kidneys between two groups: 100 patients with a complete match, and 100 patients with a partial match. Fewer than 20% of the participants gave the correct answer (give all the kidneys to the complete-match group), and most said to allocate the kidneys equally, 50 to each group.
Another study by Ubel et al. (1996) in the same year highlighted the discomfort many people, in this case a set of prospective jurors, medical ethicists, and experts in medical decision making, show when making trade-offs based on cost-effectiveness. The groups were asked to choose between two screening techniques for a population at low risk of colon cancer: one was cheaper and would save 10% more lives for the same cost (1,100 versus 1,000), but the screening would only work for half the population, whereas the other was slightly more expensive but applicable to the whole population.
Of the three groups, 56% of prospective jurors, 53% of medical ethicists, and 41% of experts in medical decision making recommended the less cost-effective method, mostly justified by concepts of equality and fairness, but some saying that it would be politically infeasible to offer the screening to only half the population, or that the benefit in lives saved for doing so was too small to justify it.
The final explanation given is an analogy with investment: if it is reasonable to diversify there, why not when donating to charity? Diversifying makes sense when investing because there are diminishing returns to wealth: the first $10,000 we earn is more valuable than the second, since we buy the things we need most first, and if we have $10,000, losing all of it hurts much more than gaining another $10,000 would help. This means investing in multiple assets is worthwhile to protect against the risk of losing everything, despite the fact that we gain less money in expectation. In charity, however, an individual’s donation is unlikely to hit the point of diminishing returns in utility (except when the donations are especially large), and we can expect each dollar given to do roughly an equal amount of good. The risk from loss aversion, rather than from diminishing returns to wealth, might be a reason to diversify, if one expects a donation which turns out to be ineffective to negatively affect their giving in the long run.
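The asymmetry can be made concrete with a toy utility function. Log utility is a standard stand-in for diminishing returns to wealth, and the impact-per-dollar figure below is purely hypothetical:

```python
import math

def utility(wealth):
    """A concave utility function: diminishing returns to wealth."""
    return math.log(wealth)

# Losing your $10,000 hurts more than gaining another $10,000 helps:
pain_of_loss = utility(20_000) - utility(10_000)
joy_of_gain = utility(30_000) - utility(20_000)
assert pain_of_loss > joy_of_gain  # so diversifying investments is rational

# A small donor's impact, by contrast, is roughly linear in dollars,
# so each dollar does about equal good and nothing is lost in
# expectation by concentrating on the single best charity.
impact_per_dollar = 0.005  # hypothetical DALYs averted per dollar
assert math.isclose(2000 * impact_per_dollar,
                    2 * (1000 * impact_per_dollar))
```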
There is also evidence that people diversify just for the sake of it: Baron and Szymanska (2011) ran three studies asking questions along the lines of: “A can save one life for $10,000. B can save one life for $12,500. The people helped are from the same groups, with the same problems.” The mean allocations to B in the three similarly phrased questions were between 12% and 18%, and between 38% and 43% of subjects gave B at least some of the funding. In other studies they conducted, many people allocated some funds to a less effective charity that ran multiple projects over a more effective charity which ran only one project.
There is quite a wide literature suggesting that people give to charity for selfish reasons such as to gain the social benefits, as well as altruistic ones. For example, when charities publicize donations, they often assign donors to categories based on the amount donated rather than showing the exact amount, and studies have shown that donations tend to be concentrated around the lower ends of each bracket (Glazer et al. 1996, Harbaugh 1998). A neuroscientific fMRI study (Izuma et al. 2009) showed that when participants were asked to choose between giving some money to charity or keeping it for themselves, both with and without an observer, there were high activations in the striatum (which plays a large role in the brain’s reward system) when donating the money while being observed (gaining a social reward), and when keeping the money in the absence of observers (monetary reward without social cost). It can also be a means to signal income or wealth (Glazer et al. 1996) or trustworthiness (Fehrler 2010). Peer pressure from donating in pairs causes people to give larger amounts, but to be less happy with doing so (Reyniers et al. 2013).
In a talk (Alexander 2013), Robin Hanson of the blog Overcoming Bias argued that signalling made donating to a single charity feel difficult:
Then he started talking about how you should only ever donate to one charity – the most effective. I’d heard this one before and even written essays speaking in favor of it, but it’s always been very hard for me and I’ve always chickened out. What Robin added was, once again, a psychological argument – that the reason this is so hard is that if charity is showing that you care, you want to show that you care about a lot of different things. Only donating to one charity robs you of opportunities to feel good when the many targets of your largesse come up and burdens you with scope insensitivity (my guess is that most people would feel more positive affect about someone who saved a thousand dogs and one cat than someone who saved two thousand dogs. The first person saved two things, the second person only saved one.)
If people are more likely to donate, and donate more, when given the opportunity to gain the social rewards, this seems like a plausible mechanism for why people give to multiple charities: there are more opportunities for the causes to appear in conversation, or for the donation to be seen in other public places. This is a concern for effective altruists, but again if one expects these charities to be close in cost-effectiveness, the opportunity to talk about a range of causes with people might be beneficial, since it may help in spreading effective altruist ideas. Kaufman (2013) writes that if effective altruists give to one charity this might put others off the movement:
It can also be worth it to give to multiple organizations because of what it indicates to other people. I help fund 80,000 Hours because I think spreading the idea of effective altruism is the most important thing I can do. But it looks kind of sketchy to only give to metacharities, so I divide my giving between them and GiveWell’s top pick.
As Landsburg argued at the beginning of Findings, if we know or expect that some charity A is more effective than charity B by an expected value calculation, we should be willing to give all our donations to A; and if the two are suspected to be roughly equal, we should still be willing to give everything to A, since in that case the allocation would not matter. As we have seen in Diversification Bias, when people are making difficult trade-offs, there is a tendency to choose variety (diversify) even when there is a clearly superior choice, or to declare the choices incommensurable. Karnofsky (2011), writing for GiveWell, calls this “denying the choice”:
One of the things that has surprised us about the world of charity is how many people insist on answering, “Both” or “You can’t/shouldn’t be asking that question.” To them, all that matters is whether a charity does some good, not how much good it does or how it compares to other options.
In a world with limited resources, improving or saving more lives seems strictly better, and it seems we have to make the tough call in this case. However, in many situations the choice is not so clear, and explicitly calculating expected values is hard or even impossible: our formulae might leave out unknown variables, such as long-term flow-through effects; intuitions and heuristics about interventions, such as those gained from many hours of talking with experts in the area, are difficult to quantify or even communicate; and there are large uncertainties in comparing things like improving the probability of a positive far future, which could affect a huge number of lives, with giving to global health organizations, or in weighing the suffering of animals in factory farms and the wild against that of humans.
Karnofsky (2011) lays out other misgivings with an explicit expected value (EEV) approach. One is the possibility of Pascal’s mugging: interventions with tinier and tinier non-robust, unreliable probabilities of working out may seem worth donating to because their expected payoffs, such as positively affecting or enabling trillions of lives in the far future, are so large. EEV also does not reward better-grounded estimates arising from new information, or penalize ignorance. Nor does it allow for heuristics and Bayesian priors: for example, being skeptical of most interventions in general and so looking for fairly strong evidence of impact, putting less weight on conclusions that defy common sense, or more on ones supported by multiple perspectives.
However, these are not arguments against maximizing expected value, just for adopting a different, less explicit approach when doing so: it still seems worth making the judgment call and trying to donate to the single most cost-effective charity. (Again, this might change depending on how comfortable one is with the uncertainty from loss aversion.)
It is possible that committing to only donate to one’s top charity could reduce the effect of uncertainty, by incentivizing donors to be more rigorous when making their decision:
Deciding on only one charity involves projecting extremely dissimilar things (e.g. global health interventions and catastrophic risk reductions) onto a single axis (good done per dollar) and comparing them. If you give yourself the option, it’s very tempting to throw up your hands and declare that they’re incommensurable so you have to donate some money to both. Deciding only to donate to whichever cause comes out on the very top in your analysis makes the stakes feel higher, which is a strong motivator to do your work thoroughly and reduce your uncertainty as much as possible.
Are there other valid reasons for donation diversification?
Coordinating many donors (as opposed to the different coordination problems which arise from being a single big donor, discussed later and briefly at the end of this section) can be an issue when lots of relatively small donations add up to a large total, such that they fill a charity’s funding gap and hit diminishing returns, while other top charities worth funding, which may become more cost-effective than the initial charity past this point, end up with less than they needed. This is similar to the objection that if everybody gave to the charity with the highest expected value (e.g. GiveWell’s first-ranked), it would have all the funding, and the other recommended charities would have nothing. Hoskin (2013) argues that this should not be a problem, since most people are unlikely to agree on the best charity to give to, and that even if they do (e.g. they are effective altruists trying to maximize their impact), once donors hit diminishing returns on this charity, the next donors could move on to the next most cost-effective one. However, as effective altruism grows, it seems enough people could agree on the best charity for this to become a problem. Kuhn (2014) gives an example, and explains why coordinating donations could be difficult in practice:
For instance, suppose that SCI and GiveDirectly are both soliciting donations from 100 donors with $1,000 each. Suppose that SCI is more cost-effective initially, but eventually diminishing returns kick in, so the globally optimal allocation is $70,000 to SCI and $30,000 to GiveDirectly. (And furthermore, suppose that everyone has perfect knowledge of cost-effectiveness.) If the individual donors only donate to the charity that looks more cost-effective when they make their donation, then everyone will donate to SCI, which is suboptimal. If, instead, everyone donates in proportion to the optimal outcome, then everyone will give $700 to SCI and $300 to GiveDirectly, which achieves the optimum.
This problem is mitigated if the donors donate at different times—for instance, if the first 70 donors give everything to GiveDirectly, then the last 30 could notice that GiveDirectly now looks under-funded and make up the gap. But this requires donors to donate frequently, and organizations to give frequent updates on their current funding level; it seems to me that current levels of transparency and donation frequency aren’t high enough to make this work very well.
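Kuhn’s example can be sketched as a toy simulation. The cost-effectiveness curves and numbers below are hypothetical, loosely echoing the quote:

```python
def marginal_good(charity, funded):
    """Hypothetical good done per dollar: SCI starts out more
    cost-effective but hits diminishing returns once it has
    received $70,000; GiveDirectly's returns stay flat."""
    if charity == "SCI":
        return 3.0 if funded < 70_000 else 0.5
    return 1.0  # GiveDirectly

def simulate(donors=100, gift=1_000, live_updates=True):
    funds = {"SCI": 0, "GiveDirectly": 0}
    total_good = 0.0
    for _ in range(donors):
        if live_updates:
            # Donors see current funding levels and give to whichever
            # charity has the higher marginal value right now.
            choice = max(funds, key=lambda c: marginal_good(c, funds[c]))
        else:
            # Without updates, everyone gives to what looked best
            # at the outset: SCI.
            choice = "SCI"
        total_good += gift * marginal_good(choice, funds[choice])
        funds[choice] += gift
    return funds, total_good
```

With these hypothetical curves, the live-updates run reaches the optimal $70,000/$30,000 split, while the no-updates run sends all $100,000 to SCI past its funding gap and does less total good; the gap between the two is exactly the coordination failure Kuhn describes.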
GiveWell has also recommended diversifying among its top charities according to the proportions allocated on its website:
For donors who think of themselves as giving not only to help the charity in question but to help GiveWell, we encourage allocating your dollars in the same way that you would ideally like to see the broader GiveWell community allocate its dollars. If every GiveWell follower follows this principle, we’ll end up with an overall allocation that reflects a weighted average of followers’ opinions of the appropriate allocation. (By contrast, if every GiveWell follower reasons “My personal donation won’t hit diminishing returns, so I’ll just give exclusively to my top choice,” the overall allocation is more likely to end up “distorted.”)
Tracking donations could be especially hard during “Giving Season”: Network for Good’s Digital Giving Index for 2015 reports that 30% of online donations occurred in December, and 11% in just its last three days.
Another complication with continuously updating donors on the funding needs of organisations is the following game-theoretic problem: if two (or many) donors, Alice and Bob, are thinking of giving to the same charity but are worried about room-for-more-funding concerns, each is incentivized to wait and see what the other does, since if the charity fills its funding gap, one of them can instead donate to their next favourite cause, which seems unfair to the other, or not donate at all, which seems worse. In response to this problem, Karnofsky (2014) said:
Posting too many updates would risk a similar problem to the one described above. We don’t want a situation in which each donor’s gift to charity X causes another donor to give less to the charity; this would create an incentive for donors to try to wait each other out.
We do intend to post a notice when a given charity appears sure to hit its “maximum” – the most we think it could absorb before hitting a point of seriously diminishing returns. This will help donors avoid supporting a charity to go well over what we think is a reasonable amount of funding, without allowing donors to “cancel each other out” at a more granular level. In other words, if you give to a charity, you can expect that your donation will raise the total amount the charity takes in, by the amount of your donation – unless the charity ends up easily hitting its maximum, in which case your donation will be partly offset by the behavior of other donors who agree with us about where the maximum ought to be. We think this is a reasonable practical position.
It is possible that moral uncertainty is a reason for diversifying, depending on how plausible one believes different moral frameworks to be, as Bostrom writes on Overcoming Bias:
It seems people are overconfident about their moral beliefs. But how should one reason and act if one acknowledges that one is uncertain about morality – not just applied ethics but fundamental moral issues? if you don’t know which moral theory is correct?
It doesn’t seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel; because many moral theories state that you should not always maximize expected utility.
Karnofsky made a comment along these lines in “Hits-based giving”:
Something we haven’t written about yet, though we have talked about it at events, is the idea that we want to entertain multiple different plausible worldviews and recommend some giving based on each. Such an approach could mean having a significant chunk of our giving that behaves as though the “Astronomical Waste” argument is entirely correct; this would still be consistent with putting limited weight on the argument overall. By a worldview that estimates a very large number of potential future generations (along with some other assumptions), mitigating potential risks from advanced AI seems considerably more important than speeding the end of factory farming; by a worldview that looks less far into the future and has a relatively high estimate of the moral relevance of animal suffering, I believe that other animal-welfare-oriented reforms (such as corporate campaigns) seem more promising than funding work on cultured meat.
MacAskill (2014) discussed similar ideas to Bostrom in his thesis, arguing for maximizing “expected choice-worthiness”.
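A minimal sketch of what maximizing expected choice-worthiness might look like. The credences and scores below are invented for illustration, and it assumes (contentiously, as MacAskill discusses) that choice-worthiness can be compared across theories on a common scale:

```python
# Toy illustration of maximizing expected choice-worthiness under moral
# uncertainty. All numbers here are invented for the example.

# Credence in each moral theory (must sum to 1).
credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choice-worthiness of each option under each theory, on a common scale
# (a strong assumption; intertheoretic comparisons are contested).
choiceworthiness = {
    "donate_all_to_top_charity": {"utilitarianism": 10, "deontology": 2},
    "split_between_two":         {"utilitarianism": 8,  "deontology": 6},
}

def expected_choiceworthiness(option):
    """Credence-weighted average of an option's choice-worthiness."""
    return sum(credences[t] * choiceworthiness[option][t] for t in credences)

best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)
```

On these invented numbers the diversified option wins overall even though the agent's most-credenced theory favours concentrating, which is the sense in which moral uncertainty can push towards diversification.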
Support and Outreach
In a talk at The Philosophical Foundations of Effective Altruism conference, Snowden (2016) argues that the signalling effect is a reason for donating to multiple charities: it creates more opportunities to talk about one's giving, and more of the causes will be close to or interesting to others, helping to get them involved and spread the ideas of effective altruism.
Another reason is to show support for organizations which, while not currently the most cost-effective in expectation, still operate according to effective altruist values, such as being transparent or working on interventions with good evidence of effectiveness, or in neglected areas. This could be an extra consideration for people with a greater ability to inspire and encourage others to give, as noted by Neil Bowerman in a forum post on MacAskill’s decision for where to donate. Some GiveWell staff also split their donations to show support.
In addition to the overall amount of funding charities receive, they also care about how predictable their funding sources are (Seaman et al. 2010): an unstable situation causes more uncertainty and worry in decision making, whereas stable revenue allows them to plan ahead, grow, and take up promising new opportunities (GiveWell 2011), which could make them even more cost-effective. Kuhn (2014) and Tomasik (2013) argue that this counts against effective altruists giving to single, especially small, charities; the former writes:
Organizations value some other things besides having money in the bank, like having predictable sources of funding in the future as well. And that’s a red mark for the single-donation strategy, because it makes your donations far less predictable. A single swap in your charity rankings (between first and second place) could redirect your entire donation stream. If you’re giving to multiple organizations, on the other hand, your donation streams are much less sensitive to small changes in ranking.
This applies more strongly to smaller organizations: as mentioned above, UNICEF would barely blink at $200,000 more or less of funding, but it might be the difference between life and death for a smaller organization. Some organizations, like GiveWell, have gone so far as to pass up large funding from a single source because of this worry:
At the same time, both we and Good Ventures agree that it would be a bad idea for GiveWell to draw all – or too great a proportion – of its support from Good Ventures.
One reason for this is that it would put GiveWell in an overly precarious position. While our interests are currently aligned, it is important to both parties that we would be able to go our separate ways in the case of a strong enough disagreement. If Good Ventures provided too high a proportion of support to GiveWell, the consequences of a split could become enormous for us, because we wouldn’t have a realistic way of dealing with losing Good Ventures’s support without significant disruption and downsizing. That would, in turn, put us in a position such that it would be very difficult to maintain our independence.
Kuhn (2014) writes that it may be worthwhile donating to a single charity because of the overhead in giving to many:
Being a large donor for an organization takes overhead: you need to follow it more closely, update your opinion of it more frequently, figure out how your money was used, schedule and itemize your donations, and so on. If you only donate to one organization, you minimize these costs, freeing you up to do other effective things.
This is plausible, but organizations such as GiveWell, Giving What We Can, and Animal Charity Evaluators (along with advice from people in the effective altruism community in general) make charity recommendations for small donors while laying out their reasoning fairly concisely, and could advise larger donors on where to give, significantly reducing this overhead.
How does the size of a grant affect the question of whether donations should be diversified?
"Room for more funding" is key to effective altruism: it is not enough for an intervention to be cost-effective to be worth giving to; we must also know the marginal impact of our donation: how much good will it, in particular, achieve? The idea of diminishing returns is that, at some point, each extra dollar does less and less good. This is generally not an issue for small donors, who are unlikely to hit these limits and could be warned in time (e.g. by GiveWell) to change their minds, but it can be a problem for larger donors, and a strong reason for diversification. For example, a deworming charity might have secured government agreements to distribute $X worth of pills, but be struggling to negotiate agreements to give out any more than that.
This happened in 2013, when the Against Malaria Foundation (AMF) was temporarily removed from GiveWell’s top charities list:
Since naming the Against Malaria Foundation (AMF) as our #1 charity in late 2011, we have tracked $10.6 million in donations made to it as a result of our recommendation. In that time, AMF has held the funds while attempting to negotiate a net distribution to spend them on. It has not yet finalized a distribution large enough to spend the bulk of these funds (though it has funded a smaller-scale (~$1 million) distribution in Malawi).
We have been following its negotiations, and this post discusses why AMF has been unable to finalize a sufficiently large distribution. At this time, we plan not to recommend more donations to AMF until and unless it commits the bulk of its current funds to net distributions. Note that this decision does not reflect a negative view of AMF, but rather reflects room for more funding related issues.
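The diminishing-returns logic above can be sketched with a hypothetical concave impact curve. The square-root curve, the funding level, and the grant sizes are all invented for illustration:

```python
import math

# Hypothetical concave "impact curve": total good produced as a function
# of total funding, exhibiting diminishing returns (numbers invented).
def impact(dollars):
    return 1000 * math.sqrt(dollars)

EXISTING_FUNDING = 1_000_000  # what the charity already has

def marginal_impact(donation, base=EXISTING_FUNDING):
    """Extra good produced by adding `donation` on top of `base` funding."""
    return impact(base + donation) - impact(base)

# A small donor barely moves along the curve, so splitting gains nothing:
small_concentrated = marginal_impact(1_000)
small_split = 2 * marginal_impact(500)   # two identical charities

# A large grant pushes well into diminishing returns, so splitting it
# across two identical charities does more good:
large = 10_000_000
large_concentrated = marginal_impact(large)
large_split = 2 * marginal_impact(large / 2)
print(large_concentrated < large_split)  # True: splitting wins at scale
```

The point of the sketch is that curvature only matters once a donation is large relative to a charity's funding gap, which is why diminishing returns bind for foundations but rarely for individual donors.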
Donor Coordination and Incentives
The problem of Alice and Bob in Coordination is especially relevant when there is a large donor in play with the ability to fill any funding gap. If Alice is a small individual donor and Bob is making decisions on behalf of a large grant-making foundation, Alice is incentivized to wait and see how much Bob donates to her chosen charity before giving herself, since Bob may fill the funding gap or push the charity into diminishing returns. Bob, on the other hand, will want to wait for Alice and other small donors to give first, so that he can make a smaller grant and use the remaining money for his second-choice organization. In that case, though, the smaller donors are essentially transferring their money to Bob so that he can grant it out elsewhere; since Alice and the others may not share Bob's next top charity, they may lose the desire to give to charities Bob is also interested in funding at all. If major donors are always willing to fill every funding gap in effective altruism, people might lose interest in the movement, so this is a reason big donors may want to diversify. GiveWell has written about this problem when recommending grants to the foundation Good Ventures:
As we’ve written before, trying to anticipate and adjust to other givers’ behavior can lead to thorny-seeming dilemmas. We do not want to be in the habit of – or gain a reputation for – recommending that Good Ventures fill the entire funding gap of every strong giving opportunity we see. In the long run, we feel this would create incentives for other donors to avoid the causes and grants we’re interested in; this, in turn, could lead to a much lower-than-optimal amount of total donor interest in the things we find most promising.
Encouraging other donors to help support the causes and organizations we’re interested in – and ensuring that they have genuine incentives to do so – will sometimes directly contradict the goal of fully funding the best giving opportunities we see. Thinking about GiveWell’s top charities provides a vivid example. If we recommended that Good Ventures fully fund each of our top charities, GiveWell would no longer recommend these charities to individual donors. In the short run, this could mean forgoing tens of millions of dollars of potential support for these charities from individuals (this is how much we project individuals will give to our top charities this year). In the long run, the costs could be much greater: we believe that individual-donor-based support of GiveWell’s top charities has the ability to grow greatly. A major donor who simply funded top charities to capacity would be – in our view – acting very suboptimally, putting in a much greater share of the funding than ought to be necessary over the long run.
This worry may be less important if Alice and Bob each know the other is an effective altruist (especially if they are most interested in the same areas), and so have similar second-choice charities:
Being the donor of last resort is also not as valuable as it first looks once you’re part of a community. If you give to an organisation and use up its room for more funding, then you free up another donor to go donate somewhere else. If that person shares your values and has reasonable judgement, then they’ll donate somewhere else that’s pretty good. This is especially true if you are modest about your own judgement. If someone else in your community who is smart thinks something is a good donation opportunity, then you should assign a reasonable probability to them being right.
GiveWell (Karnofsky 2012) has said that directing donations to multiple top charities improves its access to them, so that it can learn from and track promising charities:
A more subtle version of this idea pertains to learning opportunities. In a sense GiveWell is like a “large donor” with a few million dollars of anticipated money moved. If we direct major funding to more than one charity, we will have improved access to each such charity and will have improved opportunities to track its progress and learn from it. In addition, though we don’t anticipate moving enough money to overwhelm any of the three charities’ room for more funding, there is an argument that each marginal dollar means less to the charity in terms of improving its prominence, ability to experiment and plan, probability of turning out not to be able to scale further, etc.
Since it is hard to put explicit probabilities on the success and magnitude of different causes and interventions, especially perhaps ones with small probabilities of large payoffs, the Open Philanthropy Project has also argued for a "hits-based giving" approach, funding a large number of high-risk, high-reward organizations:
One of our core values is our tolerance for philanthropic “risk.” Our overarching goal is to do as much good as we can, and as part of that, we’re open to supporting work that has a high risk of failing to accomplish its goals. We’re even open to supporting work that is more than 90% likely to fail, as long as the overall expected value is high enough.
And we suspect that, in fact, much of the best philanthropy is likely to fail. We suspect that high-risk, high-reward philanthropy could be described as a “hits business,” where a small number of enormous successes account for a large share of the total impact — and compensate for a large number of failed projects.
This is a similar perspective to that of startup investors such as Y Combinator, which each year funds a large batch of startups, expecting most of the returns and impact to come from a very small portion of them, such as Airbnb, Dropbox, or Stripe, while enabling it to learn from many at once. Paul Graham often notes how startups are counterintuitive: since the best-looking ideas are already funded (akin to the low-hanging fruit in charity, and the reason effective altruists like to focus on neglected areas), the best investments might be in ideas that initially seem bad. (This might mean that the best charities do not seem the most cost-effective to most people making an explicit or rough expected-value estimate, and that it might be difficult to convince others of one's intuitions about why to fund the cause, although this does not necessarily affect whether to diversify.) The Y Combinator acceptance rate is under 3% (85 of around 3,000 applicants in 2014), which might suggest that past a certain point it is difficult to make judgments about investments, and that it is worth taking a gamble on many to see the most returns. There is plausibly an analogy with charity here, although it is imperfect: most charities need a much greater amount of funding, and funding many companies to protect against risk makes more sense in investing than in donating, as discussed in the section on diversification bias.
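The "hits business" arithmetic can be sketched as follows. The success probability, payoff, and safe-option baseline are invented for illustration:

```python
import random

# Toy "hits business" portfolio (numbers invented): each risky grant has
# a 90% chance of achieving nothing and a 10% chance of a large payoff.
P_SUCCESS = 0.10
PAYOFF = 100    # impact units if the grant "hits"
SAFE = 5        # impact units a safe grant would reliably produce

# Expected value per risky grant still beats the safe option,
# even though any individual grant will probably fail:
ev_risky = P_SUCCESS * PAYOFF
print(ev_risky > SAFE)  # True: worth funding despite 90% failure

# Simulate a portfolio of many such grants: a handful of hits accounts
# for all of the realized impact.
random.seed(0)
grants = [PAYOFF if random.random() < P_SUCCESS else 0 for _ in range(1000)]
hits = sum(1 for g in grants if g > 0)
print(hits, sum(grants))  # few hits, but they carry the whole portfolio
```

The simulation illustrates why a hits-based funder diversifies across many grants: any single grant is probably a failure, but the portfolio's realized impact concentrates in a small number of successes and reliably exceeds the safe alternative.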
For large donors such as foundations, diminishing returns make diversifying almost a necessity. This seems especially true for effective altruism-aligned donors, who are concerned with growing the community and spreading EA values, because of the incentive problem: it may be worth filling the funding gaps of the most pressing interventions while leaving some room for more funding at other top charities for small donors to fill. Diversifying may also give foundations the opportunity to learn more, and to influence more charities, and the philanthropy sector in general, to adopt EA ideas like evidence, transparency, and cost-effectiveness.
The case for smaller donors donating to a single charity seems much more convincing. The EA community is diverse: despite using the same framework, people have a range of comparative advantages that make them better suited to working in different areas, and they hold differing opinions on the most important causes. This makes reasons for diversification such as coordination, moral pluralism, and simplicity (since people can communicate their ideas on cost-effectiveness) less of an issue.
That said, the predictability of a charity's funding does seem very important if EAs want to incentivize charities to work with EA, at least while the movement is small and growing and one does not expect its ideas to naturally seep into philanthropy in general. There is also an argument that much of an individual's impact can come from talking about EA with others, which diversifying donations may help with; this is currently unclear, and as long as EAs can point to multiple people within the community who donate to different charities and work on different causes, it is perhaps less of a problem. There may also be a risk of diluting EA if people donate to charities which are much less cost-effective than the best, since the ideal is to maximize impact.
Finally, there are many heuristics and biases which pull people towards diversifying, and on the whole these seem like things for effective altruists to work to minimize: if one charity is a lot more effective than another, EAs should usually not be swayed by "warm glows", neglect of scope, or variety-seeking without good reason. If an EA worries that a donation which turns out to be ineffective in hindsight (say, because an intervention is suddenly "debunked") might reduce their long-run impact, for example through a reduction in future donations, this may be a reason to diversify among charities sufficiently close in estimated cost-effectiveness.
Overall, the case for most small donors giving to one charity seems quite strong: they should usually only diversify when charities are sufficiently close in impact (which may happen fairly often among EA charities, since many are very strong) for other considerations to outweigh the difference. More research, especially into the effects on outreach and into coordination as a community and with charities, would help in weighing the case.
Jamison, Dean T., et al. (2006), eds. Disease control priorities in developing countries. World Bank Publications.
Landsburg, Steven E. (1997). Giving Your All. Slate, January 11.
Snowden, J. (2015). Does risk aversion give an agent with purely altruistic preferences a good reason to donate to multiple charities? In submission.
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. The journal of economic perspectives, 5(1), 193-206.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica: Journal of the econometric society, 263-291.
Tversky, A., & Kahneman, D. (1991). Loss aversion in riskless choice: A reference-dependent model. The quarterly journal of economics, 1039-1061.
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of political Economy, 1325-1348.
Kahneman, D., & Tversky, A. (2000). Choices, values, and frames. Cambridge University Press.
Desvousges, W. H., Johnson, F. R., Dunford, R. W., Boyle, K. J., Hudson, S. P., & Wilson, K. N. (1992). Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy. Research Triangle Institute Monograph 92-1.
Carson, R. T., & Mitchell, R. C. (1995). Sequencing and nesting in contingent valuation surveys. Journal of environmental economics and Management, 28(2), 155-173.
Baron, J., & Greene, J. (1996). Determinants of insensitivity to quantity in valuation of public goods: Contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2(2), 107.
Kahneman, D., Ritov, I., Schkade, D., Sherman, S. J., & Varian, H. R. (1999). Economic preferences or attitude expressions?: An analysis of dollar responses to public issues. In Elicitation of Preferences (pp. 203-242). Springer Netherlands.
Kahneman, D., & Knetsch, J. L. (1992). Valuing public goods: the purchase of moral satisfaction. Journal of environmental economics and management, 22(1), 57-70.
Baron, J., & Szymanska, E. (2011). Heuristics and biases in charity. The science of giving: Experimental approaches to the study of charity, 215-235.
Kuhn, B. (2014). How many causes should you give to? http://www.benkuhn.net/how-many-causes.
Harrison, G. W. (1992). Valuing public goods with the contingent valuation method: a critique of Kahneman and Knetsch. Journal of environmental economics and management, 23(3), 248-257.
Yudkowsky, E. (2009). Purchase Fuzzies and Utilons Separately. Less Wrong. http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/.
Alexander, S. (2013). Investment and Inefficient Charity. Slate Star Codex. http://slatestarcodex.com/2013/04/05/investment-and-inefficient-charity/.
Read, D., Antonides, G., Van den Ouden, L., & Trienekens, H. (2001). Which is better: simultaneous or sequential choice?. Organizational behavior and human decision processes, 84(1), 54-70.
Read, D., & Loewenstein, G. (1995). Diversification bias: Explaining the discrepancy in variety seeking between combined and separated choices. Journal of Experimental Psychology: Applied, 1(1), 34.
Ubel, P. A., DeKay, M. L., Baron, J., & Asch, D. A. (1996). Cost-effectiveness analysis in a setting of budget constraints—is it equitable?. New England Journal of Medicine, 334(18), 1174-1177.
Ubel, P. A., DeKay, M., Baron, J., & Asch, D. A. (1996). Public preferences for efficiency and racial equity in kidney transplant allocation decisions. In Transplantation proceedings (Vol. 28, No. 5, pp. 2997-3002).
Harbaugh, W. T. (1998). What do donations buy?: A model of philanthropy based on prestige and warm glow. Journal of Public Economics, 67(2), 269-284.
Glazer, A., & Konrad, K. A. (1996). A signaling explanation for charity. The American Economic Review, 86(4), 1019-1028.
Izuma, K., Saito, D. N., & Sadato, N. (2010). Processing of the incentive for social approval in the ventral striatum during charitable donation. Journal of Cognitive Neuroscience, 22(4), 621-631.
Fehrler, S. (2010). Charity as a Signal of Trustworthiness. IZA Discussion Paper No. 5299.
Reyniers, D., & Bhalla, R. (2013). Reluctant altruism and peer pressure in charitable giving. Judgment and Decision Making, 8(1), 7.
Kaufman, J. (2013). Give to multiple charities. https://www.jefftk.com/p/give-to-multiple-charities.
Karnofsky, H. (2011). Denying the choice. GiveWell. http://blog.givewell.org/2009/11/19/denying-the-choice/.
Tomasik, B. (2013). When Should Altruists Be Financially Risk-Averse? http://reducing-suffering.org/when-should-altruists-be-financially-risk-averse
Seaman, B. A., & Young, D. R. (Eds.). (2010). Chapter 1: Income Diversification. Handbook of research on nonprofit economics and management. Edward Elgar Publishing.
GiveWell (2011). GiveWell’s “excess assets” policy. http://www.givewell.org/about/official-records/Excess-Assets-Policy
Hoskin, B. (2013). Should you only donate to one charity? Giving What We Can. https://www.givingwhatwecan.org/post/2013/11/should-you-only-donate-to-one-charity/
Karnofsky, H. (2012). Our top charities for the 2012 giving season. GiveWell. http://blog.givewell.org/2012/11/26/our-top-charities-for-the-2012-giving-season/
Network for Good (2015). 2015 Online Giving Trends. http://www.networkforgood.com/digitalgivingindex/2015-online-giving-trends/.
Karnofsky, H. (2014). Donor coordination and the “giver’s dilemma” http://blog.givewell.org/2014/12/02/donor-coordination-and-the-givers-dilemma/.
Karnofsky, H. (2013). Change in Against Malaria Foundation Recommendation Status (room-for-more-funding-related). GiveWell. http://blog.givewell.org/2013/11/26/change-in-against-malaria-foundation-recommendation-status-room-for-more-funding-related/.
Fox, C. R., Ratner, R. K., & Lieb, D. S. (2005). How subjective grouping of options influences choice and allocation: diversification bias and the phenomenon of partition dependence. Journal of Experimental Psychology: General, 134(4), 538.
Karnofsky, H. (2015). Giving Now vs. Later. Good Ventures. http://www.goodventures.org/research-and-ideas/blog/giving-now-vs-later.
Karnofsky, H. (2015). Hits-based giving. Open Philanthropy Project. http://www.openphilanthropy.org/blog/hits-based-giving.
Graham, P. (2012). Black Swan Farming. http://paulgraham.com/swan.html.
Bostrom, N. (2009). Moral uncertainty – towards a solution? http://www.overcomingbias.com/2009/01/moral-uncertainty-towards-a-solution.html.
MacAskill, W. (2014). Normative Uncertainty. Doctoral dissertation, University of Oxford.
Todd, B. (2016). The Value of Coordination. 80,000 Hours. https://80000hours.org/2016/02/the-value-of-coordination.
Tomasik, B. (2013). Charity Cost-Effectiveness in an Uncertain World. https://foundational-research.org/charity-cost-effectiveness-in-an-uncertain-world.
Snowden, J. (2016). Talk: Does risk aversion give an agent with purely altruistic preferences a good reason to donate to multiple charities? The Philosophical Foundations of Effective Altruism, University of St. Andrews. http://ceppa.wp.st-andrews.ac.uk/research-projects/effective-altruism/effective-altruism-conference-presentations/.
Karnofsky, H. et al. (2015). Staff members’ personal donations for giving season 2015. GiveWell. http://blog.givewell.org/2015/12/09/staff-members-personal-donations-for-giving-season-2015/.
MacAskill, W. (2013). Where I’m giving and why: Will MacAskill. http://effective-altruism.com/ea/5m/where_im_giving_and_why_will_macaskill/.