9 “Writing the paper”: The Impact of Impact Information on Donor Behavior: A Multi-Experiment Synthesis
While hundreds of billions of dollars are donated to charity each year, the effectiveness of these charities differs by orders of magnitude, even within similar categories. Furthermore, many individuals do not donate substantially even though they believe that the cost of saving a life is small.
For people to systematically choose the most efficient charities, they must be aware of these differences in effectiveness, suggesting a role for advertising and publicity. The EA movement and organisations like GiveWell and Impact Matters/Charity Navigator are increasingly presenting effectiveness measures in larger and more mainstream settings. However, we have limited evidence (which we survey and meta-analyze below, in conjunction with the present experiment and a subsequent experiment) on how potential donors react to quantitative measures of charitable impact.
We present (*) evidence from our own recent field experiment with a US-based international relief charity. In each of two annual Thanksgiving email solicitations (2018: N = 182,594; 2019: N = 79,754), a randomly-selected half of all recipients were presented with the 'treatment': measures of impact per dollar. Our results suggest the 'unrealistic' impact presentations (used in 2018) increased the propensity to give in direct response to the email (95% Bayesian credible intervals: approximately +50% to +150% of the mean incidence rate of 5.4 per 1000). More realistic impact information (used in 2019) seems to have had a negative or near-zero effect on donation incidence (credible intervals: -80% to +40%).
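A credible interval for a relative treatment effect on donation incidence can be obtained from a simple Beta-Binomial model. The sketch below uses hypothetical, illustrative counts (not the experiment's actual data) and an uninformative Beta(1, 1) prior; it is one minimal way to produce intervals of the kind reported above, not necessarily the authors' actual estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustrative counts (NOT the experiment's actual data):
# donors (k) out of recipients (n) in each arm of a split-sample email test.
n_treat, k_treat = 91_297, 620   # treatment arm
n_ctrl,  k_ctrl  = 91_297, 370   # control arm

# With a Beta(1, 1) prior, each arm's posterior incidence rate is
# Beta(k + 1, n - k + 1); we draw from both posteriors via Monte Carlo.
draws = 100_000
p_treat = rng.beta(k_treat + 1, n_treat - k_treat + 1, draws)
p_ctrl = rng.beta(k_ctrl + 1, n_ctrl - k_ctrl + 1, draws)

# Treatment effect expressed relative to the control incidence rate,
# i.e., "+50%" means a 50% higher propensity to give than control.
rel_effect = (p_treat - p_ctrl) / p_ctrl
lo, hi = np.percentile(rel_effect, [2.5, 97.5])
print(f"95% credible interval for relative effect: {lo:+.0%} to {hi:+.0%}")
```

Because the model is conjugate, the Monte Carlo step could be replaced with closed-form Beta quantiles for each arm, but sampling makes the derived quantity (the relative difference) straightforward to summarize.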
* We will probably publish a specific write-up of the evidence from this particular experiment. It is reasonable to use journal peer review to establish the credibility of each piece of evidence on its own (although we would prefer that other systems existed).
We next present (*) evidence from a second field experiment, involving a large postal solicitation at a major Swiss charity. Again, these results suggest, at most, a small effect of quantitative impact-per-cost information.
We bring together our own results with previous comparable work in a meta-analysis. (At the moment the 'intake' part of the meta-analysis is informal and not a registered Cochrane/PRISMA meta-analysis, but we plan to do this systematically.) In particular, Karlan and Wood (2017)'s field experiment and the final experiment in Bergh and Reinstein (2020) offer plausibly comparable evidence.
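One standard way to combine such experiments is inverse-variance pooling of effect estimates on a common scale (e.g., log odds ratios for donation incidence). The sketch below uses placeholder effect sizes and standard errors, not the actual study results, and shows only the fixed-effect version; a full analysis would also estimate between-study heterogeneity (e.g., a DerSimonian-Laird random-effects model).

```python
import math

# Hypothetical log-odds-ratio estimates and standard errors for four
# comparable experiments (placeholder numbers, not actual results):
studies = {
    "2018 email (unrealistic impact)": (0.65, 0.20),
    "2019 email (realistic impact)": (-0.15, 0.30),
    "Swiss postal solicitation": (0.05, 0.10),
    "Karlan & Wood (2017)": (0.00, 0.12),
}

# Fixed-effect inverse-variance pooling: weight each estimate by 1/SE^2,
# so more precise (larger) studies count for more.
weights = {k: 1 / se**2 for k, (_, se) in studies.items()}
pooled = sum(w * studies[k][0] for k, w in weights.items()) / sum(weights.values())
pooled_se = math.sqrt(1 / sum(weights.values()))
print(f"pooled log-OR: {pooled:+.3f} (SE {pooled_se:.3f})")
```

Note how the large, precisely estimated near-zero studies pull the pooled estimate toward zero even when one smaller study shows a sizable positive effect, which is exactly the pattern at issue when synthesizing these experiments.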
Focusing on ‘impact information’ here, rather than on the suggested donation amounts (at least initially)↩︎
The ‘absolute rate’ interpretation is not easily defined in smoothed density plots; however, the relative rates are meaningful. The fact that these rates include zeroes allows us to interpret relative comparisons (in the second graph) without confounding ‘conditional-on-positive’ selection issues.↩︎
Reductio: consider a study with only two participants, one treated and one control.↩︎