2 Previous experimental evidence (ours and others)

2.1 Overview

In this section we will outline and review the ‘previous’ work that is most relevant to our question.*

* By ‘previous’ we mean work that has been published or shared by others, work that is not analyzed here for the first time. This largely parallels the evidence on the ‘Effect of analytical (effectiveness) information on generosity’ presented in the ‘Increasing effective charitable giving’ synthesis.

What is the impact of being presented with (or actively pursuing) effectiveness information (which is naturally analytical information) on generosity and the willingness to donate?

We consider a range of published work (from a non-systematised literature review), including laboratory and hypothetical experiments and non-experimental analyses of related questions (such as the impact of general charity ratings). However, we focus on field and ‘field-like’ experiments involving truthful, analytical, ‘impact-per-dollar’-relevant information in settings involving substantial donations. In particular, this includes Karlan and Wood (2017) and Bergh and Reinstein (final experiment).

The experimental evidence is limited, mixed, and overall indeterminate. We could find only a single large-scale field trial bearing closely on ‘impact-per-dollar’ information. Karlan and Wood (2017) ran a field experiment on a charitable mailing campaign for Freedom From Hunger, reporting mixed and somewhat underpowered results (null overall, positive for some subgroups, negative for others). Small, Loewenstein, and Slovic (2007) (laboratory experiments with real donations) find that giving to an identifiable victim is reduced when statistics are also presented, and that “priming analytic thinking reduced donations to an identifiable victim relative to a feeling-based thinking prime.” Bergh and Reinstein’s (2020) work (MTurk charitable giving, plus one small-scale field-ish context) mainly finds a near-zero effect (e.g., in the pooled analysis across all six experiments; see table 1, right column, table 2, bottom, and appendix page 51).

Further related evidence is also indeterminate (see fold).

Trussel and Parsons (2007) find a positive impact of providing a particular type of analytical charity financial information on a subset of donors.

There is also mixed evidence on the impact of real-world charity ratings (Yörük 2016; Brown ea 2017; Gordon ea 2019), and mixed evidence on ‘excuse-driven information-seeking’ (Exley 2016b; Fong and Oberholzer-Gee 2010; Metzger and Günther 2019).

There is some further related evidence from lab experiments, but the results are limited. Fong and Oberholzer-Gee (2010) apparently find that exogenous information about recipients increases donations, but they do not report this specifically. There is some speculation, but again mixed evidence, that individuals already in a “system 2” (deliberative) frame are more likely to be positively affected by impact information. There is also a distinction to be further explored between “output information” (how the donation is used) and “impact information”; the former is seen to increase generosity in several studies. [Reference needed here]

We defer the presentation of our ‘new evidence’ (especially from the Donors’ Voice and ICRC experiments), to the following sections.

This will largely overlap with, and be integrated into, the discussion of the ‘Effect of analytical (effectiveness) information on generosity’ in the EA Barriers chapter.

2.2 Field evidence: “Effectiveness” info: Karlan and Wood (2017)

Karlan and Wood (2017) run a field experiment on a charitable mailing involving scientific impact information. Their treatment added scientific-impact text to an appeal while removing some emotional text:

Effectiveness treatment language:

According to studies on our programs in Peru that used rigorous scientific methodologies, women who have received both loans and business education saw their profits grow, even when compared to women who just received loans for their businesses. But the real difference comes when times are slow. The study showed that women in Freedom from Hunger’s Credit with Education program kept their profits strong–ensuring that their families would not suffer, but thrive.

Control (emotional) treatment language:

Because of caring people like you, Freedom from Hunger was able to offer Sebastiana a self-help path toward achieving her dream of getting “a little land to farm” and pass down to her children. As Sebastiana’s young son, Aurelio, runs up to hug her, she says, “I do whatever I can for my children.”

They find a null effect overall with fairly tight bounds. They report a near-zero impact on incidence (table 2, column 1), with a standard error under 1%, relative to a baseline incidence of 14%. They estimate an impact on the amount donated of +2.35 USD, with a standard error of 1.98, relative to a mean of 14.17 USD. (In their Winsorised results the point estimate is -0.074, the standard error is 0.82, and the mean is 11.30 USD.)
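
As a rough illustration of what these estimates imply (our own arithmetic, not reported this way in the paper), the point estimates and standard errors on amounts translate into approximate 95% confidence intervals and effects relative to the reported means as follows:

```python
# Rough 95% confidence intervals implied by the estimates KW report
# (illustrative arithmetic only; see their Table 2 for the exact specifications).

def approx_ci(point, se, z=1.96):
    """Return an approximate 95% confidence interval as (lower, upper)."""
    return (point - z * se, point + z * se)

estimates = {
    # label: (point estimate, standard error, reported mean)
    "amount (USD)":             (2.35,   1.98, 14.17),
    "amount, Winsorised (USD)": (-0.074, 0.82, 11.30),
}

for label, (point, se, mean) in estimates.items():
    lo, hi = approx_ci(point, se)
    print(f"{label}: {point:+.2f} [{lo:+.2f}, {hi:+.2f}], "
          f"roughly {100 * point / mean:+.1f}% of the mean of {mean:.2f}")
```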

When they differentiate this by an ex-post classification (which was not preregistered), they find positive effects of this information for ‘prior large donors’ and negative effects for ‘prior small donors’.

This is probably the most substantial and relevant study; we consider it further in our meta-analysis below.

Nonetheless, the study presents only a single piece of evidence from a specific context, and it has some potential limitations with respect to our main question. In particular:

  1. Their treatment changed two things at once: impact information was added while emotional information was removed

  2. It may have been entangled with a Yale/institution effect

  3. The ‘impact’ information was not particularly quantitative; it did not express an ‘impact per dollar’

2.3 Cost-Effectiveness, field-ish setting: Bergh/Reinstein (Essex piggyback)

Study 6 results, summarized:

  • Limited power to detect differences in amounts donated or incidence by treatment

  • The image reduced donations (incidence and amounts) to Guide Dogs (a fairly strong and statistically significant effect)

  • (And ‘increased’ donations to River Blindness, though not statistically significantly)

2.3.1 Study 6 setup

  • Connected to EssexLab 2019 Omnibus online survey

    • \(\approx\) 20 minutes, many psycho/demog/polit/econ questions, mostly unrelated to charity
  • Completion \(\rightarrow\) raffle for 1 of 20 Amazon vouchers worth £50 each

  • Given information about blindness in general

  • (Conditional) donation part

Advantages over S1-5 (MTurk): not a ‘donation experiment,’ real impact measures, distinct emotional stimuli, a choice between distinct causes, larger amounts, and ties to rich Omnibus survey data

Note these are the later/non-British-nationality participants in the Omnibus.

The chances of winning were unknown to participants but could be inferred to be of a magnitude between roughly 1/10 and 1/300; ex post, they were about 1/20.
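
As a rough back-of-the-envelope check (our own arithmetic, not stated in the study materials), with 20 vouchers and \(N\) eligible completers, each participant’s chance of winning is roughly

\[
P(\text{win}) \approx \frac{20}{N},
\]

so the inferred range of 1/10 to 1/300 corresponds to roughly \(N\) between 200 and 6,000 completers, and an ex-post chance of about 1/20 corresponds to \(N \approx 400\).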


The ‘last’ Omnibus questions appear just before this donation section; some of these questions are conditionally included based on the earlier ones, for obvious reasons.

2 \(\times\) 2 balanced blocked randomisation, between-subject:

  1. Analytic information about ‘cost per outcome’ and ‘cost per impact’ (Yes/No)

  2. Empathy-inducing image: a picture of a blind girl (Yes/No)

We also block-randomise the order in which the charities are presented; this needs to be examined or controlled for. Blocking also balances the treatments across previous, unrelated variation (‘Pat’s treatment’). A sketch of how such an assignment could be generated is given below.
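
A minimal sketch of how such a balanced 2 \(\times\) 2 assignment with a counterbalanced charity order could be generated (our own illustration; the function and variable names are hypothetical, not the actual EssexLab implementation):

```python
import itertools
import random

def assign_balanced(participant_ids, seed=42):
    """Assign each participant one of the 8 cells
    (info yes/no x image yes/no x charity order), balancing cell counts."""
    cells = list(itertools.product([False, True],   # cost-per-impact info
                                   [False, True],   # empathy-inducing image
                                   ["GuideDogs-first", "RiverBlindness-first"]))
    rng = random.Random(seed)
    assignments = {}
    # Shuffle within successive blocks of 8 so every complete block contains each cell once
    for block_start in range(0, len(participant_ids), len(cells)):
        block = participant_ids[block_start:block_start + len(cells)]
        shuffled_cells = cells.copy()
        rng.shuffle(shuffled_cells)
        for pid, (info, image, order) in zip(block, shuffled_cells):
            assignments[pid] = {"info": info, "image": image, "order": order}
    return assignments

print(assign_balanced(list(range(16)))[0])
```

Assigning complete blocks of eight in this way keeps the treatment cells and the charity-order counterbalancing (near-)exactly balanced as the sample grows.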


The survey screens, in order:

  • Introduction screen (all participants)

  • Image treatment (half of participants)

  • Control: description and choice (half of subjects); half of these have the charity order reversed

  • Info-treatment: description and choice (half of subjects); half of these have the charity order reversed

  • Donation amount choices (slider)

2.4 Laboratory and hypothetical giving work

2.4.1 Small, Loewenstein, and Slovic (2007)

These are studies involving charitable giving in a laboratory setting (‘lab experiments’). Their third and fourth studies are the most relevant:

S3: In their third study, the “identifiable victim” treatment leads to greater generosity than their treatment presenting donation statistics.

S4: In the fourth study, priming analytic thinking (using math) leads to less giving to identifiable victims than priming feelings (‘impressions’).

2.5 Effectiveness (and other) information crossed with emotional information: Bergh (and Reinstein)

Context

  • MTurk, American participants, HIT approval rating 98%+, 100 HITs or fewer

\(\rightarrow\) final sample sizes: 398, 614, 611, 608, and 433 in Studies 1-5 respectively (variation tracking design complexity)

Some exclusions based on attention checks

The percentage of women varied between 57% and 60%; the median age across all studies was 29-30 years (SDs from 9.55 to 10.64).


Payments

  • 1.50 USD (2 USD in S4) baseline
  • Bonuses: 3 USD in S1-S2, 5 USD in S5
  • Raffle: 50 USD (1:25 odds) in S3-S4


Donation asks (from bonus) & treatments

  • 1 charity (or 2 within the same category); Syria relief, Polio
  • (Between-subject) Image &/or information
  • Commit (& choose), select amount; vary ordering

Baseline pay is 2 USD in Study 4 because it is longer.

Bergh/Reinstein setups: the design details for Studies 1-3 and Study 4 (figures not reproduced here).

Pooling experiments 1-6 (1-5 are M-Turk, 6 is a ‘field-like’ experiment)

Overall:

  • Strong positive effect of images on donation incidence and amounts (at both the extensive and intensive margins; see the regressions; OR CI [1.14, 1.60])

  • Little impact of effectiveness information on donation incidence or amounts: a “fairly tightly-bounded null”

The ‘meta-analysis’ here is simple pooling (fixed effects); we aim to improve on this in later sections. A sketch of this sort of pooled specification is given below.
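
To make ‘simple pooling (fixed effects)’ concrete, here is a minimal sketch of one natural reading: a single logit on the stacked data from all six experiments, with study dummies, where exponentiated coefficients give odds ratios. The data below are simulated placeholders (not the actual Bergh/Reinstein data), and this is an illustration rather than their exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data in the same shape as the pooled analysis
# (binary donation indicator, image and info treatment dummies, study ID).
rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "study": rng.integers(1, 7, n),   # studies 1-6
    "image": rng.integers(0, 2, n),
    "info":  rng.integers(0, 2, n),
})
latent = -1.0 + 0.3 * df["image"] + 0.0 * df["info"] + 0.1 * df["study"]
df["donated"] = (rng.random(n) < 1 / (1 + np.exp(-latent))).astype(int)

# 'Simple pooling with fixed effects': one logit on the stacked data
# with study dummies; exponentiated coefficients are odds ratios.
model = smf.logit("donated ~ image + info + C(study)", data=df).fit(disp=0)
print(np.exp(model.params[["image", "info"]]))
print(np.exp(model.conf_int().loc[["image", "info"]]))
```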

See also (hypothetical experiments)

  • Caviola, Schubert and Nemirow, 2020
  • Mulesky, 2020: realistic and unrealistic effectiveness information

(Discussed in the barriers book.)

2.6 Indirectly-relevant work

2.6.1 Analytical ‘overhead’ (etc) information: Parsons (2007)

2 \(\times\) 2 mailing appeal for the People with AIDS Coalition-Houston:

  • Adding “service efforts and accomplishments” (SEA) information
  • Adding favorable “FINANCIAL” spending/overhead-ratio information

FINANCIAL (alone) \(\rightarrow\) 180% increase in odds of donating among prior donors (\(p<0.05\))

(Other effects mainly insignificant, underpowered)
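
To interpret the ‘180% increase in odds’: this corresponds to an odds ratio of about 2.8, since

\[
\text{OR} = 1 + \frac{180}{100} = 2.8.
\]

For illustration only (our own numbers, not from the paper), baseline odds of 0.10 (a donation probability of roughly 9%) would rise to odds of 0.28 (a probability of roughly 22%).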

Caveats:

  • Unsure if it’s a logit or an LPM (the write-up is confusing)

  • Not effect-coded; no measure of the overall impact of FINANCIAL across both SEA treatments

  • Probably not preregistered

  • We’d like to see CIs

2.6.2 Information as an ‘excuse’ not to give; allows motivated reasoning

Exley (2016b): greater discounting of a ‘less-efficient’ charity in charity-charity decision-making than in charity-self decision-making.

Issues with Exley: experimenter demand (MTurk setting); not really ‘impact’ information.

Issues with Fong and Oberholzer-Gee: selection effects; in their tables, the exogenous provision of information seems to increase donations overall. Also, it is evidence on the deservingness of the recipients, not on the impact of the charity itself.


Fong and Oberholzer-Gee (2010)

“Dictators [charitable giving] who acquire information mostly use it to withhold resources from less-preferred types, leading to a drastic decline in aggregate transfers”

But see the caveats noted above (selection effects; exogenous provision of information seems to increase donations overall).


Metzger and Günther (2019)

Laboratory experiment, donations to real charities (high/low-performing NGOs)

  • More purchasing of ‘recipient type’ than ‘impact’ info

  • Mixed & weak evidence on excuse-driven information-seeking


Further results and caveats:

  • The opportunity to buy information on ‘recipient type’ increased giving, while the opportunity to buy information on ‘admin costs’ decreased it (marginal significance for both); there was no effect of ‘aid impact’ information, but with a wide CI

  • ‘Free’ information on each of these had insignificant effects (underpowered!)

  • There are many caveats; e.g., the recipient type (artists vs children) may have been seen as a proxy for impact

2.6.3 Ratings and information in general: mixed evidence

  • Yörük (2016, JEMS): regression discontinuity (RD) with Charity Navigator ratings; significant effects for ‘small’ charities only

  • Charity Navigator stars based on continuous score across categories (not EA criteria)

  • Identification via RD: the impact of crossing a ‘star’ threshold on the amounts raised. Results: a significant impact for small charities only, with a nearly 20% effect of a one-star increase (see the sketch after this list)

  • See also Brown ea (2017), Gordon ea (2009)
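
To make the identification strategy concrete, here is a minimal sketch of a local-linear RD estimate of the jump at a rating threshold, using simulated placeholder data (our own illustration; not Yörük’s specification, bandwidth choice, or data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated placeholder data: a continuous Charity Navigator-style score,
# a star upgrade at the cutoff, and log donations that jump at the cutoff.
rng = np.random.default_rng(1)
n = 2000
score = rng.uniform(-10, 10, n)        # running variable, centred at the star cutoff
above = (score >= 0).astype(int)       # crosses the 'star' threshold
log_donations = 5 + 0.02 * score + 0.18 * above + rng.normal(0, 0.5, n)
df = pd.DataFrame({"score": score, "above": above, "log_don": log_donations})

# Local-linear RD: restrict to a bandwidth around the cutoff and allow
# separate slopes on each side; the coefficient on 'above' is the jump.
bw = 5
local = df[df["score"].abs() <= bw]
rd = smf.ols("log_don ~ above + score + above:score", data=local).fit()
print(rd.params["above"], rd.conf_int().loc["above"].tolist())
```

In this simulated data the true jump is 0.18 log points (roughly 20%), chosen only to be in the same ballpark as the effect reported for small charities.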

References

Bergh, Robin, and David Reinstein. 2020. “Empathic and Numerate Giving: The Joint Effects of Victim Images and Charity Evaluations.” Social Psychological and Personality Science, 1948550619893968.
Fong, Christina, and Felix Oberholzer-Gee. 2010. “Truth in Giving: Experimental Evidence on the Welfare Effects of Informed Giving to the Poor.” Journal of Public Economics.
Karlan, Dean, and Daniel H. Wood. 2017. “The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.” Journal of Behavioral and Experimental Economics 66: 1–8. https://doi.org/10.1016/j.socec.2016.05.005.
Small, Deborah A, George Loewenstein, and Paul Slovic. 2007. “Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims.” Organizational Behavior and Human Decision Processes 102 (2): 143–53.
Trussel, John M, and Linda M Parsons. 2007. “Financial Reporting Factors Affecting Donations to Charitable Organizations.” Advances in Accounting 23: 263–85.