INTRODUCTION AND BACKGROUND
1 Introduction: Effective giving, responses to analytical ‘effectiveness information’
1.1 Definition of ‘effective giving’ for our purposes, motivation*
1.2 Raises questions
1.3 Barriers and the use of analytical effectiveness information
1.4 Presenting analytical effectiveness information: The key issues
1.5 Evidence for related effects and channels that seem similar (but not analytical effectiveness information)
2 Previous experimental evidence (ours and others)
2.1 Overview
2.2 Field evidence: “Effectiveness” info: Karlan and Wood (2017)
2.3 Cost-effectiveness, field-ish setting: Bergh/Reinstein (Essex piggyback)
2.4 Laboratory and hypothetical giving work
2.5 Effectiveness (and other) information, crossed with emotional information: Bergh (and Reinstein)
2.6 Indirectly relevant work
3 ICRC donation suggestion and cost-info trial: Project description, summary, timing, background
3.1 Overview
3.2 Population
3.3 Treatments (outlined)
3.4 Timing
3.5 Treatment specifics (i.e., ‘experimental conditions’)
3.6 Treatment selection and assignment (‘randomization’) procedure
4 Analysis focusing on ‘impact of impact information’
4.1 Code and data setup
Links to experiment source, description, data characterization (ICRC Donor’s Voice experiments: all input and description)
Discussion of input, clean, mutate
Description/depiction/codebook of (summary) data
4.2 ICRC ‘impact information’ treatments: Questions and tests
4.3 Preregistered tests
4.4 Exploratory analysis
4.5 Bayesian intervals, equivalence tests, probing the ‘tight null effect’
4.6 Q: Does including impact information affect the amount raised?
DONOR’S VOICE/CRS
5 Donor’s Voice experiments: description
5.1 Plans from prereg (OSF link)
5.2 Context
5.3 Setup (first trial)
5.4 Setup (second trial)
5.5 Treatment assignment/randomization (setup, input, description)
Input, clean, mutate
Description/depiction/codebook of (summary) data
6 Analysis: Questions and tests
6.1 Q: Does including impact information affect the propensity to donate?
Focal: Bayesian test of difference in proportions
6.2 Q: Does including impact information affect the amount raised?
6.3 Incidence of donating exactly $10
6.4 Secondary: Email open rates, click-throughs
7 (Preregistration for the first run, with brief note on extension to the second run)
Main question
Key dependent variables
Conditions/treatments
Key analyses
Secondary analyses
Sample size
Other
27 Nov 2018 update on plans for second trial (see above)
(Documentation of Excel files sent to us, and explanations from partner)
7.1 2018 trial data
7.2 2019 trial data
7.3 2020 update on the above
7.4 Table: Excel sheets sent to us
7.5 Inputting ‘raw data sent to us’, creating organised data files
7.6 Hand work in Excel
7.7 Post-implementation: Other notes from partner, responses to questions about implementation
META-ANALYSIS AND MULTI-EXPERIMENT WRITEUP
8 Meta-analysis: Impact of impact information
8.1 Preliminary data collection and scoping
8.2 Summary of results; forest plots
8.3 Simple meta-analysis
8.4 Bayesian meta-analysis, multiple levels, meta-regression
8.5 Publication bias and other robustness checks
8.6 Proposed systematic intake procedure
9 “Writing the paper”: The Impact of Impact Information on Donor Behavior: A Multi-Experiment Synthesis
9.1 Introduction