8 Meta-analysis: Impact of impact information

Discussion of ‘how to make this a meta-analysis’ here

Note that we are aware of issues surrounding:

  • “meta-analyzing work including one’s own research”

  • Challenges incorporating the ‘grey literature’ studies below (not yet published or peer reviewed, maybe not even indexed)

8.1 Preliminary data collection and scoping

(The code below brings together this data)

For each study, collect (if applicable) or generate:

  • study
  • wave
  • treatment
  • donation (amount)
  • d_donation (whether donated; this can be inferred from donation)
  • treat_eff_info (whether given ‘effectiveness information’)
  • (other demographics in common; other major treatments, any ‘noncompliance or other filtering’ type variable)
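As a sketch of the harmonized schema (all names here are our illustrative proposals, not final), each study's observations could be coerced into a common row format:

```r
# Sketch of the harmonized per-observation schema (names illustrative, not final).
# Each row is one participant-level observation from one study.
make_meta_row <- function(study, wave, treatment, donation, treat_eff_info) {
  data.frame(
    study          = study,          # study identifier, e.g. "karlan_wood"
    wave           = wave,           # wave/round within the study (NA if none)
    treatment      = treatment,      # treatment-arm label
    donation       = donation,       # donation amount (0 if none)
    d_donation     = as.integer(donation > 0),  # inferred from donation
    treat_eff_info = treat_eff_info  # 1 if given 'effectiveness information'
  )
}

example <- make_meta_row("karlan_wood", 1, "impact_info", 25, 1)
```

Keeping d_donation as a derived column (rather than collecting it separately) avoids inconsistencies between the indicator and the amount.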

8.1.1 Bring in and clean Karlan and Wood data

Stata import from /other_experimental_data/karlan_wood_dataverse_files/Effectiveness Dataverse Files/Effectiveness Dataverse Files/data/FFH_analysis.dta


karlan_ffh <- readstata13::read.dta13(
  here::here("other_experimental_data", "karlan_wood_dataverse_files",
             "Effectiveness Dataverse Files", "Effectiveness Dataverse Files",
             "data", "FFH_analysis.dta"),
  convert.factors = TRUE, generate.factors = TRUE,
  encoding = "UTF-8", missing.type = TRUE,
  convert.dates = FALSE, replace.strl = TRUE) %>%
  # ... Conversion of Stata dates (Stata counts days from 1960-01-01) ####
  mutate(across(contains("_date"), ~ as.Date("1960-01-01") + lubridate::days(as.integer(.x))))
  ## needlessly fancy; only 2 date vars
# rdr_cbk("full_codebook_karlan.Rmd") # this doesn't mesh with bookdown

See codebooks/full_codebook_karlan.html

8.1.2 Bring in/clean Bergh and Reinstein data (mostly done)

dp_sx_mt, created in code/ImportData.R

… reconstruct minimal creation steps from raw data here, if possible.

8.1.3 Bring in/clean DV statistics

base_stats, dv_ranks, created in analysis/dv_input_anal.Rmd

(Maybe … reconstruct minimal creation steps from raw data here, if possible.)

8.1.4 Bring in ICRC

created in icrc_input_clean.R

8.1.5 Other possible data

  • Kassirer and Karlan multi-charity choice field experiment (OSF link)

Sam Kassirer and Dean Karlan ran a large-scale trial … c. 1 million video-gamers could each make a choice among 3 charities, with c. $1 million distributed. The environmental charity (ERP) was preferred by substantially more people than the mental health charity (MHA); the education charity supporting diversity in tech (Code.org) was by far the least favored.

One of their important between-participant treatment conditions was “whether one of the charity descriptions included ImpactMatters impact information” (by the way, this info is/was rather overoptimistic IMO).

Their interventions seemed to have ‘null effects’, and (we need to check this) these effects seem likely to be bounded close to zero. A charity did not seem more likely to be chosen when impact information was provided for it. The ‘frames’ used to motivate giving (vote for the charity that “does the most good”, that makes you “feel the most good” to support, or no frame) also seem to have had little direct or interaction effect.

8.1.6 Bring in other statistics and meta-data, descriptors

From meta_analysis_preparation/impact_of_impact_analytical.xlsx

8.1.7 Create nested tibble

Codebooks for each data frame and each experiment (including additional features not analyzed here) are in the docs/codebooks folder. (Please link here.)
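A minimal sketch of the nesting step, assuming the cleaned frames have been stacked into one frame sharing a study column (toy data below is illustrative only):

```r
library(tidyr)

# Toy stand-in for the stacked, cleaned observations (illustrative only)
combined <- data.frame(
  study          = rep(c("karlan_wood", "bergh_reinstein"), each = 3),
  donation       = c(0, 10, 25, 0, 0, 5),
  treat_eff_info = c(0, 1, 1, 0, 1, 0)
)

# One row per study; each study's observations become a list-column of tibbles
nested <- nest(combined, data = -study)
```

Study-level metadata (e.g. from impact_of_impact_analytical.xlsx) can then be joined onto the one-row-per-study frame.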

8.2 Summary of results; forest plots
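One common approach uses the metafor package; a sketch with made-up per-study effect estimates (the real yi/vi would be computed from the harmonized data above):

```r
library(metafor)

# Illustrative per-study standardized effects (yi) and sampling variances (vi);
# these numbers are placeholders, not actual results
dat <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  yi    = c(0.02, -0.05, 0.10, 0.01),
  vi    = c(0.004, 0.010, 0.020, 0.006)
)

# Forest plot of the observed effects with 95% CIs
forest(dat$yi, vi = dat$vi, slab = dat$study,
       xlab = "Effect of impact information (SMD)")
```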

8.3 Simple meta-analysis
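A sketch of the simple pooled models using metafor::rma, again with placeholder effect sizes:

```r
library(metafor)

# Placeholder per-study effects; real yi/vi come from the nested data
dat <- data.frame(
  yi = c(0.02, -0.05, 0.10, 0.01),
  vi = c(0.004, 0.010, 0.020, 0.006)
)

# Random-effects model (rma() defaults to REML estimation of tau^2)
re_fit <- rma(yi = dat$yi, vi = dat$vi)

# Fixed-effect (common-effect) model for comparison
fe_fit <- rma(yi = dat$yi, vi = dat$vi, method = "FE")

summary(re_fit)
```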

8.4 Bayesian meta-analysis, multiple levels, meta-regression
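The full multilevel model would naturally be fit in brms/Stan; as a self-contained illustration of the idea, a normal-normal random-effects model with flat priors can be grid-approximated in base R (placeholder effects again):

```r
# Grid-approximate posterior for a normal-normal random-effects model:
# yi ~ N(mu, vi + tau^2), flat priors on mu and tau >= 0 (illustrative only;
# the real analysis would use brms/Stan, more levels, and informative priors)
yi <- c(0.02, -0.05, 0.10, 0.01)
vi <- c(0.004, 0.010, 0.020, 0.006)

mu_grid  <- seq(-0.3, 0.3, length.out = 201)
tau_grid <- seq(0, 0.3, length.out = 101)

log_post <- outer(mu_grid, tau_grid, Vectorize(function(mu, tau) {
  sum(dnorm(yi, mean = mu, sd = sqrt(vi + tau^2), log = TRUE))
}))
post <- exp(log_post - max(log_post))
post <- post / sum(post)

# Posterior mean of the pooled effect mu, marginalizing over tau
mu_post_mean <- sum(mu_grid * rowSums(post))
```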

8.5 Publication bias and other robustness checks
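A base-R sketch of Egger's regression test for funnel-plot asymmetry (metafor::regtest is the packaged equivalent); effects are placeholders:

```r
# Egger's test: regress the standardized effect (yi/sei) on precision (1/sei);
# an intercept far from zero suggests small-study/publication bias
yi  <- c(0.02, -0.05, 0.10, 0.01)
vi  <- c(0.004, 0.010, 0.020, 0.006)
sei <- sqrt(vi)

egger <- lm(I(yi / sei) ~ I(1 / sei))
egger_intercept_p <- summary(egger)$coefficients[1, 4]  # p-value on intercept
```

With only a handful of studies this test has very low power, so it should be read as descriptive rather than confirmatory.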

8.6 Proposed systematic intake procedure

Cochrane/PRISMA, preregistration