6  EAMTT & giving trials: Presentation (EG Summit)

David will present and then open up discussion on:

  1. The EA Market Testing Team, its resources and future plans

  2. Specific trials/field experiments1




This page is currently publicly hosted on the WWW, without a password. Feel free to share it in small circles. I put a lot into this presentation and linked many resources; there is lots of fine print. Some links are password protected: ask me for access (and obviously, please do not share those widely).









6.1 Ideas, goals, evolution of EAMTT & effective giving research

Because of funding (limited funding for this, a large grant for The Unjournal), I need to reduce my direct involvement in EAMTT, in promoting EA testing/measurement, and in experiments in effective giving/social fundraising.




“Why don’t people seem to prioritize effectiveness in their giving and altruistic behavior, and what could change this?” … We have little evidence and little organized, ambitious research on this super-important question.

EA orgs, effective charities, and key (academic) researchers are unusually well-aligned and motivated to work together. The EAMTT will bring these groups together to share knowledge and tools, run trials and experiments, analyze data, and share our evidence and insights. We will target robust results yielding practical insights on approaches that can ‘move the needle’.



Phase 1: Academic research: ‘barriers to effective giving’

Synthesis: Limited reliable/practical academic work on effective giving \(\rightarrow\) “Increasing effective giving (& action): The puzzle, what we (need to) know”2


A key focus: Impact of impact information on giving: field experiments and synthesis






Phase 2: EAMTT – Bringing together and engaging Academics, EA orgs, and marketers

The idea, roughly:

Academics are eager to run and analyze field experiments; EA orgs are running A/B trials but don’t have the bandwidth to carefully plan, analyze, and communicate them. EAMTT could organize this effort, bringing in marketing/digital experts to generate shared resources and knowledge and to communicate the results.



See What is the “EA Market Testing Team”?

  • “Promote EA, make giving effectively/significantly … a cultural norm”


Working both with our partner organizations and independently through surveys and trials. …

  • to improve the design, messaging, and outreach

  • Measuring and testing ‘what works and when’

  • Communication…




… Building and organizing resources and a knowledge base on …





Phase 2.5: Partial pivot towards ‘encouraging EA mindset and involvement’

(Funding and steer \(\rightarrow\))

  • 80K (large-sample messaging trials on Prolific & M-Turk; repeated, panel, ‘real clicks’)

  • EA Future Leaders; promoting remote fellowships3

  • Work with CEA/UGAP on groups4


See What we’ve (helped) accomplish, resources below.





Phase 3 (~WIP): Consulting + public goods, rigorous analysis (‘pivot back’)5

  1. The funding landscape/approach has changed6

Initiatives developed alongside EAMTT, with our encouragement and tools/spaces7





\(\rightarrow\) EAMTT pivots to a heightened focus on:

  1. Advising, proposing, helping design & coordinate experiments, trials, and initiatives.

  2. Transparent presentation of the results, rigorous statistical analysis

  3. Synthesizing, sharing, and communicating this knowledge and skills base

    • and encouraging open communication of tools and insights



  • Facilitating mutually-beneficial partnerships and collaborations between academics/researchers and EA orgs/marketers in this space

  • Maintaining and building the knowledge base, making it more accessible and integrated with other resources

  • Running and furthering selected independent trials





6.2 What we’ve (helped) accomplish, resources



EAMTT org, communications and information-sharing

See: Slack, Airtable, Gitbooks, Data presentations

  • Shared marketing insights, e.g., GWWC & 80k

  • Sparked/Facilitated…





GWWC, 1FTW, and TLYCS (Trials and insights)

GWWC: Pledge Page placement, Giving Guides, Message Testing, YouTube Remarketing



Pledge page trial


Consider the Posteriors (from Google’s Bayesian modeling)

  • E.g., ‘Separate Bullets’ has a 25% chance of being 4-20% better (click rate) than the original, a 25% chance of being 20-36% better, a 22.5% chance of being 36-76% better, etc. (see the sketch below)

  • But this is clicking, not necessarily follow-through8
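
To make those posterior statements concrete, here is a minimal sketch of how such uplift bands can be computed from posterior draws, assuming a simple Beta-Binomial model and made-up click counts (placeholders, not GWWC’s actual data or Google’s model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical counts -- placeholders, NOT the actual GWWC trial numbers.
clicks_ctrl, views_ctrl = 120, 4000   # original pledge page
clicks_trt, views_trt = 150, 4000     # 'Separate Bullets' variant

# Posterior draws for each click rate under a uniform Beta(1, 1) prior.
p_ctrl = rng.beta(1 + clicks_ctrl, 1 + views_ctrl - clicks_ctrl, 100_000)
p_trt = rng.beta(1 + clicks_trt, 1 + views_trt - clicks_trt, 100_000)

# Posterior distribution of the variant's relative uplift over the original.
uplift = p_trt / p_ctrl - 1

# Probability mass in each uplift band, mirroring the
# "X% chance of being Y-Z% better" statements above.
for lo, hi in [(0.04, 0.20), (0.20, 0.36), (0.36, 0.76)]:
    prob = np.mean((uplift >= lo) & (uplift < hi))
    print(f"P({lo:.0%}-{hi:.0%} better) = {prob:.1%}")
```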





Giving Guides

See forum post and data presentation here

  • Note the ‘divergent delivery’ and ‘cost-per-click vs. click-per-view’ issues







6.3 80k & ‘making EAs’ (Trials & insights)

Prolific (+Prime) Study with some ‘real clicking’ and followup

Full report and notebook9

(bit.ly/80ktest)

For the ‘interest in 80k’ average Likert metric, initial runs (top-5 conditions + control)


“Probability of superiority across population”


Probability of a higher (average Likert) rating for any individual (vs counterfactual)
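
As a simplified, purely illustrative analogue of this ‘probability of superiority’ metric (the chance a randomly chosen treated respondent rates higher than a randomly chosen control respondent), the sketch below computes it nonparametrically from simulated Likert responses; the report itself uses a Bayesian model, and all numbers here are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1-7 Likert responses -- placeholders, not the 80k trial data.
control = rng.integers(1, 8, size=300)
treated = np.clip(rng.integers(1, 8, size=300) + rng.binomial(1, 0.15, 300), 1, 7)

# Empirical probability of superiority: the chance a randomly drawn treated
# respondent rates higher than a randomly drawn control respondent,
# counting ties as one half.
diff = treated[:, None] - control[None, :]
prob_sup = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
print(f"P(treated rating > control rating) = {prob_sup:.2f}")
```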



Note: There are many more steps before we get to “Value of Information” here


In the follow-up, the differences do not really persist








6.4 Charity Elections: ongoing trial and assessment

Intermediate report10



6.5 Resources we’ve built (some are still ongoing)

EA Market Testing - Public Gitbook - Use ‘lens’ to ask questions of this knowledge base (we could also integrate the resources below, and improve this)

Private version (Discusses ongoing/private trials and issues)

EA Market testing data analysis

Impact of impact information on giving: field experiments and synthesis

innovationsinfundraising.org

Tools for promoting ‘effective birthday fundraising on Facebook’

Tools, platform, design for ‘seeding and messaging on (effective) fundraisers on JustGiving’








6.6 Challenges

  • the ‘public goods vs. consulting’ problem
  • skepticism about generalizability across orgs/situations
  • skepticism about EA outreach and popular effective giving

\(\rightarrow\) Lack of compensated time/staff (researchers, communicators, developers) for this very ambitious proposed project11


Coalitions and cooperation: Academia vs. Marketing vs. EA orgs

  • must ‘publish well’ … inform their discipline/science, \(\neq\) ‘learn what’s best for EA marketing’;12
  • Frequentist statistical inference/NHST vs. decision-relevant Bayesian updating; the ‘divergent delivery’ issue


  • reputation for overselling ‘big wins’ rather than cautious evidence

  • incentives for preserving ‘secret sauce’, building brand


  • Bandwidth (to cooperate on trials)
  • Short timelines and urgencies, shifting priorities
  • Statistics and data science limitations



Methodology: Thorny problems in social science and quantitative marketing

  • in social science (stated responses vs. meaningful attitudes/behaviors, abstraction vs. realistic social context),
  • statistics and experimetrics (attrition, rare outcomes),
  • marketing attribution & ‘ultimate outcomes’ (what are the pivotal points in a ‘long funnel’? lift vs. shift),
  • reinforcement learning (‘learning best’ in a large space).
  • … The payoffs are potentially big, but we need to be convinced the time investments are worth it.



  • It’s a “big project”; need intermediate feedback and encouragement
  • Limitations to sharing (at some orgs)
  • Implementing trials on platforms can be hard
  • Making progress on projects while justifying work, securing funding
  • Specific distractions
  • Need team of interested peers for dialogue, encouragement, focus







6.7 Ongoing (and planned) projects

JustGiving ‘seed donation’ (+ message?) trials

The experiment

full presentation here, password protected

Context: JustGiving = a UK-based social charitable fundraising platform

  • E.g., pages like this

Go to link; I can’t share details on a public page

  • Funding (‘channeling funds from aligned donors’)
    • Possibly a UK-based earner, to enable Gift Aid
  • Work with ML/statistics for a matched design to increase statistical power (see the sketch below)
  • Work with Bayesian statistics to make decision-relevant inferences
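
A minimal sketch of what such a matched design could look like, assuming pages can be ranked by an ML prediction of baseline donations (all numbers and setup details here are hypothetical, not the actual JustGiving trial):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ML predictions of baseline donations for 100 fundraising
# pages -- placeholders, not the actual JustGiving data.
predicted = np.sort(rng.lognormal(mean=4.0, sigma=1.0, size=100))

# Matched-pair design: sort pages by the prediction, pair neighbours, and
# randomize which page in each pair receives the seed donation. Analyzing
# within-pair outcome differences removes most between-page variance,
# which is what raises statistical power.
pair_idx = np.arange(100).reshape(-1, 2)          # 50 pairs of similar pages
seed_first = rng.integers(0, 2, size=50).astype(bool)

treated = np.where(seed_first, pair_idx[:, 0], pair_idx[:, 1])
control = np.where(seed_first, pair_idx[:, 1], pair_idx[:, 0])
# After the trial: model the per-pair differences in donations (e.g., with
# a Bayesian model) to get decision-relevant posterior statements.
```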








6.8 Whither EAMTT+?

Consulting + public goods model?

Public goods and coordination: generating, proving, communicating value

Driven by EA ‘fundraising’ orgs’ high-value questions (feedback and synthesis)


Light-touch resource provision, feedback, coordination?

  • Orgs are likely to do some trials independently. Curate guides/workshops on ‘how your inferences can go wrong’ (key examples of ‘failure modes’)


  • General trial design and data collection template (useful?)

  • Geo experiments template

  • “Ways to infer the counterfactual donation-boost from Google search ads” (see the sketch below)
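
One standard way to get at that counterfactual is a geo experiment: run ads in some regions and not in matched others, then compare changes. A minimal difference-in-differences sketch, with entirely made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical weekly donation totals for two matched regions -- entirely
# made-up numbers, not real campaign data.
weeks = 12
control_geo = 100 + rng.normal(0, 5, weeks)   # search ads never shown
treated_geo = 100 + rng.normal(0, 5, weeks)
treated_geo[6:] += 12                         # ads switched on at week 6

# Difference-in-differences: the treated region's counterfactual is its own
# pre-period level plus whatever change the control region experienced.
pre, post = slice(0, 6), slice(6, weeks)
lift = (treated_geo[post].mean() - treated_geo[pre].mean()) \
     - (control_geo[post].mean() - control_geo[pre].mean())
print(f"Estimated ad-driven donation lift per week: {lift:.1f}")
```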


Framework and matchmaking for academic/marketer/EA org collaboration?

  • A standard understanding, templates, and approach to enable win-win cooperation



Management: New ‘founder’ and champions? Integrate under another umbrella?




Updates from this summit

Effective Giving ecosystem is coordinating and working to share data

  • Leadership from EA:GHD

  • Coordination from GWWC13

  • Yesterday: quick collaboration on Google paid search ads ‘benchmarking’ (NL, DE, ES, SE, etc.)


\(\rightarrow\) Could focus on:

  • High-value coordinated trials (see earlier EAMT high value questions)

  • Environments and tools to enable inference

    • data collection and sharing
    • paths with full-funnel tracking/attribution within defined units
    • (or) pivotal choice points










  1. on the effect of impact information on giving, and social influences on giving in social fundraising↩︎

  2. … laid out an agenda, taxonomy, and platform for building the evidence base (see also innovations in fundraising, and shared Airtable)↩︎

  3. I pushed ‘video testimonial’ ads, which were produced and were substantially more successful ($0.79 per click vs. $1.14-$3.75 per click). We tracked results in Google Analytics (150 ‘engaged sessions’) and outcomes within CEA intake data (still premature). The FTX collapse ended this.↩︎

  4. University (and local) Groups and UGAP: coalescing and documenting organizers, active groups, data collection and potential experimentation; see work and progress here (joint with volunteer Kynan Behan). An ambitious ready-to-fund agenda. Supported by qualitative (interview and survey) work.↩︎

  5. I’ll come back to the latter below↩︎

  6. Funders now focus on consulting models; there is a lack of funding for promoting giving; they prefer implementing professional marketing directly, with less interest in longer-duration testing.↩︎

  7. E.g., 1. Kyle Smith \(\rightarrow\) Effective Institutions Project (Effective Giving on foundation data). 2. Michael Zoroob (Meta research liaison) \(\rightarrow\) $15k+ in ad funding, process and method insights relevant to EA Psych Lab, Good Impressions↩︎

  8. The attribution/long-funnel issue.↩︎

  9. eamt_path_presentation-9↩︎

  10. Password protected, please email me to request access.↩︎

  11. Several volunteers have done useful work as well, but it has been hard to retain them and …↩︎

  12. The Unjournal might be able to play a role here, particularly if we take on work specifically targeting ‘other-regarding behavior, effectiveness in altruistic decisionmaking’ etc.↩︎

  13. through Lucas and others↩︎