Hosted HERE
Also see the ‘curated links’ HERE
Press “O” To See All Slides/Slide Map!
Columns denote ‘sections’; descend a column within each section
You may have to shrink the screen to be able to see the whole slide
I intend to post a more readable ‘bookdown’ version of this and add a link here
EA principles, some key hubs
Grants, fellowships, jobs
GPI/Forethought fellowship, [pre-doctoral](https://globalprioritiesinstitute.org/wp-content/uploads/GPI-Predoctoral-Research-Fellow-Economics.pdf)
EA Funds grants, e.g., for “Promising research into animal advocacy or animal well-being”
Agendas and research questions
We can now do better than this vague hope!
The founders of Effective Altruism took ideas from Philosophy, Economics, and other parts of academia to build a rigorous approach to ‘doing the most good in the world’, and to exploring and measuring this.
Miraculously, EA also has a passionate and influential group of supporters, and a substantial pool of funds for research, interventions, and advocacy!
Big opportunity for academic researchers to have impact, inspiration, funding
Partnering within academia, EA research audience
Grants
Helping students
Leave academia for greener (?) pastures at an EA-aligned org
EA and global priorities research offers a huge opportunity for academic researchers to have a positive impact (on the allocation of funds, and on the market of ideas).
There are opportunities for funding to support your research within academia, to promote the impact of your research (and gain valuable feedback), to help students find meaningful careers/research
…and/or you may want to leave academia to work directly for an EA-aligned organization (like I did).
Doing the ‘most good’ given limited resources (some relationship to utilitarianism)…
but how do we define ‘the most good’?
Results of an impromptu survey at the LIS conference: what they think EAs prioritize
library(tidyverse)  # for %>%, tibble, and the dplyr verbs used below
# Table-formatting helpers with defaults pre-set (`hijack` is sketched below)
.kable_styling <- hijack(kableExtra::kable_styling, full_width=FALSE)
.kable <- hijack(knitr::kable, format.args = list(big.mark = ",", scientific = FALSE))
lis_ea_priority_guess <- factor(c(
"Cause prioritization",
"Global Poverty",
"Cause prioritization",
"Climate Change",
"Global Poverty",
"Climate Change",
"Global Poverty",
"Global Poverty",
"Cause prioritization",
"Cause prioritization",
"Climate Change",
"Cause prioritization",
"Climate Change",
"Artificial Intelligence Risk",
"Climate Change",
"Global Poverty",
"Cause prioritization",
"Cause prioritization",
"Climate Change",
"Global Poverty",
"Cause prioritization",
"Cause prioritization",
"Cause prioritization",
"Global Poverty",
"Global Poverty",
"Global Poverty",
"Cause prioritization",
"Climate Change",
"Climate Change",
"Global Poverty",
"Global Poverty",
"Climate Change",
"Global Poverty",
"Global Poverty",
"Cause prioritization",
"Climate Change",
"Global Poverty",
"Global Poverty"
)
)
lis_ea_priority_guess <- tibble(
lis_ea_priority_guess = lis_ea_priority_guess
)
lis_ea_priority_guess %>% tabg(lis_ea_priority_guess) %>%
.kable(caption = "What LIS respondents *think* EAs prioritize most:" ) %>%
.kable_styling()
lis_ea_priority_guess | n | percent |
---|---|---|
Global Poverty | 15 | 0.39 |
Cause prioritization | 12 | 0.32 |
Climate Change | 10 | 0.26 |
Artificial Intelligence Risk | 1 | 0.03 |
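The table chunks here rely on two helpers that are not defined in this excerpt: `hijack()`, which returns a copy of a function with some default arguments overridden, and `tabg()`, which appears to tabulate counts and shares. A minimal sketch of both, reconstructed from how they are used here rather than taken from the source code:

```r
library(tidyverse)

# Return a copy of FUN with the named default arguments replaced
hijack <- function(FUN, ...) {
  .FUN <- FUN
  args <- list(...)
  for (nm in names(args)) formals(.FUN)[[nm]] <- args[[nm]]
  .FUN
}

# Tabulate one variable with counts and shares, sorted by frequency;
# .drop = FALSE keeps zero-count factor levels, as in the output tables
tabg <- function(df, var) {
  df %>%
    count({{ var }}, sort = TRUE, .drop = FALSE) %>%
    mutate(percent = round(n / sum(n), 2))
}
```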
… ‘Highly engaged’ (self-rated)
How much effective altruism/global priorities research funding is there?
(
op_research_grants_tab <-
open_phil_grants %>%
filter(possible_research==TRUE) %>%
group_by(year) %>%
dplyr::summarise(total = format(sum(amount, na.rm = TRUE), big.mark=",", scientific=FALSE), grants = n()) %>%
arrange(-year) %>%
mutate(year=as.character(year)) %>%
.kable(caption = "Open Phil (likely) research funding by year") %>%
.kable_styling()
)
year | total | grants |
---|---|---|
2021 | 19,464,223 | 14 |
2020 | 77,681,029 | 76 |
2019 | 122,463,958 | 63 |
2018 | 38,589,084 | 52 |
2017 | 57,666,403 | 45 |
2016 | 22,386,936 | 27 |
2015 | 2,591,000 | 9 |
2014 | 1,437,720 | 3 |
2013 | 445,000 | 2 |
(
op_res_grants_tab_yr_area <-
open_phil_grants %>%
filter(possible_research==TRUE) %>%
dplyr::group_by(year, focus_area) %>% # drop_na(!!yvar, !!treatvar) %>%
summarise(total = sum(amount_usd_k, na.rm = TRUE)) %>%
spread(year, total, fill=0) %>%
arrange(-`2020`) %>%
.kable(caption = "OpenPhil (likely research) grants by year and area, in $1000 USD") %>%
.kable_styling()
)
focus_area | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 |
---|---|---|---|---|---|---|---|---|---|
Scient. Res. | 0 | 0 | 0 | 7,085 | 27,500 | 19,720 | 43,770 | 43,751 | 672 |
Biosec. | 0 | 0 | 300 | 1,943 | 7,747 | 8,704 | 1,625 | 12,792 | 1,000 |
AI risk | 0 | 0 | 1,186 | 6,333 | 10,798 | 3,128 | 61,288 | 10,891 | 15,450 |
Glob. Catastr. | 0 | 0 | 0 | 3,070 | 3,758 | 0 | 1,703 | 4,586 | 1,500 |
Farm Animal | 0 | 0 | 0 | 820 | 2,022 | 4,580 | 3,296 | 3,261 | 841 |
Other | 0 | 0 | 10 | 500 | 1,550 | 47 | 1,050 | 806 | 0 |
Glob. Health/Dev. | 0 | 0 | 0 | 0 | 2,864 | 210 | 3,176 | 678 | 0 |
Macro-econ | 0 | 0 | 0 | 700 | 0 | 700 | 1,150 | 600 | 0 |
Immig. Pol. | 0 | 1,185 | 390 | 0 | 0 | 400 | 0 | 200 | 0 |
Crime/Justice | 445 | 0 | 180 | 1,636 | 1,427 | 1,101 | 5,066 | 115 | 0 |
Land Ref. | 0 | 0 | 275 | 300 | 0 | 0 | 340 | 0 | 0 |
US pol. | 0 | 253 | 250 | 0 | 0 | 0 | 0 | 0 | 0 |
(
op_res_grants_line <-
open_phil_grants %>%
group_by(year, focus_area) %>%
mutate(total = sum(amount_usd_k, na.rm = TRUE)) %>%
ggplot() +
aes(x = year, y = amount_usd_k, colour = focus_area) +
geom_jitter(width = 0.5, height = 0.2, size=0.8) +
scale_colour_discrete(labels = function(x) str_wrap(x, width = 15)) +
geom_line(aes(x=year, y=total)) +
ylab("Grant amounts in $1k")
)
(
open_phil_grants %>%
filter(possible_research==TRUE) %>%
filter(year==2020) %>%
group_by(Focus.Area) %>%
summarise(total = format(sum(amount, na.rm = TRUE), big.mark=",", scientific=FALSE), grants = n()) %>%
dplyr::arrange(-grants) %>%
.kable(caption = "Open Phil (likely) research funding, 2020") %>%
.kable_styling()
)
Focus.Area | total | grants |
---|---|---|
Scientific Research | 43,750,718 | 33 |
Farm Animal Welfare | 3,261,351 | 13 |
Potential Risks from Advanced Artificial Intelligence | 10,891,345 | 8 |
Biosecurity and Pandemic Preparedness | 12,792,330 | 7 |
Global Health & Development | 678,358 | 5 |
Criminal Justice Reform | 115,000 | 3 |
Global Catastrophic Risks | 4,586,224 | 2 |
Macroeconomic Stabilization Policy | 600,000 | 2 |
Other areas | 805,703 | 2 |
Immigration Policy | 200,000 | 1 |
(
op_res_grants_tab_orgs_area <-
open_phil_grants %>%
filter(possible_research==TRUE) %>%
dplyr::group_by(Organization.Name) %>% # drop_na(!!yvar, !!treatvar) %>%
summarise(total = sum(amount_usd_k, na.rm = TRUE), `number of grants` = n()) %>%
arrange(-total) %>%
filter(total>5000) %>%
.kable(caption = "OpenPhil (likely research) grants by corganization and area, in $1000 USD") %>%
.kable_styling()
)
Organization.Name | total | number of grants |
---|---|---|
Georgetown University | 55,250 | 2 |
UC Berkeley | 29,552 | 18 |
Nuclear Threat Initiative | 20,439 | 6 |
Sherlock Biosciences | 17,500 | 1 |
Machine Intelligence Research Institute | 14,756 | 5 |
University of Washington (Institute for Protein Design) | 11,368 | 1 |
Stanford University | 8,294 | 1 |
Open Phil AI Fellowship | 6,760 | 4 |
Arizona State University | 6,421 | 1 |
University of Southern California | 6,238 | 3 |
Rutgers University | 5,982 | 2 |
MIT Synthetic Neurobiology Group | 5,970 | 2 |
Stanford University | 5,752 | 10 |
Telethon Kids Institute | 5,300 | 1 |
Foundation for Food and Agriculture Research | 5,292 | 6 |
Harvard University | 5,068 | 3 |
Maybe 500 million USD per year in EA/adjacent donations (+ about 250 million from OpenPhil)
GiveWell moving ~$80M per year
Founders Pledge
Longview
Effective Giving
EA Funds
“Gates Foundation seems to do ~$100M-500M/yr of grants in global economic development that seem to have cost-effectiveness on par with GiveWell work”
It’s not all research funding, but some of it is, and it is all interested in prioritization/effectiveness research.
lis_engage_priority <- factor(c(
"Pursuing a high-impact career",
"Effective charitable donations/Earning to give",
"Effective charitable donations/Earning to give",
"Effective charitable donations/Earning to give",
"Effective charitable donations/Earning to give",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Effective charitable donations/Earning to give",
"Effective charitable donations/Earning to give",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Effective charitable donations/Earning to give",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Effective charitable donations/Earning to give",
"Pursuing a high-impact career",
"Pursuing a high-impact career",
"Pursuing a high-impact career"
),
levels = c("Pursuing a high-impact career", "Effective charitable donations/Earning to give", "Avoiding environmental damage through personal actions", "Political action")
)
lis_engage_priority <- tibble(
lis_engage_priority = lis_engage_priority
)
lis_engage_priority %>% tabg(lis_engage_priority) %>%
.kable(caption = "What LIS respondents *think* most 'effective altruists' engage... is most strongly advocated?:" ) %>%
.kable_styling()
lis_engage_priority | n | percent |
---|---|---|
Pursuing a high-impact career | 15 | 0.65 |
Effective charitable donations/Earning to give | 8 | 0.35 |
Avoiding environmental damage through personal actions | 0 | 0.00 |
Political action | 0 | 0.00 |
EAs are moving towards pursuing impact through their careers.
80,000 Hours statements
Since 2015 “80,000 Hours thinks that only a small proportion of people should earn to give long term” (MacAskill)
There is also some support for political influence; at least they say “the hour you spend voting is likely to be the most impactful one in your entire year on average… …influence over how hundreds of thousands or millions of dollars are spent.”
Why am I telling you my story?
I’m telling you my story because it might help you understand the strengths and limitations of academia and of working at an EA org, and whether these align with your interests.
From my web CV …
Berkeley:
Limited audience…
‘How does this inform government policy?’, ‘How does it inform/relate to standard Economics (tractable mathematical) models of optimization?’, ‘Will this publish well’?
Should an ‘efficient altruist’ purchase ‘fair trade’ products, bundling consumer choices with additional revenue to poor farmers/workers?
Considering ideas with a pre-EA policy audience.
‘Does one donation come at the expense of another’?
Things I care about: but did they line up with concepts in the discipline? I really cared about ideas and impact.
Essex, UK:
Experiments/trials and observational work on charitable and gift-giving: social influences, types of income/uncertainty
Applied microeconomic theory
‘Impact’ (ESRC grant, REF focus)
Building teaching/research/outreach resources, such as
innovationsinfundraising.org and ‘barriers to effective giving’
“Researching and writing for Economics students”
Positives: A fairly supportive environment, research freedom, many great colleagues, moderate teaching, targets ‘deep and rigorous theoretical work’, some of the smartest people
Limitations: academic politics and poor upper-management, countervailing rewards system, constant discussion of points/games (value drift), many/most students are disengaged
Standard publications as the only way to prove value; limits collaborative and nonstandard work
UK academia rewards either ‘REF-points publications’, box-ticking accreditations, or building favor with executive administration
In 2021 I left my secure academic post…
To pursue greater impact as a researcher at Rethink Priorities, a think tank “dedicated to figuring out the best ways to make the world a better place.” RP is closely tied to the Effective Altruism movement. My research into effective charitable giving is made possible by a grant from an individual donor under the advising of Longview Philanthropy.
To build tools and programs promoting open, collaborative, and robust research, as well as teaching, learning, and research training outside of traditional university degree schemes.
(Should you do it too? Back to this at the end)
Categorization and links to Economics and Psychology ‘theory’
Meta-analysis and synthesis
Field experiments and trials in large-scale contexts
Survey methods: representativeness, survey design
Identifying key questions for ‘tracking a movement and its impact’, e.g.,
Analysis: Visualisation, descriptive, predictive, and
…
Receptiveness to parts of EA message
Support for policies (e.g., animal welfare)
Future work: moral weights, measurement of satisfaction, ‘near-term’ (global health) evaluation, ‘shallow reviews’
Support and guidance to other RP projects (e.g., modeling the meat industry; designing behavioural trials)
Connection with academia; publishing, recruiting, advising students
Near term, Long-term future, Animal welfare, Prioritization research and ethics
Land animals in factory farms, plot over time
E.g.,
How to create better systems for creating and disseminating knowledge
Whether and how to ‘discount’ future individuals (income or happiness)?
How to make choices under moral uncertainty
How to value ‘more happy people’ versus ‘happier people’ (population ethics)
Defining a moral, value and choice framework, working out thorny moral decision-optimization issues
Empirical measurement of value and ‘what works to achieve it’
Empirical evidence on persuasion: ‘how to get people to act pro-socially and effectively’
Defining a moral framework, considering ‘what has value’, and ‘how to learn and choose’?
… how to value things and be consistent, how to use uncertainty and information in making altruistic choices
E.g.,
moral weights
population ethics
moral uncertainty
From the GPI agenda (a small discounting illustration follows these questions):
Under what conditions would a social planner or philanthropist prioritise policies that primarily increase social welfare in the far future rather than in the near term? For instance, under what condition would such agents prioritise saving for future generations (Ramsey 1928) or reducing the risk of human extinction (Baranzini and Bourguignon 1995)?
Should one have the same levels of epistemic modesty about unusual moral views as one should about unusual empirical views?
To what extent should we be risk averse in our approach to doing good, and what are the implications of reasonable risk aversion for global prioritisation? (Quiggin 1982…)
Social welfare criteria that are used to compare states that differ in population size typically specify a critical welfare level at which lives that are added to the population have zero contributive value to social welfare (Blackorby et al. 1995…). What kinds of lives have zero contributive value in this sense (Cockburn et al. 2014…)?
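To make the Ramsey-style question above concrete, here is a toy R illustration (my own, not GPI’s model) of how sensitive the present value of far-future welfare is to the chosen discount rate:

```r
# Present value today of one unit of welfare accruing 100 years from now,
# under different pure time-preference rates (values are illustrative only)
discount_rates <- c(0, 0.001, 0.01, 0.03)
horizon <- 100
tibble::tibble(
  rate = discount_rates,
  pv_of_one_unit = round((1 + discount_rates)^(-horizon), 3)
)
# Even a 1% rate shrinks far-future welfare to roughly 0.37 of its face value,
# and 3% shrinks it to about 0.05, hence the stakes of the discounting question
```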
‘What has value and moral worth?’ (e.g., sentience research)
How to measure value? (e.g., pain/pleasure, DALYs; see the sketch below)
How to achieve value?
Direct interventions and policies, direct/indirect, short and LT impacts
Very long term impacts, inference with deep uncertainties
Should we loosen migration restrictions to increase global welfare? What is a politically feasible level of migration? (from Rhys-Bernard syllabus)
Estimating, in terms of SWB, the impact of potentially highly-effective interventions, including: psychotherapy for common mental disorders; cataract surgery for blindness; deworming tablets to improve lifelong earnings (from Happier lives institute research priorities)
Measurement of ‘pain and pleasure’, informing moral worth and how to measure the success of interventions
Interventions: which ones work, and how effective are they?
Very hard ones: predicting long-term impacts and areas of deep uncertainty
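As one concrete example of the measurement point above, DALYs combine years of life lost and years lived with disability; a minimal sketch using the standard YLL + YLD formulation (all numbers are made up):

```r
# DALY = YLL + YLD (toy numbers, for illustration only)
deaths            <- 10    # deaths from the condition
life_expectancy   <- 30    # remaining life expectancy at age of death (years)
cases             <- 200   # non-fatal cases
disability_weight <- 0.2   # severity weight on a 0-1 scale
duration          <- 5     # average years lived with the disability

yll  <- deaths * life_expectancy              # years of life lost = 300
yld  <- cases * disability_weight * duration  # years lived with disability = 200
daly <- yll + yld                             # 500 DALYs in this toy example
daly
```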
Empirical (behavioural) … How to get people and institutions to care about others (altruism) and about being effective in doing so?
Barriers to considering effectiveness and acting effectively (my focus – open project HERE)
Applied work : message testing, information and choice-architecture
“Technical and Philosophical Questions That Might Affect Our Grantmaking”
See also: this [list of lists](https://forum.effectivealtruism.org/posts/MsNpJBzv5YhdfNHc9/a-central-directory-for-open-research-questions)
Rethink Priorities 2020 Impact and 2021 Strategy (includes broad agenda)
Happier lives institute research priorities
Academia:
Publications, grants, citations, students placed in jobs, awards
Need to be ‘first to publish’ on a new topic; supporting evidence less valued
RP:
model estimates impact by considering the probability of our influence targets updating in the correct direction … amount of money/resources changed, how much better (or worse) that change is, the counterfactual years of credit due to the work, and the costs of the project.
‘How to measure research impact?’
Money-metric impact of a random-ish sample of projects:
‘Multiplying the uncertain estimates’ of each of these components (Bayesian; see the sketch below):
Divide by project cost
\(\rightarrow\) “Impact per dollar spent”
\(\rightarrow\) Extrapolate to total value of RP
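A minimal R sketch of this style of calculation, with made-up distributions standing in for the actual estimates (every parameter below is purely illustrative, not an RP figure):

```r
library(tidyverse)
set.seed(42)
n_draws <- 10000  # Monte Carlo draws over the uncertain inputs

sims <- tibble(
  p_update     = rbeta(n_draws, 2, 8),            # prob. the influence target updates in the right direction
  money_moved  = rlnorm(n_draws, log(5e5), 0.5),  # $ of resources redirected if they do
  uplift       = rbeta(n_draws, 3, 7),            # how much better the new use of those funds is
  years_credit = runif(n_draws, 0.5, 3)           # counterfactual years of credit for the work
)

project_cost <- 50000  # assumed cost of the project, in $

sims <- sims %>%
  mutate(impact_usd        = p_update * money_moved * uplift * years_credit,
         impact_per_dollar = impact_usd / project_cost)

# Distribution of 'impact per dollar spent'; extrapolating across projects
# would then give a rough total value for the organisation
quantile(sims$impact_per_dollar, c(0.1, 0.5, 0.9))
```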
The EA research landscape - a new ‘funder’ and consumer of research
We care about research value, accuracy, and impact, not as much about ‘innovation’ or theoretical rigor
We are not so tied to traditional institutions (publishers, traditional grantmakers)
\(\rightarrow\) ‘Evaluated project repo’ rather than frozen publications?
Rethink Priorities is hiring – “Staff Researcher (Global Health and Development)”
several positions
and very soon “a longtermist and a meta/movement building person”
GPI/Forethought fellowship, [pre-doctoral](https://globalprioritiesinstitute.org/wp-content/uploads/GPI-Predoctoral-Research-Fellow-Economics.pdf)
EA Funds grants, e.g., for “Promising research into animal advocacy or animal well-being”
Open Phil: ramping up, very interested in funding research, but “In general, we expect to identify most giving opportunities via proactive searching and networking”
Contact EA organisations; present your work
Help build open-content
Encourage your students
YES if you… (because you like the following)
Care about impact and social good; research outcomes, implications, applications
Rigorous analytical framework (Philosophy, Economics, Statistics/data) anchored and connected to the practical
“Interdisciplinary” in the right way
Positive, supportive environment: people motivated by outcomes, zero competitiveness/politics AFAIK
High-achieving, super-literate and ‘switched on’ colleagues
Note: RP and GPI and others differ dramatically in focus and approach; I’m mainly talking about RP.
NO if you want to…
Focus on deeply theoretical ‘pure’ research without a direct connection to impact, or pursue a completely independent research agenda
Earn a lot of money, have the long-term job security of ‘tenure’
MAYBE if you want to…
Teach/work with a large group of motivated young people, guide careers
Publish in academic journals and the equivalent, present at academic conferences
Prestige/public intellectual
Economics: Both theory and empirical/application, development economics, behavioral, preference and choice theory, macroeconomics, and more
Philosophy: The answers to ‘arcane’ philosophical questions are now driving very important decisions and uses of funds
Psychology and behavioral science, marketing: “How to get people to care”, “how to do messaging and how to measure the results”
Maths, statistics, computer science: AI-risk, data science, mathematics of uncertainty and forecasting, controlling technology, statistical experimental design…
Political science and international relations/area studies
Biology and neuroscience (RP just hired an entomologist): Wild animal welfare, animal sentience
Other sciences: Existential risks (natural and human)
Other humanities - some possibilities discussed here (considering ‘longtermism’)
EA tries to be analytically rigorous and to make decisions based on empirical evidence
There is a lot of money your research can move
There is money to get the research done (e.g., FHI, Open Phil, CEA, Center for Long Term risk…)
A diverse set of relevant research areas, …
Short of leaving academia, how can academics optimize their impact despite academia’s limitations? (Relatedly, which of the things academics tend to do are low impact and should be reconsidered?) +1
Response (summarizing my vocal answer):
Question: In academia, when research reaches a certain stage of development it can spin out into a start-up company; is there a similar/parallel route for research funded by EA? Response: As a participant reminded me, for charitable activities there is the Incubation Program - CHARITY ENTREPRENEURSHIP, and some other things mentioned below
An app was mentioned as a route for research insights being applied in the wider world. For funding the app, are there funding routes within EA, or would it eventually require funding from private equity/venture capital, and would this then distort the original aim of the app? (Answer: Charity Incubator / charityentrepeneurship.com / sparkwave.tech)
You talk about many fields and skills; is there a training programme or network you would suggest joining/following in order to contribute effectively?
Response: Can you clarify your question? There are definitely some workshops and trainings, such as those run by GPI. Here is a list of syllabi: https://docs.google.com/document/d/1kYNd7URm0TPClM4iDIM_cjtpsOK9Vp1_HDbR2tMzVTQ/edit
I’d like to be involved in creating more ‘research skills for global priorities/EA’ training myself, perhaps outside of academia
Can we get involved with Rethink Priorities part time? (Michal Strahilevitz) Response (summarizing my vocal answer): You could reach out to us to discuss your research and our work, definitely! Also, we have hired, and do hire, people part-time and on a project basis.