1
Introduction
Basic statistical approaches and frameworks
Regression and control approaches, robustness
Causal inference through observation
Causal paths and levels of aggregation
Experiments and surveys: design and analysis
Other approaches, techniques, and applications
Some key resources and references
2
BASIC STATISTICAL APPROACHES AND FRAMEWORKS
2.1
‘Learning and optimization’ as an alternative to statistical inference
2.2
Statistical inference
2.3
Bayesian vs. frequentist approaches
2.3.1
Interpretation of frequentist CIs (aside)
2.4
Causal vs. descriptive; ‘treatment effects’ and the potential outcomes causal model
2.4.1
DAGs and Potential outcomes
2.5
Theory, restrictions, and ‘structural vs reduced form’
2.6
‘Hypothesis testing’
2.6.1
McElreath’s critique
2.6.2
Bayesian vs. frequentist hypothesis ‘testing’
2.6.3
Individual vs. joint hypothesis testing: what does it mean?
2.6.4
Other issues
3
Hypothesis testing, statistical comparisons and inferences
3.1
Frequentist
3.1.1
Parametric
3.1.2
Nonparametric
3.2
Randomization and permutation-based
3.3
Bayesian and hybrid
3.3.1
Bayes factor – what is it, what can it do?
3.4
Packages: The “infer” package in R
Overview
3.4.1
specify(): Specifying response (and explanatory) variables
REGRESSION AND CONTROL APPROACHES, ROBUSTNESS
4
Basic statistical inference and regressions: Common mistakes and issues
4.1
Basic regression and statistical inference: Common mistakes and issues briefly listed
4.1.1
Bad control
4.1.2
Does ‘controlling for more’ increase the probability that a (controlled) difference between two groups represents a causal effect? Not really (Informal discussion in fold)
4.1.3
“Bad control” (“colliders”)
4.1.4
Choices of lhs and rhs variables
4.1.5
Functional form
4.1.6
OLS and heterogeneity
4.1.7
“Null effects”
4.1.8
Multivariate tests and ‘tests for non-independence’
4.1.9
Multiple hypothesis testing (MHT)
4.1.10
Interaction terms and pitfalls
4.1.11
Choice of test statistics (including nonparametric)
4.1.12
How to display and write about regression results and tests
4.1.13
Bayesian interpretations of results
4.2
Aside: effect and contrast coding of categorical variables
5
Robustness and diagnostics, with integrity; Open Science resources
5.1
(How) can diagnostic tests make sense? Where is the burden of proof?
5.1.1
Further discussion: the DiD approach and ‘parallel trends’
5.2
Estimating standard errors
5.3
Sensitivity analysis: Interactive presentation
5.4
Supplement: open science resources, tools and considerations
5.5
Diagnosing p-hacking and publication bias (see also meta-analysis)
5.5.1
Publication bias – see also considering publication bias in meta-analysis
5.6
Multiple hypothesis testing - see above
6
Control strategies and prediction, Machine Learning (Statistical Learning) approaches
6.1
See also “notes on Data Science for Business”
6.1.1
Limitations to inference from learning approaches
6.1.2
Tree models
6.2
Notes on Hastie: Statistical Learning with Sparsity
6.2.1
Introduction
6.2.2
Ch2: Lasso for linear models
6.2.3
Chapter 3: Generalized linear models
6.2.4
Chapter 4: Generalizations of the Lasso penalty
6.3
Notes: Mullainathan
CAUSAL INFERENCE THROUGH OBSERVATION
7
Causal inference: IV (instrumental variables) and its limitations
Some casual discussion
7.1
Instrument validity
7.2
Heterogeneity and LATE
7.3
Weak instruments, other issues
7.4
Instrumenting Interactions
7.5
Reference to the use of IV in experiments/mediation
8
Causal inference: Other paths to observational identification
8.1
Fixed effects and differencing
8.2
DiD
8.3
RD
8.4
Time-series-ish panel approaches to micro
8.4.1
Lagged dependent variable and fixed effects -> ‘Nickell bias’
8.4.2
Age-period-cohort effects
CAUSAL PATHS AND LEVELS OF AGGREGATION
9
Mediation modeling and its massive limitations
9.1
Mediators (and selection and Roy models): a review, considering two research applications
9.2
DR initial thoughts (for NL education paper)
9.3
Econometric Mediation Analyses (Heckman and Pinto)
Relevance to Parey et al
9.3.1
Summary and key modeling
9.3.2
Common assumptions and their implications
9.4
Pinto (2015), Selection Bias in a Controlled Experiment: The Case of Moving to Opportunity
Summary
Relevance to Parey et al
Introduction
Identification strategy brief
Results in brief
Framework: first for binary/binary (simplification)
Framework for MTO multiple treatment groups, multiple choices
9.5
Antonakis approaches
10
Selection, corners, hurdles, and ‘conditional on’ estimates
10.1
‘Corner solution’ or hurdle variables and ‘Conditional on Positive’
10.2
Bounding approaches (Lee, Manski, etc)
10.2.1
Notes: Training, Wages, and Sample Selection: Estimating Sharp Bounds on Treatment Effects, David Lee, 2009, RESTUD
11
Multi-level models
11.1
Introduction (Qstep)
11.2
Some basic theory
11.2.1
Level 1 model
11.2.2
Level 2
11.2.3
Alternative/Naive approaches
11.2.4
‘old way’: two-stage regression
11.2.5
How many higher-level units do you need?
11.3
Fitting mlm in practice
11.4
“Stimuli” (treatments) as a random factor
EXPERIMENTS AND SURVEYS: DESIGN AND ANALYSIS
12
Survey design and implementation; analysis of survey data
12.1
Survey sampling/intake
Probability sampling
Non-probability sampling
12.2
Case: Surveying an unmeasured and rare population surrounding a ‘social movement’
Background and setup
Our ‘convenience’ method; issues, alternatives
Our methodological questions
12.2.1
Sketched model and approach: Bayesian inference/updating for estimating demographics and attitudes of a rare/hidden population
13
Experimental design: Identifying meaningful and useful (causal) relationships and parameters
13.1
Why run an experiment or study?
13.1.1
Sitzia and Sugden on what theoretically driven experiments can and should do
13.2
Causal channels and identification
13.3
Types of experiments, ‘demand effects’ and more artifacts of artificial setups
13.4
Within vs between-subject designs
13.5
Generalizability (and heterogeneity)
14
Robust experimental design: pre-registration and efficient assignment of treatments
14.1
Pre-registration and Pre-analysis plans
14.1.1
The benefits and costs of pre-registration: a typical discussion
14.1.2
The hazards of specification-searching
14.2
Designs for decision-making
14.2.1
Notes on Bandit vs Exploration problems/Thompson vs Exploration sampling
Sequential
14.2.2
Adaptive
14.3
Efficient assignment of treatments
14.3.1
See also multiple hypothesis testing
14.3.2
How many treatment arms can you ‘afford?’
14.3.3
Other notes and resources
15
(Ex-ante) Power calculations for (Experimental) study design
15.1
What is the point of doing a ‘power analysis’ or ‘power calculations?’
15.1.1
What are the practical benefits of doing a power analysis
15.2
Key ingredients for doing a power analysis (and designing an experimental study in light of this)
15.3
The ‘harm to science’ from running underpowered studies
15.4
Power calculations without real data
15.5
Power calculations using prior data
15.5.1
From Reinstein upcoming experiment preregistration
15.6
Digression: Power calculations/optimal sample size for ‘lift’ in a ranking case
15.6.1
Design: Which questions to ask the audience about the proposed titles, and in what order?
Which statistical test(s)/analyses to run (if any) and what measures to report?
How to assign the ‘treatments,’ and how large a sample is optimal, considering ‘power’ (or ‘lift’)?
15.7
Survey design digression: sample size for a “precise estimate of a ‘population parameter’” (focus: mean of a Likert scale response)
15.7.1
How to measure and consider the precision of Likert-item responses
15.7.2
Computing sample size to achieve this precision
16
‘Experimetrics’ and measurement of treatment effects from RCTs
16.1
Which error structure? Random effects?
16.2
Randomization inference?
16.3
Parametric and nonparametric tests of simple hypotheses
16.3.1
Parametric tests
16.3.2
Non-parametric tests
16.4
Adjustments for exogenous (but non-random) treatment assignment
16.5
IV in an experimental context to get at ‘mediators?’
16.6
Heterogeneity in an experimental context
16.7
Incorporate above: Notes on “The econometrics of randomised experiments” (Athey and Imbens)
16.7.1
Abstract and intro
16.7.2
Randomised experiments and validity
16.7.3
Potential outcomes/ Rubin causal model framework (covered earlier)
16.7.4
3.2 Classification of assignment mechanisms
16.7.5
The analysis of Completely randomized experiments
16.7.6
Randomization inference for Average treatment effects
16.7.7
Quantile treatment effect (Infinite population context)
16.7.8
Covariates (if not stratified) in completely randomized experiments
16.7.9
Randomization inference and regression estimators
16.7.10
Regression Estimators with Additional Covariates [DR: seems important]
16.7.11
Stratified randomized experiments: analysis
16.7.12
7 The Design of randomised experiments and the benefits of stratification
16.7.13
7.1 Power calculations
16.7.14
Stratified randomized experiments: Benefits
16.7.15
Re-randomization
16.7.16
Analysis of Clustered Randomised Experiments
16.7.17
Noncompliance in randomized experiments (DR: Relevant to NL lottery, not to charity experiments)
16.7.18
Heterogenous Treatment Effects and Pretreatment Variables
16.7.19
Data-driven Subgroup Analysis: Recursive Partitioning for Treatment Effects
16.7.20
10.3.2 Non-Parametric Estimation of Treatment Effect Heterogeneity
16.7.21
10.3.3 Treatment Effect Heterogeneity Using Regularized Regression
16.7.22
10.3.4 Comparison of Methods
OTHER APPROACHES, TECHNIQUES, AND APPLICATIONS
17
Boiling down: Construct validation/reliability, dimension reduction, factor analysis, and Psychometrics
17.1
Constructs and construct validation and reliability
17.1.1
Validity: general discussion
17.1.2
Reliability: general discussion
17.1.3
(raykovMetaanalysisScaleReliability2013?)
17.2
Factor analysis and principal-component analysis
17.3
Other
18
Meta-analysis and combining studies: Making inferences from previous work
18.1
Introduction
18.2
An overview of meta-analysis, from Christensen et al 2019, ch 5, ‘Using all evidence, registration and meta-analysis’
18.2.1
The origins [and importance] of study [pre-]registration
18.2.2
Social science study registries
18.2.3
Meta-analysis
18.2.4
Combining estimates
18.2.5
Heterogeneous estimates…
18.3
Excerpts and notes from ‘Doing Meta-Analysis in R: A Hands-on Guide’ (Harrer et al)
18.3.1
Pooling effect sizes
18.3.2
Bayesian Meta-analysis
18.3.3
Forest plots
18.4
Dealing with publication bias
18.4.1
Diagnosis and responses: P-curves, funnel plots, adjustments
18.5
Other notes, links, and commentary
18.6
Other resources and tools
18.6.1
Institutional and systematic guidelines
18.7
Example: discussion of meta-analyses of the Paleolithic diet
19
Bayesian approaches
19.1
My (David Reinstein’s) uses for Bayesian approaches (brainstorm)
19.1.1
Meta-analysis of previous evidence
19.1.2
Inference, particularly about ‘null effects’
19.1.3
‘Policy’ and business implications and recommendations
19.1.4
Theory-driven inference about optimizing agents, esp. in strategic settings
19.1.5
Experimental design
19.2
‘Statistical Rethinking’ (McElreath) and AJ Kurtz ‘recoded’ (bookdown): highlights and notes
19.2.1
1. The Golem of Prague (map and the territory)
19.2.2
2. Small Worlds and Large Worlds
19.3
Title: “Introduction to Bayesian analysis in R and Stata - Katz, Qstep”
19.3.1
Why and when use Bayesian (MCMC) methods?
19.3.2
Theory
19.3.3
Comparing models … Equivalent of ‘likelihood’
19.3.4
On choosing priors
19.3.5
Implementation
19.3.6
Generate predictions from a WinBUGS model
19.3.7
Missing data case
19.3.8
Stata
19.3.9
R mcmc package
19.4
Other resources and notes to integrate
20
Notes on Data Science for Business by Foster Provost and Tom Fawcett (2013)
20.1
Evaluation of this resource
Ch 1 Introduction: Data-Analytic Thinking
Example: During Hurricane Frances… predicting demand to gear inventory and avoid shortages … led to huge profits for Wal-Mart
Example: Predicting Customer Churn
20.1.1
Data Science, Engineering, and Data-Driven Decision Making
20.1.2
Data Processing and “Big Data”
20.1.3
Data and Data Science Capability as a
Strategic Asset
20.1.4
Data-Analytic Thinking
20.1.5
Data Mining and Data Science, Revisited
20.2
Ch 2 Business Problems and Data Science Solutions
20.2.1
Types of problems and approaches
20.2.2
The Data Mining Process
20.3
Ch 3: Introduction to Predictive Modeling: From Correlation to Supervised Segmentation
20.3.1
Models, Induction, and Prediction
20.3.2
Supervised Segmentation
20.3.3
Summary
20.3.4
NOTE – check if there is a gap here
20.4
Ch. 4: Fitting a Model to Data
20.4.1
Classification via Mathematical Functions
20.4.2
Regression via Mathematical Functions
20.4.3
Class Probability Estimation and Logistic Regression
20.4.4
Logistic Regression: Some Technical Details
20.4.5
Example: Logistic Regression versus Tree Induction
20.4.6
Nonlinear Functions, Support Vector Machines, and Neural Networks
The two most common families of techniques based on fitting the parameters of complex, nonlinear functions are nonlinear support vector machines and neural networks.
20.5
Ch 5: Overfitting and its avoidance
20.5.1
Generalization
20.5.2
Holdout Data and Fitting Graphs
20.5.3
Example: Overfitting Linear Functions
20.5.4
Example: Why Is Overfitting Bad?
20.5.5
From Holdout Evaluation to Cross-Validation
20.5.6
Learning Curves
20.5.7
Avoiding Overfitting with Tree Induction
20.5.8
A General Method for Avoiding Overfitting
20.5.9
A General Method for Avoiding Overfitting
20.5.10
Avoiding Overfitting for Parameter Optimization
20.6
Ch 6: Similarity, Neighbors, and Clusters
20.6.1
Similarity and Distance
20.6.2
Similarity and Distance
20.6.3
Example: Whiskey Analytics
20.6.4
Nearest Neighbors for Predictive Modeling
20.6.5
How Many Neighbors and How Much Influence?
20.6.6
Geometric Interpretation, Overfitting, and Complexity Control
20.6.7
Issues with Nearest-Neighbor Methods
20.6.8
Other Distance Functions
20.6.9
Stepping Back: Solving a Business Problem Versus Data Exploration
20.6.10
Summary
20.7
Ch. 7. Decision Analytic Thinking I: What Is a Good Model?
20.7.1
Evaluating Classifiers
20.7.2
The Confusion Matrix
20.7.3
Problems with Unbalanced Classes
20.7.4
Generalizing Beyond Classification
20.7.5
A Key Analytical Framework: Expected Value
20.7.6
Using Expected Value to Frame Classifier Use
20.7.7
Using Expected Value to Frame Classifier Evaluation
20.7.8
Evaluation, Baseline Performance, and Implications for Investments in Data
20.7.9
Summary
20.7.10
Ranking Instead of Classifying
20.7.11
Profit Curves
20.8
Contents and consideration
Meta-analysis arbitrary example: the ‘Paleo diet’
20.9
Conceptual: Thoughts on nutritional studies and meta-analysis issues
20.9.1
Limited compliance; ‘what are we aiming to measure and why?’
20.9.2
Control group: what is being measured?
20.9.3
What is being tested and how broadly should we interpret the results?
20.10
Manheimer et al
20.10.1
Strengths and limitations
20.10.2
Overall results, interpretation, consideration of evidence presented in Manheimer et al. (2015)
20.10.3
My rough conclusions from Manheimer et al
20.10.4
External critiques and evaluations of Manheimer et al (esp. Fenton), and the authors’ response
20.11
Other meta-analyses and consideration of the Paleo diet
20.11.1
Process of finding relevant work (informal)
20.12
Focus: Boers et al
20.13
Overall analysis
20.13.1
Limitations and uncertainties to my own analysis; proposed future steps
21
Getting, cleaning and using data
21.1
Data: What/why/where/how
21.2
Organizing a project
21.3
Dynamic documents (esp Rmd/bookdown)
21.3.1
Managing references/citations
21.3.2
An example of dynamic code
21.4
Project management tools, esp. Git/Github
21.5
Good coding practices
21.5.1
New tools and approaches to data (esp ‘tidyverse’)
21.5.2
Style and consistency
21.5.3
Using functions, variable lists, etc., for clean, concise, readable code
21.5.4
Mapping over lists to produce results
21.5.5
Building results based on ‘lists of filters’ of the data set
21.5.6
Coding style and indenting in Stata (one approach)
21.6
Additional tips (integrate)
Some key points from R for data science (see my hypothesis notes)
21.6.1
From a named list
21.6.2
List to vector
21.6.3
Unnesting
21.7
Making tidy data with broom
22
List of references
Statistics, econometrics, experiment and survey methods, data science: Notes