The first theme in this book: what is statistics, and what is it trying to do?

After a conceptual discussion, we go over some of the main approaches to statistics, inference, and causality.

Conceptual: approaches to statistics/inference and causality

2.1 ‘Learning and optimization’ as an alternative to statistical inference

In many real-world cases we use data and ‘statistics’ not to learn about the world for its own sake, but simply to make the ‘best’ decision.

These ‘decision optimization’ settings are closely related to ‘reinforcement learning,’ and in particular to ‘multi-armed bandit’ problems. In collecting data and planning our analysis and experimental interventions (if any), we need not be directly concerned with ‘statistical power’ or with hypothesis testing for its own sake. We might prefer to learn a little bit about each of many possible options, with a very low chance of finding a ‘statistically significant’ result for any one of them, over learning a lot about one or two particular options. See the ‘movie titles’ example discussed in section @ref{#lift-test}.
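To make the contrast concrete, here is a minimal epsilon-greedy bandit sketch (not from the text; all names, rates, and parameters are illustrative). Rather than running a fixed, well-powered test on one or two options, the procedure samples many options adaptively, concentrating trials on whichever looks best so far:

```python
import random

def epsilon_greedy(true_rates, n_rounds=10_000, epsilon=0.1, seed=0):
    """Allocate trials across options, mostly exploiting the best estimate so far.

    true_rates: hypothetical true success probabilities, one per option.
    Returns (total successes, number of trials given to each option).
    """
    rng = random.Random(seed)
    n = len(true_rates)
    counts = [0] * n       # trials per option
    successes = [0] * n    # observed successes per option
    total_reward = 0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: try a random option
        else:
            # exploit: option with the best observed rate so far
            # (untried options are treated optimistically, so each gets sampled)
            arm = max(range(n),
                      key=lambda i: successes[i] / counts[i] if counts[i] else 1.0)
        reward = rng.random() < true_rates[arm]  # Bernoulli outcome
        counts[arm] += 1
        successes[arm] += reward                 # bool counts as 0/1
        total_reward += reward
    return total_reward, counts
```

Note the design goal: the allocation of trials is chosen to maximize total reward, not to yield a tight confidence interval or a significant p-value for any particular option.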

2.2 Statistical inference

2.3 Bayesian vs. frequentist approaches

2.3.1 Interpretation of frequentist CIs (aside)
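The standard frequentist interpretation is a statement about the *procedure* under repeated sampling, not about any single realized interval: roughly 95% of the intervals the procedure generates will contain the fixed true parameter. A small simulation sketch (assumed normal data and a z-based interval; parameter names are illustrative) makes this concrete:

```python
import random
import statistics

def ci_coverage(true_mean=0.0, sigma=1.0, n=50, reps=2000, z=1.96, seed=1):
    """Fraction of repeated-sample 95% CIs that contain the true mean."""
    rng = random.Random(seed)
    covered = 0
    for _ in range(reps):
        sample = [rng.gauss(true_mean, sigma) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - z * se <= true_mean <= m + z * se:
            covered += 1  # this realized interval happened to cover the truth
    return covered / reps
```

Each realized interval either contains the true mean or it does not; the ‘95%’ describes the long-run coverage rate across repetitions, which the simulation recovers approximately.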

2.4 Causal vs. descriptive; ‘treatment effects’ and the potential outcomes causal model

2.4.1 DAGs and potential outcomes
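In the potential outcomes model each unit has two outcomes, Y(1) under treatment and Y(0) without it, but we only ever observe one of the pair. A small simulation sketch (illustrative numbers, not from the text) shows why randomization matters: a randomized comparison recovers the true average treatment effect, while a self-selected comparison does not:

```python
import random

def po_simulation(n=100_000, seed=2):
    """Compare randomized vs. self-selected difference-in-means estimates."""
    rng = random.Random(seed)
    # Each unit has two potential outcomes; the true effect is +2 for everyone.
    y0 = [rng.gauss(0, 1) for _ in range(n)]
    y1 = [y + 2 for y in y0]

    def diff_in_means(assign):
        treated = [y1[i] for i in range(n) if assign[i]]
        control = [y0[i] for i in range(n) if not assign[i]]
        return sum(treated) / len(treated) - sum(control) / len(control)

    # Randomized assignment: treatment independent of potential outcomes.
    randomized = diff_in_means([rng.random() < 0.5 for _ in range(n)])
    # Self-selection: units with higher y0 opt in, biasing the comparison upward.
    self_selected = diff_in_means([y > 0 for y in y0])
    return randomized, self_selected
```

The randomized estimate lands near the true effect of 2; the self-selected comparison mixes the treatment effect with the pre-existing difference between the groups.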

2.5 Theory, restrictions, and ‘structural vs reduced form’

2.6 ‘Hypothesis testing’

2.6.1 McElreath’s critique

2.6.2 Bayesian vs. frequentist hypothesis ‘testing’

2.6.3 Individual vs. joint hypothesis testing: what does it mean?

2.6.4 Other issues

(Mention, and link to the later discussion: issues of the overall false error rate/coverage under multiple hypothesis testing (MHT).)