The OES SOP
Overview
Purposes of this document
Nature and limitations of this document
We (mostly) focus on randomized field experiments.
We present examples using R.
Structure of the document
Help us improve our work!
About this document
1 Statistical and causal inference for policy change
2 Basics of Experimental Design and Analysis
2.1 Statistical Power: Designing studies that effectively distinguish signal from noise
2.2 Error Rates of Tests
2.3 Bias in Estimators
3 Design-Based Principles of Statistical Inference
3.1 An example using simulated data
3.1.1 How do we calculate randomization-based standard errors?
3.2 Summary: What does a design-based approach mean for policy evaluation?
4 Randomization and Design
4.1 Coin-flipping randomization versus urn-drawing randomization
4.2 Urn-Drawing or Complete Randomization into 2 or more groups
4.3 Factorial Designs
4.4 Block Random Assignment
4.4.1 Using only a few covariates to create blocks
4.4.2 Multivariate blocking using many covariates
4.5 Cluster random assignment
4.6 Other designs
4.7 Randomization assessment
4.7.1 What to do with “failed” randomization assessments?
4.7.2 How to minimize large chance departures from randomization?
5 Analysis Choices
5.1 Completely or Urn-Draw Randomized Trials
5.1.1 Two arms
5.1.2 Multiple arms
5.1.3 Multiple Outcomes
5.2 Covariance Adjustment (the use of background information to increase precision)
5.2.1 Intuition about bias in the least squares estimator of the ATE with covariates
5.2.2 The Lin Approach to Covariance Adjustment
5.2.3 The Rosenbaum Approach to Covariance Adjustment
5.3 How to choose covariates for covariance adjustment?
5.4 Block-randomized trials
5.4.1 Testing the null of no effects with binary outcomes and block randomization: the Cochran-Mantel-Haenszel (CMH) test for K × 2 × 2 tables
5.4.2 Estimating an overall Average Treatment Effect
5.5 Cluster-randomized trials
5.5.1 Bias when cluster size is correlated with potential outcomes
5.5.2 Incorrect false positive rates from tests and confidence intervals
6 Procedures when we have missing data
6.1 Missing Independent of Potential Outcomes (MIPO)
6.2 Missing Independent of Potential Outcomes Given X (MIPO|X)
6.3 Bounds
6.4 Sensitivity Analyses
7 Observational Studies (Policy Evaluations without Randomization)
7.1 What justifies statistical inference in an observational study?
7.2 Approaches to making the case for an as-if-randomized comparison
7.2.1 Synthetic Control Methods
7.2.2 Regression Discontinuity Designs
7.2.3 Matching
7.2.4 Regression adjustment
8 Power Analysis
8.1 An example of the off-the-shelf approach
8.2 An example of the simulation approach
8.3 When to use which approach
8.4 Additional examples of the simulation approach
8.4.1 A two-by-two design with interaction
8.4.2 Covariate adjustment with the Lin estimator
8.4.3 Incorporating DeclareDesign into OES Power Tools
9 Other Topics
9.1 Non- or Partial Compliance and the Local Average Treatment Effect
9.2 Overly influential points
9.3 Issues with the field work or generation of random numbers
9.4 How to choose covariates for covariance adjustment
10 Working with data
10.1 General questions we ask of a data set
11 Glossary of Terms
12 Appendix
References
Published by the OES
OES Standard Operating Procedures for the Design and Statistical Analysis of Experiments.
Chapter 11
Glossary of Terms
Average treatment effect (ATE)