Abstract
Chetty, Friedman, and Rockoff (2014a, b) study value-added (VA) measures of teacher effectiveness. CFR (2014a) exploits teacher switching as a quasi-experiment, concluding that student sorting creates negligible bias in VA scores. CFR (2014b) finds VA scores are useful proxies for teachers’ effects on students’ long-run outcomes. I successfully reproduce each study in North Carolina data. But I find that the quasi-experiment is invalid, as teacher switching is correlated with changes in student preparedness. Adjusting for this, I find moderate bias in VA scores, perhaps 10–35 percent as large in variance terms as teachers’ causal effects. Long-run results are sensitive to controls and cannot support strong conclusions.
Citation: Rothstein, Jesse. “Measuring the Impacts of Teachers: Comment.” American Economic Review 107(6): 1656–84, June 2017.