Rosenberg misstates the findings of the Calder Center’s “Portability of Teacher Effectiveness Across Schools,” by Zeyu Xu, Umut Ozek, and Matthew Corritore. The study uses value-added data from tens of thousands of elementary and secondary school teachers in North Carolina and Florida, and it finds, on average, “either no change or a slight increase in a teacher’s measured effectiveness after changing schools.”
She then goes overboard in arguing, “This research should muffle critics of high-stakes evaluation systems who argue that teachers in low-performing schools cannot reach the same student achievement gains as other teachers and are punished when student achievement is incorporated into a teacher’s evaluation.”
Actually, Xu et al. find the “pattern is most likely to be driven by regression to the mean and that it is not associated with school switches.” Their conclusion, thus, is not relevant to the policy concerns about value-added evaluations. Xu et al. also warn that their findings should not be over-generalized.
The Calder study finds that “high-performing teachers in the pre-move period tended to have lower value-added in the post-move period, whereas low-performing teachers in the pre-move period tended to have higher value-added in the post-move period.” In other words, it places an even greater burden of proof on those who believe that statistical models can disentangle estimates of individual teachers’ effectiveness from those of their schools’ effectiveness.
Xu et al. conclude, “A clear pattern emerges: Low performers tended to gain in value-added after a school move, and high performers tended to lose in value-added after a school move.” Consequently, “the average ‘move effect’ found among all teachers who changed schools could simply be driven by the proportion of movers who were high performers relative to the proportion of lower-performing movers.”
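That pattern is exactly what regression to the mean predicts, and a toy simulation makes the point concrete. The sketch below is purely illustrative (it uses made-up noise levels, not the Calder data): each teacher’s true effectiveness is held fixed, yet sorting teachers by a noisy pre-move estimate makes “high performers” fall and “low performers” rise on a second, equally noisy measurement.

```python
# Hypothetical illustration, NOT the Calder data: regression to the mean
# alone produces the Xu et al. pattern even when no teacher actually changes.
import random

random.seed(1)

N = 10_000
teachers = []
for _ in range(N):
    true_effect = random.gauss(0, 0.10)        # stable teacher quality (sd chosen arbitrarily)
    pre = true_effect + random.gauss(0, 0.15)  # noisy pre-move value-added estimate
    post = true_effect + random.gauss(0, 0.15) # independent, equally noisy post-move estimate
    teachers.append((pre, post))

# Classify teachers by their PRE-move estimate, as the study's labels do.
teachers.sort(key=lambda t: t[0])
low, high = teachers[: N // 4], teachers[-N // 4:]  # bottom and top quartiles

def mean(xs):
    return sum(xs) / len(xs)

print("low performers:  pre %.3f -> post %.3f"
      % (mean([p for p, _ in low]), mean([q for _, q in low])))
print("high performers: pre %.3f -> post %.3f"
      % (mean([p for p, _ in high]), mean([q for _, q in high])))
# The low group's average rises toward zero and the high group's falls,
# although every teacher's true effectiveness never moved at all.
```

Because the extreme pre-move scores are partly luck, the luck does not repeat, and the second measurement drifts back toward each teacher’s true (unchanged) ability. This is why the “move effect” can appear without any causal effect of moving.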
Let’s just discuss the subset of Reading teachers with at least two years of pre-move data. In North Carolina, for instance, “high-performing” secondary teachers saw a drop of 0.037 standard deviations when moving to a higher-poverty school. Their “low-performing” counterparts saw an increase of 0.08 standard deviations when moving to a lower-performing school.
Let me be clear. Had the Calder scholars found evidence against my positions, I hope that I would have faced those facts, and not tried to shoehorn them into my predetermined opinions.
So, I want to do the opposite of Rosenberg and other “reformers.” I must emphasize that despite the large size of the Calder study, only a small number of teachers provided evidence relevant to policy discussions about value-added evaluations. It is unusual for high-performing teachers to leave their high-performing schools for low-performing or high-poverty schools, and many of those who do move go to high-poverty, high-performing selective schools. Calder crunched data for 13,456 secondary Reading teachers in two states, yet only 109 shifted to a higher-poverty school!
Of course, Rosenberg should have realized that the rarity of teachers like Mr. Keating making such a move argues against her position. It is not a good idea to try out risky and untried policy gambles just because they seem like good ideas to some "reformers."