Monday, January 24, 2022

How to Learn Nothing from the Failure of VAM-Based Teacher Evaluation

The Annenberg Institute for School Reform is a most exclusive academic club, lavishly funded and outfitted at Brown U. for the advancement of corporate education in America.

The Institute is headed by Susanna Loeb, who has a whole slew of degrees from prestigious universities, none of which has anything to do with the science and art of schooling, teaching, or learning.  

Researchers at the Institute are circulating a working paper that, at first glance, might suggest that school reformers have learned something from the failure of teacher evaluation based on value-added models (VAM) applied to student test scores. The abstract:

Starting in 2009, the U.S. public education system undertook a massive effort to institute new high-stakes teacher evaluation systems. We examine the effects of these reforms on student achievement and attainment at a national scale by exploiting the staggered timing of implementation across states. We find precisely estimated null effects, on average, that rule out impacts as small as 1.5 percent of a standard deviation for achievement and 1 percentage point for high school graduation and college enrollment. We also find little evidence of heterogeneous effects across an index measuring system design rigor, specific design features, and district characteristics.
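
For readers who want the mechanics behind "exploiting the staggered timing of implementation across states": the abstract does not spell out the estimator, but studies of this kind typically fit a two-way fixed-effects (difference-in-differences) model along roughly these lines, where the notation here is illustrative and not taken from the paper:

    Y_{ist} = \beta \cdot Reform_{st} + \gamma_s + \delta_t + \varepsilon_{ist}

Here Y_{ist} is an outcome (test score, graduation, enrollment) for student i in state s in year t; Reform_{st} switches from 0 to 1 once state s has implemented its new evaluation system; and \gamma_s and \delta_t are state and year fixed effects. The coefficient \beta is the average reform effect, which the authors report as a precisely estimated zero.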

So could the national failure of VAM-based teacher evaluation translate into less brutalization of teachers and less wasted student learning time, both of which have followed from VAM's implementation since 2009? No such luck.
The paper's conclusion, in fact, makes clear that the Annenbergers blame the failure of corporate accountability measures (VAM) to raise test scores on laggard states and districts that did not adhere strictly to VAM's mad methods. In short, the corporate-led failure of VAM in education happened because schools were not corporate enough:

Firms in the private sector often fail to implement best management practices and performance evaluation systems because of imperfectly competitive markets and the costs of implementing such policies and practices (Bloom and Van Reenen 2007). These same factors are likely to have influenced the design and implementation of teacher evaluation reforms. Unlike firms in a perfectly competitive market with incentives to implement management and evaluation systems that increase productivity, school districts and states face less competitive pressure to innovate. Similarly, adopting evaluation systems like the one implemented in Washington D.C. requires a significant investment of time, money, and political capital. Many states may have believed that the costs of these investments outweighed the benefits. Consequently, the evaluation systems adopted by many states were not meaningfully different from the status quo and subsequently failed to improve student outcomes.

So the Gates-Duncan Race to the Top (RTTT) corporate plan for teacher evaluation failed not because it was a corporate model but because it was not corporate enough! In other words, there were far too many small carrots and not enough big sticks.
 
