"A child's learning is the function more of the characteristics of his classmates than those of the teacher." James Coleman, 1972

Thursday, June 12, 2014

Karen Lewis points to the corporate boot-licking study by Raj Chetty that speculates on how much income would be added if the worst teachers (based on VAM, hah!) were replaced by average teachers. The Chetty study was used as the principal documentation to support the court decision in California to roll back teacher tenure. Read excerpts from Bruce Baker's takedown of the Chetty study below.

Thursday, April 10, 2014
The Raj Chetty VAM Balloon Gets Popped
From NEPC:
Contact:
William J. Mathis, (802) 383-0058, wmathis@sover.net
Moshe Adler, (917) 453-4921, ma820@columbia.edu
URL for this press release: http://tinyurl.com/kpwt6ft
BOULDER, CO (April 10, 2014) – A highly influential but non-peer-reviewed report on teacher impact suffers from a series of errors in methodology and calculations, according to a new review published today.

Professor Moshe Adler reviewed two recent reports released in September 2013 as National Bureau of Economic Research working papers. Dr. Adler’s review for the Think Twice think tank review project is published today by the National Education Policy Center, housed at the University of Colorado Boulder School of Education.
Adler is an economist affiliated with both Columbia’s Urban Planning Department and the Harry Van Arsdale Jr. Center for Labor Studies at Empire State College, SUNY, and the author of Economics for the Rest of Us: Debunking the Science That Makes Life Dismal (New Press, 2010).
Adler reviewed Measuring the Impact of Teachers, parts I and II, written by Raj Chetty, John N. Friedman, and Jonah E. Rockoff. Part I is subtitled Evaluating Bias in Teacher Value-Added Estimates, and Part II, Teacher Value-Added and Student Outcomes in Adulthood. Taken together, the two-part report asserts that students whose teachers have higher value-added scores achieve greater economic success later in life.

The documents (as is standard for NBER working papers) were not peer-reviewed, yet as Adler points out, the research on which they were based has gained extraordinary attention – turning up as references in President Obama’s 2012 State of the Union address, in expert court testimony by the principal author (Chetty), in extensive news coverage, and even as a justification for Chetty’s 2012 MacArthur Foundation “genius” award.

That sort of credibility, Adler suggests in his review, may not be warranted – as demonstrated by a series of problems that he finds with the new two-part report and the research that undergirds it.
The report’s own results reveal that calculating teacher value-added is unreliable, Adler writes. Additionally, the report includes a result that contradicts the central claim; it relies on an erroneous calculation to support a favorable result; and it assumes that the miscalculated result holds across students’ lifetimes – “despite the authors’ own research indicating otherwise,” the reviewer notes.

Finally, Adler explains, the studies relied on by the report as support for its methodology don’t actually provide that support.

“Despite widespread references to this study in policy circles, the shortcomings and shaky extrapolations make this report misleading and unreliable for determining educational policy,” Adler concludes.
Monday, January 09, 2012
Chetty Study: "simpleminded, technocratic, dehumanizing and disturbing"
The heart of Bruce Baker's first review of the corporate boot-licking conclusions by Chetty & Co. (his first big point is illustrated with a small simulation sketch after the excerpt):
. . . . The authors find that teacher value added scores in their historical data set vary. No surprise. And they find that those variations are correlated to some extent with “other stuff” including income later in life and having reported dependents for females at a young age. There’s plenty more.
These are interesting findings. It’s a really cool academic study. It’s a freakin’ amazing data set! But these findings cannot be immediately translated into what the headlines have suggested – that immediate use of value-added metrics to reshape the teacher workforce can lift the economy, and increase wages across the board! The headlines and media spin have been dreadfully overstated and deceptive. Other headlines and editorial commentary has been simply ignorant and irresponsible. (No Mr. Moran, this one study did not, does not, cannot negate the vast array of concerns that have been raised about using value-added estimates as blunt, heavily weighted instruments in personnel policy in school systems.)
My 2 Big Points
First and perhaps most importantly, just because teacher VA scores in a massive data set show variance does not mean that we can identify with any level of precision or accuracy, which individual teachers (plucking single points from a massive scatterplot) are “good” and which are “bad.” Therein exists one of the major fallacies of moving from large scale econometric analysis to micro level human resource management.
Second, much of the spin has been on the implications of this study for immediate personnel actions. Here, two of the authors of the study bear some responsibility for feeding the media misguided interpretations. As one of the study’s authors noted:
“The message is to fire people sooner rather than later,” Professor Friedman said. (NY Times)

This statement is not justified from what this study actually tested/evaluated and ultimately found. Why? Because this study did not test whether adopting a sweeping policy of statistically based “teacher deselection” would actually lead to increased likelihood of students going to college (a half of one percent increase) or increased lifelong earnings. Rather, this study showed retrospectively that students who happened to be in classrooms that gained more, seemed to have a slightly higher likelihood of going to college and slightly higher annual earnings. From that finding, the authors extrapolate that if we were to simply replace bad teachers with average ones, the lifetime earnings of a classroom full of students would increase by $266k in 2010 dollars. This extrapolation may inform policy or future research, but should not be viewed as an absolute determinant of best immediate policy action.
This statement is equally unjustified:
Professor Chetty acknowledged, “Of course there are going to be mistakes — teachers who get fired who do not deserve to get fired.” But he said that using value-added scores would lead to fewer mistakes, not more. (NY Times)

It is unjustified because the measurement of “fewer mistakes” is not compared against a legitimate, established counterfactual – an actual alternative policy. Fewer mistakes than by what method? Is Chetty arguing that if you measure teacher performance by value-added and then dismiss on the basis of low value-added, you will have selected on the basis of value-added? Really? No kidding! That is, you will have dumped more low value-added teachers than you would have (since you selected on that basis) if you had randomly dumped teachers? That’s not a particularly useful insight if the value-added measures weren’t a good indicator of true teacher effectiveness to begin with. And we don’t know, from this study, if other measures of teacher effectiveness might have been equally correlated with reduced pregnancy, college attendance or earnings.
These two quotes by authors of the study were unnecessary and inappropriate. Perhaps it’s just how NYT spun it… or simply what the reporter latched on to. I’ve been there. But these quotes in my view undermine a study that has a lot of interesting stuff and cool data embedded within.
These quotes are unfortunately illustrative of the most egregiously simpleminded, technocratic, dehumanizing and disturbing thinking about how to “fix” teacher quality. . . . .
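To put Baker's first big point in concrete terms, here is a minimal simulation sketch. It is not drawn from the Chetty report; the number of teachers, the spread of true teacher effects, the amount of estimation noise, and the 5% dismissal cut-off are all illustrative assumptions. It shows why variance in value-added scores across a huge data set does not translate into reliable identification of individual "bad" teachers, and why beating random dismissal is a weak standard of "fewer mistakes" when the scores themselves are noisy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions -- none of these numbers come from the Chetty report.
n_teachers = 100_000   # size of the "massive data set"
sd_true = 0.10         # spread of true teacher effects (test-score SD units)
sd_noise = 0.20        # estimation noise in a single-year value-added score
cut = 0.05             # dismiss the bottom 5% by measured value-added

true_effect = rng.normal(0.0, sd_true, n_teachers)
measured_va = true_effect + rng.normal(0.0, sd_noise, n_teachers)

# Population-level variance in value-added is plainly there...
print("variance of measured VA:", measured_va.var())

# ...but ranking individuals on the noisy measure misclassifies many of them.
k = int(cut * n_teachers)
flagged = np.argsort(measured_va)[:k]        # bottom 5% by measured VA
truly_bottom = np.argsort(true_effect)[:k]   # bottom 5% by true effect
hit_rate = len(np.intersect1d(flagged, truly_bottom)) / k
print(f"share of flagged teachers truly in the bottom 5%: {hit_rate:.2f}")

# "Fewer mistakes than random" is nearly tautological: selecting on measured VA
# removes teachers with a lower average true effect than a random draw would...
random_pick = rng.choice(n_teachers, size=k, replace=False)
print("mean true effect, flagged by VA:", round(true_effect[flagged].mean(), 3))
print("mean true effect, random picks :", round(true_effect[random_pick].mean(), 3))
# ...but that says nothing about whether the noisy measure is a good indicator
# of true effectiveness, or better than other available measures would be.
```

With the parameters assumed here, only a minority of the teachers flagged by measured value-added actually fall in the true bottom 5%, even though the flagged group does average a lower true effect than a random group, which is exactly the near-tautology Baker describes.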