"A child's learning is the function more of the characteristics of his classmates than those of the teacher." James Coleman, 1972

Wednesday, February 29, 2012

If The Plan Is to Kill Us With Tests, Then We Will Kill the Tests

As Michael Winerip points out in his commentary below, the line has been drawn between those who believe that a 26-point margin of error is good enough to make high-stakes education policy decisions, and those who now ask Cuomo, Bloomberg, Duncan, and Gates: "Have you no shame?"

Join us in Occupying the DOE March 30-April 2.  Free sleeping space indoors for over 500 people.  This madness will not hold.  Info below from United Opt Out:

From the NYTimes.  My bolds:
Feb. 28, 2012, 11:18 a.m.
I’m delighted that the New York City Education Department has released its teacher data reports.
Finally, there are some solid numbers for judging teachers.

Using a complex mathematical formula, the department’s statisticians have calculated how much elementary and middle-school teachers’ students outpaced — or fell short of — expectations on annual standardized tests. They adjusted these calculations for 32 variables, including “whether a child was new to the city in pretest or post-test year” and “whether the child was retained in grade before pretest year.” This enabled them to assign each teacher a score of 1 to 100, representing how much value the teachers added to their students’ education.
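(For readers who want a concrete picture, here is a bare-bones sketch of the general recipe behind a value-added percentile, not the department's actual model: predict each student's score from a prior score and background covariates, then rank teachers by how far their students land above or below those predictions. The function name, the column names, and the short covariate list below are invented for illustration.)

```python
# A minimal sketch (not the DOE's actual model): predict each student's
# post-test score from prior score and background covariates, then rank
# teachers by their students' average residual. Column names and the
# covariate list are invented for illustration.
import numpy as np
import pandas as pd

def value_added_percentiles(df: pd.DataFrame, covariates: list) -> pd.Series:
    """Return an illustrative 1-100 value-added percentile per teacher."""
    # Design matrix: intercept plus the chosen student-level covariates.
    X = np.column_stack([np.ones(len(df))] +
                        [df[c].to_numpy(dtype=float) for c in covariates])
    y = df["post_test"].to_numpy(dtype=float)

    # "Expected" score for each student given the covariates (plain OLS).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ beta                      # actual minus expected

    # Average the residuals within each teacher, then map to a 1-100 scale.
    by_teacher = (pd.Series(residual, index=pd.Index(df["teacher_id"]))
                    .groupby(level=0).mean())
    return by_teacher.rank(pct=True).mul(99).add(1).round()

# Hypothetical usage: a handful of covariates standing in for the 32 variables.
# scores = value_added_percentiles(data,
#     ["pre_test", "new_to_city", "retained_before_pretest"])
```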

Then news organizations did their part by publishing the names of the teachers and their numbers. Miss Smith might seem to be a good teacher, but parents will know she’s a 23.

Some have complained that the numbers are imprecise, which is true, but there is no reason to be too alarmed — unless you are a New York City teacher.

For example, the margin of error is so wide that the average confidence interval around each rating for English spanned 53 percentiles. This means that if a teacher was rated a 40, she might actually be as dangerous as a 13.5 or as inspiring as a 66.5.

Think of it this way: Mayor Michael R. Bloomberg is seeking re-election and gives his pollsters $1 million to figure out how he’s doing. The pollsters come back and say, “Mr. Mayor, somewhere between 13.5 percent and 66.5 percent of the electorate prefer you.”
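(The arithmetic behind that example is just a 53-point band centered on the reported rating. A tiny sketch follows; the helper name is made up here, and the numbers are taken straight from the column.)

```python
# The arithmetic behind the example above: a 53-point confidence band
# centered on the reported rating, clipped to the 1-100 scale.
def confidence_band(rating, span=53.0):
    half = span / 2.0
    return max(1.0, rating - half), min(100.0, rating + half)

print(confidence_band(40))   # (13.5, 66.5): "as dangerous as a 13.5 or as inspiring as a 66.5"
```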

There are a few other teensy problems. The ratings date back to 2010. That was the year state education officials decided that their standardized test scores were so inflated and unreliable that they had to use their own complex mathematical formula to recalibrate. One minute 86 percent of state students were proficient in math, the next minute 61 percent were.

Albert Einstein once said, “Not everything that can be counted counts, and not everything that counts can be counted,” but it now appears that he was wrong.

Of course, no one would be foolish enough to think that people would judge a teacher based solely on a number like 37. As Shael Polakow-Suransky, the City Education Department’s No. 2 official, told reporters on Friday, “We would never invite anyone — parents, reporters, principals, teachers — to draw a conclusion based on this score alone.”

Within 24 hours The Daily News had published a front-page headline that read, “NYC’S Best and Worst Teachers.” And inside, on Page 4: “24 teachers stink, but 105 called great.”

The publication of the teacher data reports is a defining moment. A line has been drawn between those who say, “even bad data is better than no data,” and those who say, “Have you no shame?”

Arne Duncan, the federal education secretary, has been a dependable advocate for naming names. In 2010, when The Los Angeles Times printed a similar list of teachers and value-added scores, Mr. Duncan, sounding very much like Winston Churchill, declared, “Silence is not an option.”

The former New York schools chancellor, Joel I. Klein, a leader among educators who consider themselves data-driven, has continued that commitment even after leaving the job. He now works for Rupert Murdoch running Wireless Generation, a company that describes itself as “the leading provider of innovative education software, data systems and assessment tools.”

In 2010, Mr. Klein talked with the radio station WNYC about the importance of teacher data reports. “Any parent I know,” he said, “would rather see a teacher getting substantial value-add rather than negative value-add.”

It was a surprise to see how many data-driven people spoke against making the reports public. Bill Gates, who has spent billions of dollars to bring statistical study to school systems, wrote an Op-Ed page piece published in The New York Times last week that was headlined, “Shame Is Not the Solution.”

Merryl H. Tisch, the chancellor of the State Board of Regents, told The Times, “I believe the teachers will be right in feeling assaulted and compromised here.”

Even Dennis M. Walcott, the city’s current schools chancellor — who usually can be counted on to do what Mr. Klein did — sounded like a man with qualms. “I don’t want our teachers disparaged in any way, and I don’t want our teachers denigrated based on this information,” he said.

At first, when I heard that news organizations were going to publish the list, I was angry, but that has passed. Good has come of this. People have been forced to stop and think about how it would feel to be summed up as a 47, and then have the whole world told.

Michael Winerip writes the On Education column for The New York Times.

1 comment:

  1. Anonymous, 11:39 AM

    Based on the percentile range quoted in this article (the example above runs from roughly the 13th to the 66th percentile), a quick back-of-the-envelope calculation suggests that the reliability of this measure is approximately .40. That is shockingly low. A reliability coefficient should be .80 or above when important decisions about promotion, retention, or pay are involved. Whoever published this data should be ashamed. I would like other educational researchers to validate my quick computation.

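A note on the comment above: the commenter does not spell out the computation, so the sketch below is only one plausible reconstruction, using the classical test theory shortcut of treating the 53-percentile band as a multiple of the standard error of measurement on a uniform 1-100 percentile scale. The implied reliability swings widely depending on whether the band is read as a 68 percent or a 95 percent interval, which is all the more reason to take up the call for independent validation.

```python
# A hedged back-of-the-envelope check, under classical test theory.
# The commenter's exact method isn't stated; this treats the reported
# 53-percentile band as z standard errors of measurement on a uniform
# 1-100 percentile scale (SD of about 28.9) and shows how sensitive the
# implied reliability is to how the band is read.
import math

SPAN = 53.0                            # average confidence-interval width, in percentile points
SD_PERCENTILE = 100 / math.sqrt(12)    # SD of a uniform 0-100 percentile scale, ~28.9

for label, z in [("68% band (z = 1.00)", 1.00), ("95% band (z = 1.96)", 1.96)]:
    sem = (SPAN / 2) / z                           # implied standard error of measurement
    reliability = 1 - (sem / SD_PERCENTILE) ** 2   # classical test theory: r = 1 - SEM^2 / SD^2
    print(f"{label}: implied reliability ~ {reliability:.2f}")
```

Under these assumptions the implied reliability ranges from roughly .16 to .78, a spread that brackets the commenter's .40 and shows how much the answer turns on unstated assumptions.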