"A child's learning is the function more of the characteristics of his classmates than those of the teacher." James Coleman, 1972

Wednesday, August 18, 2010

More VAM Dope and the Dopes at the LA Times

Part of a great piece by Louis Freedberg at California Watch:
Researchers I talked with tell me that if this had been an academic study, the researchers would never have been given permission under human subject research guidelines to disclose the names of teachers.

Jennifer Imazeki, an economist at San Diego State University, wrote on John Fensterwald's The Educated Guess: 

Regardless of how one feels about value-added, as a researcher, I've been shocked at the public disclosure of teachers' names. Most researchers have to sign their lives away in confidentiality agreements if they want to use student-level data with individual identifiers. How in the world did the Times get their hands on this data without such an agreement?

Richard Buddin, a respected economist at the Rand Corporation who, as an independent contractor, ran the numbers for the L.A. Times, said he had nothing to do with releasing the teachers' names. In two e-mails to me, he explained that the files he used for his analysis had "scrambled student and teacher identifiers" and that he made "no attempt to link the scrambled identifier with teacher names." "The Los Angeles Times did this after I completed my analysis," he wrote. 

I did not get a response to e-mail queries to two L.A. Times reporters who worked on this, Jason Felch and Jason Song. 

So how did the Times get the names of teachers from LAUSD? Simple: They asked for them.

Robert Alaniz, LAUSD's director of communications, told me the district's legal department concluded that under California's Public Records Act, the district had no choice but to release the names of the teachers, and to link their names to the test scores of their students. He said that if test scores had been used as part of a teacher's performance evaluation, the scores would have remained private. But because they are not part of evaluations, they are not regarded as confidential information.

"We vetted it with our legal staff, and determined that the request was valid, and that we did have to turn over the teachers' names," Alaniz said. "As adults, as employees, their names fall into the public domain."

He said the district has some safety concerns about the Times' plan to publish the names of 6,000 teachers and where they teach, because some may want to keep their location secret from former spouses or others against whom they may have restraining orders. The district also has concerns about an over-reliance on test scores to evaluate teachers. "It should be just one of many different factors," he said.

All this would be more straightforward if teachers were being identified by a clear-cut fact that is either true or false, such as whether they hold the proper teaching credentials or how much they are paid. But the value-added methodology used by the Times to identify ineffective teachers is mostly untested and filled with potential pitfalls.

A report issued last month by the U.S. Department of Education's Institute of Education Sciences concluded that "policymakers must carefully consider likely system errors when using value-added estimates to make high stakes decisions regarding educators."

And last fall, the National Research Council took a close look at the administration's promotion of the value-added methodology as a criterion for states to qualify for its $4.3 billion "Race to the Top" program.

The headline announcing its report, referred to briefly in the Times article, declared, "Value-added methods to assess teachers not ready for use in high-stakes decisions."

The distinguished panel that drew up the report, which included two UC Berkeley professors, Michael Hout and Mark Wilson, warned the administration that "although the idea has intuitive appeal, a great deal is unknown about the potential and the limitations of alternative statistical models for evaluating teachers' value-added contributions to student learning."

One of the concerns raised by the panel was the complexity of the statistical methods involved, which would make "transparency" difficult and critique all but impossible for anyone but the most sophisticated statistician.

That seems to apply to the dense report written by Buddin accompanying the Times article, in which he explains his methodology.

Take this paragraph, picked more or less at random:
Data sets on teacher inputs are incomplete, and observed teacher inputs may be chosen endogenously with respect to the unobserved teacher inputs (unobserved teacher heterogeneity). For example, teacher effort may be difficult to measure, and effort might be related to measured teacher qualifications, i.e., teachers with higher licensure test scores may regress to the mean with lower effort.
Or this paragraph:
Teacher heterogeneity (φ_j) is probably correlated with observable student and teacher characteristics (e.g., non-random assignment of students to teachers). Therefore, random-effect methods are inconsistent, and the fixed teacher effects are estimated in the model. The fixed teacher effects are defined as ψ_j = φ_j + q_j ρ.
It will require a lot more than fifth grade arithmetic to penetrate that algebraic thicket – one that seems far removed from the impact of the analysis on the lives of teachers and the children they teach.
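
For the curious, here is a rough decoding of that thicket, assuming Buddin is estimating the standard value-added regression. This is a hedged sketch of the usual setup, not a line from his report; the symbols follow his excerpt, and the rest is reconstruction:

% A hedged sketch of the standard value-added model the excerpt implies.
% y_{ijt}: test score of student i taught by teacher j in year t
% X_{ijt}: observed student characteristics (e.g., prior-year scores)
% q_j:     observed teacher characteristics (e.g., licensure test scores)
% \varphi_j: unobserved teacher heterogeneity (e.g., hard-to-measure effort)
\[
  y_{ijt} = X_{ijt}\beta + q_j\rho + \varphi_j + \varepsilon_{ijt}
\]
% Because students are not randomly assigned to teachers, \varphi_j is
% correlated with the other regressors, so a random-effects estimator is
% inconsistent. The model instead estimates one dummy variable per teacher,
% and, since q_j is constant within a teacher, that estimated fixed effect
% bundles together the observed and unobserved teacher pieces:
\[
  \psi_j = \varphi_j + q_j\rho
\]

In plainer English: each teacher's published "effectiveness" number is an estimated dummy-variable coefficient that mixes whatever the teacher actually contributes with whatever his or her measured qualifications predict, plus whatever noise is in the student-level data. That is exactly the sort of thing the panel had in mind when it warned that critique is out of reach for all but the most sophisticated statisticians.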
