"A child's learning is the function more of the characteristics of his classmates than those of the teacher." James Coleman, 1972

Wednesday, August 13, 2008

New York Times: The Paper of Record Ignorance on Education

If the New York Times had remained as steadfastly stupid on issues such as the economy, foreign policy, or anything else besides its favorite whipping boy, public education, it would have long since lost its standing as the newspaper of record. Yesterday's BS and BS-inspired (that's Brent Staples's bullshit) opining on the virtues of using NAEP as a national metric for educational achievement continues an unbroken string of self-initiated horse's-ass making on its own editorial pages. Here is the relevant, or irrelevant, clip from yesterday's pile:

Congress has several concerns as it moves toward reauthorizing the No Child Left Behind Act of 2001. Whatever else they do, lawmakers need to strengthen the requirement that states document student performance in yearly tests in exchange for federal aid.

The states have made a mockery of that provision, using weak tests, setting passing scores low or rewriting tests from year to year, making it impossible to compare progress — or its absence — over time.

The country will have difficulty moving ahead educationally until that changes.

Most states that report strong performances on their own tests do poorly on the more rigorous and respected National Assessment of Educational Progress, which is often referred to as NAEP and is also known as the nation’s report card. That test is periodically given to a sample of students in designated grades in both public and private schools. States are resisting the idea of replacing their own tests with the NAEP, arguing that the national test is not aligned to state standards. But the problem is that state standards are generally weak, especially in math and science. . . .

Entirely sidestepped here is the problem of NCLB's impossible 100% proficiency target, which is the real motivation behind the states' testing adjustments toward continued survival. With one of the best public education systems in the country, Minnesota, for all its supposed finagling, now has almost half of its schools on the federal failure list. And, of course, Brent and the Boys at the Times do not mention Susan Neuman's admitted privatization rationale for NCLB, which was the ideology driving the impossible NCLB targets from the beginning. So instead of the liberal New York Times being morally outraged at the millions of teachers and students who have been ground up in this ideological crucible over the past six years, they spend good ink calling for a counterproductive and pedagogically bankrupt plan for national testing based on NAEP.

In case there is anyone on the Editorial Board who reads anything other than the recommendations from Checker Finn, here, alas, is an explanation from Jerry Bracey that even Brent Staples should be able to understand. Maybe. From the June 2008 issue of The School Administrator:

. . . . Adopting NAEP achievement levels would be a multifaceted, unmitigated disaster, but to demonstrate this we need to back up and take a look at how the NAEP achievement levels of basic, proficient and advanced came into existence.

Until 1988, NAEP was purely descriptive. Starting in 1963, NAEP’s conceptual father, Francis Keppel, and technical father, Ralph Tyler, wanted to create something different from a norm-referenced test on which about 50 percent of students answer most items correctly. On purpose, NAEP created items that the test designers figured few students would answer correctly along with items the creators thought most would answer correctly, as well as the usual items that about half the people would get right. In the same way a medical survey might analyze the incidence of tuberculosis nationwide, NAEP would survey the incidence of knowledge in the country.

In 1988, though, Congress created the National Assessment Governing Board and charged it with establishing standards. NAEP now became prescriptive, reporting not only what people did know but also laying claim to what they should know. The attempt to establish achievement levels in terms of the proportion of students at the basic, proficient and advanced levels failed.

The governing board hired a team of three well-known evaluators and psychometricians to evaluate the process — Daniel Stufflebeam of Western Michigan University, Richard Jaeger of the University of North Carolina at Greensboro and Michael Scriven of Nova Southeastern University. The team delivered its final report on Aug. 23, 1991. This process does not work, the team averred, saying: “[T]he technical difficulties are extremely serious … these standards and the results obtained from them should under no circumstances be used as a baseline or benchmark … the procedures used in the exercise should under no circumstances be used as a model.”

NAGB, led by Chester E. Finn Jr., summarily fired the team, or at least tried to. Because the researchers already had delivered the final report, the contract required payment.


Flawed Uses
The inappropriate use of these levels continues today. The achievement levels have been rejected by the Government Accountability Office, the National Academy of Sciences, the National Academy of Education, the Center for Research on Evaluation, Standards and Student Testing, and the Brookings Institution, as well as by individual psychometricians.

I have repeatedly observed that the NAEP results do not mesh with those from international comparisons. In the 1995 Trends in International Mathematics and Science Study, or TIMSS, assessment, American 4th graders finished third among 26 participating nations in science, but the NAEP science results from the same year stated that only 31 percent of them were proficient or better.

The National Academy of Sciences put it this way: “NAEP’s current achievement-setting procedures remain fundamentally flawed. The judgment tasks are difficult and confusing; raters’ judgments of different item types are internally inconsistent; appropriate validity evidence for the cut scores is lacking; and the process has produced unreasonable results.”

The academy recommended use of the levels on a “developmental” basis (whatever that means) until something better could be developed. In 1996, the National Academy of Education recommended the current achievement levels “be abandoned by the end of the century and replaced by new standards … .”

Continuing Mischief
Here we are almost a decade into a new century and the old standards remain, causing a great deal of mischief every time a new NAEP assessment is released to the news media. No one is working to create new standards. Why? The use of the NAEP standards fits into the current zeitgeist of school reform as all stick and no carrot.

When the U.S. Chamber of Commerce and the Center for American Progress rolled out their jointly developed “Leaders and Laggards” in February 2007, the report lamented: “[T]he measures of our educational shortcomings are stark indeed; most 4th and 8th graders are not proficient in either reading or mathematics … .”

At the press conference announcing the report, an incensed John Podesta, president and CEO of the Center for American Progress, declared: “It is unconscionable to me that there is not a single state in the country where a majority of 4th and 8th graders are proficient in math and reading.” He based his claim on the 2005 NAEP assessments.

Podesta could have saved himself some embarrassment had he read the recent study by Gary Phillips, formerly the acting commissioner of statistics at the National Center for Education Statistics. Phillips, now at the American Institutes for Research, had asked: “If students in other nations sat for NAEP assessments in reading, mathematics and science, how many of them would be proficient?”

Because we have scores for American students on NAEP and TIMSS and scores for students in other countries on TIMSS, it is possible to estimate the performance of other nations if their students took NAEP assessments.

How many of the 45 countries in TIMSS have a majority of their students proficient in reading? Zero, said Phillips. Sweden, the highest scoring nation, would show about one-third of its students proficient while the United States had 31 percent. In science, only two nations would have a majority of their students labeled proficient or better while six countries would cross that threshold in mathematics.

NAEP reports issued prior to the current Bush administration noted that the commissioner of education statistics had declared the NAEP achievement levels usable only in a “developmental” way. That is, only until someone developed something better. But no one was or is working to develop anything better. When I wrote an op-ed piece for The Washington Post (“A Test Everyone Will Fail,” May 20, 2007), an indictment of the achievement levels, I got feedback that officials at the National Assessment Governing Board were quite satisfied with the levels as they are. That can only mean NAGB approves of the achievement levels used as sledgehammers to bludgeon public schools. They serve no other function.
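Bracey doesn't spell out the statistical machinery behind Phillips's TIMSS-to-NAEP comparison, but the logic he describes is a standard linking exercise: American students took both assessments, so their scores can anchor a mapping from the TIMSS scale onto the NAEP scale, and each country's score distribution can then be read against the NAEP "proficient" cut score. A minimal sketch of that logic, under simplifying assumptions (a straight mean/sigma linking and roughly normal score distributions), appears below in Python; the function names and every number in it are illustrative placeholders, not Phillips's actual method or figures.

    from statistics import NormalDist

    def link_timss_to_naep(timss_mean, timss_sd, us_timss=(518, 75), us_naep=(238, 29)):
        """Mean/sigma linking: map a country's TIMSS mean and SD onto the NAEP scale,
        using the U.S. cohort (which took both tests) as the linking population.
        All default numbers here are placeholders, not real assessment results."""
        (us_t_mean, us_t_sd), (us_n_mean, us_n_sd) = us_timss, us_naep
        slope = us_n_sd / us_t_sd
        naep_mean = us_n_mean + slope * (timss_mean - us_t_mean)
        naep_sd = slope * timss_sd
        return naep_mean, naep_sd

    def share_at_or_above(naep_mean, naep_sd, proficient_cut=249):
        """Share of a roughly normal score distribution at or above the NAEP
        'proficient' cut score. The cut score here is illustrative only."""
        return 1 - NormalDist(naep_mean, naep_sd).cdf(proficient_cut)

    # Hypothetical country: TIMSS mean 540, SD 70 -> projected share 'proficient'
    mean, sd = link_timss_to_naep(540, 70)
    print(f"projected NAEP mean {mean:.0f}, share proficient {share_at_or_above(mean, sd):.0%}")

The point the sketch makes is the same one Phillips made: once everyone's scores are put on the NAEP scale, even high-scoring nations clear the "proficient" bar with only a minority of their students, which is what you would expect from a cut score set unreasonably high rather than from uniquely failing schools.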


2 comments:

  1. Anonymous, 9:31 AM

    Jim,

    The incompetency of the NY Times is easily trumped by the Washington Post. My Lord, I think we need to stage some kind of intellectual bake-off to see which major media outlet is more ignorant of education because I think there are serious, consistent contenders in all the big cities.

    Schools can never quite get "tough" enough, the expectations for all children can never ever get high enough as long as we can add another testable bon-bon, and parents won't be accountable till they're all in jail.

    The race to educational absurdity is a joy to behold.

  2. I just have one question for the NY Times: what does Judy Miller think of public education? Did Saddam have anything to do with our failure in education? All the news that's fit to print.

    Follow the money. How does the Times benefit from the testing industry? With the Post it's at least obvious: they are the mouthpiece for Kaplan testing, and you can't be good at everything. There's a large pool of public dollars out there and everyone wants to take some. That's why it's funny when these papers keep laying off their employees because nobody reads their paper, and then they yell at the reader for not reading their lousy work instead of just concluding it must be because they put out a sucky product. - gee
