Monday, December 10, 2012

Evidence-Driven versus Evidence-Influenced Education Policy

Conservative "reformer" Rick Hess demonstrated the best short-term tactic for dealing with amateurs who feel free to editorialize about public schools without having a clue about what they are pontificating about.  Yes, the New York Times' Tom Friedman's "man crush" on Arne Duncan is illustrative of the way op-ed writers feel free to pass on educational policy analyses that are the equivalent of cocktail party talk. On the other hand, we must do more than ridicule journalists who are not on the education beat.  To help them understand school improvement issues, we must devise a fair definition of the two schools of thought in our educational debate.

The contemporary data-driven school "reform" movement emerged during the Lee Atwater/Dick Morris era, when scorched-earth politics was taken to a new level.  Accountability-driven "reformers" adopted the state-of-the-art public relations tactic of demonizing their opponents, and educators were chosen as their prime target.  For instance, liberal "reformers" with the Education Equality Project (EEP) claimed "... the sad reality is that these (school) systems are not broken. Rather, they are doing what we have designed them to do over time." Similarly, the Education Trust’s Russlynn Ali denied that students’ poverty, health challenges, or mobility are our biggest challenges. She claimed, “Instead, the longest odds are those created by our education culture, which denies that these children can succeed.” In other words, we educators who had committed to the inner city were perpetuating inequality.  We were thus defined as part of the "status quo" that needed to be destroyed so that the 21st century's civil rights revolution could proceed.

The chattering classes do not understand that education's civil war is primarily due to the difference between data-DRIVEN and data-INFORMED accountability.  "Reformers" indicted schools and educators as incompetent (or worse) based on primitive bubble-in metrics.  When educators tried to put those metrics into perspective, our data-informed approach was condemned as "low expectations" and "excuses" for not committing to "whatever it takes" to overcome the legacies of generational poverty.  Even the Obama administration got into the teacher-bashing, coercing states into using not-ready-for-prime-time statistical models to indict teachers and schools as ineffective. And the Tom Friedmans of the opinion world lapped up the blame game, accepting the accountability hawks’ claim that teachers and unions who resist test-driven evaluations are "anti-reform."

Somebody who is wittier than I must craft a better set of political labels but, in the meantime, I would like to propose a way of characterizing the two schools of educational thought.  We educators have always sought evidence-driven ways of improving schools.  We were easy to lampoon because we wanted social science-driven policy. On the other hand, we knew our place as relatively powerless members of the team effort to fund and build better schools.  Consequently, we settled for the best compromises that were available.

In the 1990s, however, a new generation of "reformers" set out to replace the old New Deal/Fair Deal/Great Society coalitions and the unlovely compromises they had created.  Whether they sought to end public schools or welfare "as we know it," their prime goal was sounding tough-minded.  They borrowed the heroic pose of venture capitalists and devised a market-driven school improvement model that I would characterize as evidence-influenced. In contrast to previous opponents of the public schools who feared the teaching of evolution, they were not overtly anti-intellectual.  They did not necessarily have contempt for facts but, still, when evidence contradicted their theories, contemporary "reformers" stuck with their "Damn the torpedoes. Full speed ahead" quest for "transformative" change.

Moreover, the Gates Foundation and other members of "the billionaires' boys club" funded a slew of think tanks issuing "papers" that resembled scholarly research.  They spawned an alphabet soup of policy shops (TNTP, CRPE, etc.) that all followed the same format for spinning memorable sound bites (“Widget Effect,” “LIFO,” “the Irreplaceables,” etc.). They issued multi-colored, chart-filled "reports" with amazing production values, full of photogenic young educators and students.  To education experts, the Wow! Zap! Pow! nature of these papers might make them seem like parodies from The Onion. To reporters off the education beat, however, they looked like research, not social science-informed opinion pieces, and their emotive anecdotes were read as if they were real case studies.

My favorite evidence-influenced manifesto was published in 2009 by McKinsey & Company, the consultants who sold Enron as an exemplar of the emerging data-driven economy of the 1990s.  Its “Economic Impact of the Achievement Gap” claimed that American GDP would grow by 2 to 4% if teachers closed the achievement gap. To show that classroom instruction could overcome poverty, its charts noted that Oklahoma sits next to Arkansas and yet the two states have different achievement gaps.  Ergo, school effectiveness must explain the difference!?!?  Likewise, according to McKinsey, since Connecticut and New Hampshire are geographically close to each other, their different outcomes showed that schools alone could defeat poverty!?!?
McKinsey’s claim that schools could close the achievement gap rested on the ultimate evidence-influenced statement: “Latino students in Ohio score the same as their white peers in 8 states and better than their white peers in 13 states.”  Of course, Ohio was hardly a magnet for low-skilled immigrants, and Latinos there earned more than whites did in states (like my Oklahoma) where school spending and student outcomes were lower for everyone. But that is not the point.  The point is that such an argument sounds like something a scholar might make. And it was data-driven!  No human judgment was required in making the determination!
Rereading the McKinsey report, I am now struck by the blue exploding star that preceded the Ohio numbers. That graphic introduced the proclamation that New York City had closed the black-white achievement gap, demonstrating that Joel Klein’s “reforms” had worked.
Today’s reporters do not need to read the ancient history of how the NYC miracle disappeared in order to experience this spin.  Last month, the Hoover Institution issued a CREDO study of New Jersey charter schools, and it also embodies the essence of social science-informed publishing.  Its authors (and funders) are not named in the report, but presumably they are smart people who were influenced by their academic experiences.  Even so, journalists can get all the guidance they need to critique the charter school study from posts by Diane Ravitch, Julia Sass Rubin, Darcie Cimarusti, and Bruce Baker.   They explain why the spin presented in CREDO’s press release is contradicted by the paper’s own evidence. Baker then concludes, “How can an institution that claims to be academically objective put out a press release that is so misleading about the study’s findings?”
With the help of education experts, as “reformers” continue to publish social science-informed papers, reporters can learn for themselves how to distinguish between the evidence-driven and the evidence-influenced approaches to school improvement. When educators do not let numbers speak for themselves but instead interpret data in a manner informed by scholarly conventions, we are engaging in the evidence-driven quest for understanding that should also inform journalism. When a brief follows academic tradition, fairly summarizes the opposing positions, and does not hide evidence that contradicts its thesis, it is likely to be evidence-driven. When a paper’s format merely resembles old-fashioned scholarly articles, however, and it lacks a falsifiable hypothesis or even an introduction with an objective statement of its goals and findings, it is likely to be evidence-influenced, a social science-informed opinion piece rather than scholarship.

2 comments:

  1. Oops! Please forgive a typo. It was Julia Sass Rubin who introduced her article with the question, “How can an institution that claims to be academically objective put out a press release that is so misleading about the study’s findings?”

    I was torn between concluding with her excellent point or Bruce Baker's conclusion, "While it is likely that there exists some strategies employed by some charters (as well as some strategies employed by some district schools) that are working quite well – THE CREDO REPORT PROVIDES ABSOLUTELY NO INSIGHTS IN THIS REGARD."

    Had I just quoted both, my bifocals wouldn't have failed me. Sorry.

  2. I am very interested in the "Economic Impact of the Achievement Gap" published by McKinsey. I have read other articles that show similar results. And in terms of cost-benefit analysis, these articles actually support that investing in education is worth it!

    Also, I don't believe that schools alone can close the achievement gap. A lot of previous studies indicate that other factors, such as SES, poverty, and health, matter! Actually, I remember that one article had a very interesting finding: after mixing students from middle-class families with students from poor families, the academic achievement gap decreased a lot.
