The answer that Tucker puts forward leans heavily on a
"2010 study by the Stanford Center for Opportunity Policy in Education, or SCOPE, which is led by the respected professor Linda Darling-Hammond." At first I took this to mean that Darling-Hammond participated in the study, but when I followed the link to SCOPE, I found that Darling-Hammond, who is a co-director of the Center, was not an author of the study. The study, in fact, is the product of a consulting outfit called the Assessment Solutions Group, and its authors are Barry Topol, John Olson, and Ed Roeber, whose bios from their consulting firm are excerpted below (my bolds). How SCOPE ended up publishing this, we will probably never know.
[Lead author] Barry Topol is a strategic and operationally oriented Financial Executive with over 25 years experience. He has worked in industry as a Chief Financial Officer for several major corporations and small companies and as an independent consultant. From 2005-2008 he was CFO for a major educational publishing company. His expertise includes activity based costing, turnaround situations, detailed financial and operational analysis, strategic planning and leadership and team-building. He has developed numerous cost models for educational assessments and has experience negotiating new pricing frameworks for assessment services.
Mr. Topol has a broad business background with C-Level experience in educational publishing, high technology, telecommunications, financial services, internet and other industries. Working for venture capital, private equity and publicly owned corporations, he combines entrepreneurial thinking with a big business background. He has led successful turnarounds for three different businesses, growing revenues, reducing costs, conserving cash and improving profits. In one case the company, after being turned around, was sold for close to $1 Billion. He has also participated in over 20 mergers and acquisitions, leading the deal team for roughly half of those transactions.
. . . .
Topol’s experience includes such companies as Reed Elsevier, Pearson, CTB/McGraw-Hill, Pacific Gas and Electric Company, AT&T/Pacific Bell and TRW Inc. He has also worked with several small companies, start-up organizations and venture capital firms.
Mr. Topol has an MBA from the Anderson School of Management at UCLA where he was an Edward W. Carter Fellow and the Charles Offer Foundation Fellow. He also has a BA in Economics from UCLA where he graduated Summa Cum Laude and was elected to Phi Beta Kappa. He received his CPA license in 1989.
[Second author] Dr. Olson is a co-founder of ASG. He has more than 25 years of experience providing technical assistance and support to state departments of education, the U.S. Department of Education, testing companies, researchers, and others. Dr. Olson serves as a technical advisor and independent consultant on a variety of measurement and statistical issues for international, national, state, and local assessment programs, and has assisted many states with their RFPs and reviews of proposals. His expertise in the areas of large-scale assessment, psychometrics and other technical issues, test design and development, program management and operations, and NCLB is of much value to states and other customers. . . . [I am not making this up: see the website]
[Third author] Edward Roeber is adjunct Professor of Education, Measurement and Quantitative Methods in the College of Education at Michigan State University, East Lansing, MI. In this capacity, he teaches courses on educational measurement, works on projects to improve the assessment skills of prospective and current educators, and provides additional support for faculty and students on assessment.

Previously, he was Senior Executive Director, Office of Educational Assessment & Accountability in the Michigan Department of Education from 2003 to 2007. He oversaw the assessments of general education students (in mathematics, science, language arts and social studies), students with disabilities and English language learners, as well as the accreditation and accountability programs. . .

So those are the authors of the study. Not Darling-Hammond, not Stanford, not peer-reviewed. But to be believed, nonetheless, if we are to believe Bill Tucker. Why? Because Tucker assures us that "a random sample of states" by an unnamed party says so:
A 2010 study by the Stanford Center for Opportunity Policy in Education, or SCOPE, which is led by the respected professor Linda Darling-Hammond, noted that in per-pupil terms, testing costs “substantially less than that of a new textbook, a typical student’s school supplies for the year, or almost any educational intervention.” A random sample of states, small and large, confirms SCOPE’s findings.

And who are we to doubt Tucker's assurance, especially since Tucker's expertise comes from ". . . his policy work for Education Sector, [where] he focuses on technology and innovation—specifically virtual schooling, assessments, and data systems." Indeed.
Tucker's main point in his op-ed is that testing, though far from perfect, represents a tiny fraction of what we spend on public education every year, and all the belly-aching about the costs of testing is getting in the way of states spending the extra money required to come up with better tests and more of them:
For example, California, the country’s largest and most financially distressed state, spends less than $14 out of its $8,955 per-pupil total educational outlay on statewide standardized testing (see image). These costs, which include testing contracts and administration for not only federally mandated tests in reading and math but also high school exit exams and state tests in science and history, are dwarfed by spending on such items as workers’-compensation insurance, housekeeping services, and travel.
Even if the California figures underestimate various expenditures related to testing, such as preparation materials or personnel, testing is still a drop in the budget bucket. For the sake of argument, let’s double the amount spent on testing to $28 per student, or about .03 percent of the budget. Under this scenario, the state’s schools still spend 265 times as much on salaries and benefits.

Umm, if only we had more testing and fewer workers.
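A quick arithmetic check of the figures quoted above (the $14, $28, and $8,955 values come straight from the op-ed; the percentages are simply computed from them) suggests Tucker's own ".03 percent" is off by a factor of ten:

```python
# Back-of-the-envelope check of the per-pupil figures quoted from Tucker's op-ed.
testing = 28.0    # doubled per-pupil testing cost, "for the sake of argument" ($)
outlay = 8955.0   # California's per-pupil total educational outlay ($)

share = testing / outlay
print(f"{share:.2%}")  # -> 0.31% -- i.e., about 0.3 percent, not ".03 percent"

# Sanity check on the "265 times as much on salaries and benefits" claim:
print(265 * testing)   # -> 7420.0, about $7,420 of the $8,955 outlay; plausible
```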
Unfortunately for Mr. Tucker and the other salesmen at Ed Sector, the Center for Public Education looked at the Topol, Olson, and Roeber study back in June 2010, a fact that, it would seem, Mr. Tucker has somehow avoided:
. . . . Using a sophisticated cost model, Topol and colleagues estimate that traditional multiple-choice driven assessments cost approximately $20/student to administer, compared to about $56 for HQAs. Through cost savings strategies related to economies of scope, technology and teacher involvement, the authors suggest that HQA costs could be reduced to as little as $10 per student.

Sounds terrific, right? All these savings by scaling up and by more technology, but the EDifier goes on:
Here's how they break it down:
$55.67 (per pupil) average starting cost of HQA (scored by vendor)
-$24.26 savings from having teachers score non-multiple choice items as part of professional development
-$16.84 savings from economies of scale over a 30-state assessment consortium
-$3.49 savings from having students take tests online rather than with pencil-and-paper
-$0.93 savings from using technology instead of humans to grade short written responses
-$0.71 savings from having scorers grade questions on their work-based or personal computers, instead of at overhead-intense grading centers
$9.44 TOTAL COST
Reducing the per-pupil costs of high quality assessments to half that of traditional assessment is obviously a bold, dramatic assertion. There are a number of assumptions Topol and colleagues make that can be questioned:

Why doesn't Mr. Tucker address any of these questions raised almost a year and a half ago? Can't say.
That a 30-state assessment consortium is feasible. The current common standards consortium would suggest it is, but it has yet to prove its worth in implementation, and abstract goal-setting is a lot easier to commit to than assessment implementation.
That the true long-term cost of implementing high quality assessment doesn't actually include all alignment costs associated with a comprehensive cycle of meaningful assessment, training costs associated with integrating test results into instruction, altering teacher evaluation systems to account for new instructional values, modifying systemwide information systems, etc. That's the thing about HQA [high quality assessments]: if you're doing it right, you're not just changing the test, you're changing a district's entire culture. This cultural change would most likely need to include subjects other than math and English, the only subjects included in the report's projections.
That the costs of purchasing, installing and administering computers in schools wouldn't nullify the savings, even over an extended period of time. The authors note that PC purchase costs were not considered in their figures and that the current student-to-PC ratio is estimated at around 4 or 5 to 1.
That teachers wouldn't put up an intractable fight to avoid assuming what they might perceive to be unfair required duties. As Darling-Hammond and many international studies have noted, scoring of constructed-response questions is a rich professional development activity, but asking teachers to do it on their own time, outside of more easily fulfilled workshop requirements, is in a different class of time-demands.
That such heavy reliance on technology wouldn't be a mistake. Remember a couple of election cycles ago when there was a big movement to replace manual voting systems with electronic touch screens? It didn't take, because electronic systems failed on a grand scale in high-stakes situations. How long would it take to get even more complex assessment applications working on a large scale in education?
What we do know is that even using Tucker's number of $28 per student for tests, the cost of testing would amount to almost a billion and a half dollars a year, which is almost double the amount the federal government will spend in 2012 on all R&D in the ED categories just below (and much of this money is spent to serve the needs of the corporate testing companies):
- Research, Development, and Dissemination
- Regional Educational Laboratories
- Research in Special Education
- Statewide Data Systems
- Special Education Studies and Evaluations
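The "almost a billion and a half dollars a year" figure is easy to reproduce. A rough sketch, assuming roughly 49.5 million U.S. public school students (an outside estimate; the post does not state the enrollment number it used):

```python
# Rough check of the national testing-cost figure: Tucker's doubled
# per-pupil testing cost times an assumed public school enrollment.
per_student = 28.0          # Tucker's doubled per-pupil testing cost ($)
students = 49_500_000       # assumed U.S. public school enrollment (estimate)

print(per_student * students)  # -> 1386000000.0, i.e., about $1.4 billion a year
```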
The real costs of the last quarter century of this tabulation orgy that continues unabated can only be measured in the loss of tens of thousands of America's best teachers who have quit before seeing themselves degraded by this era of testing mania, or it may be measured in the dumbing down of curricula that have left poor children cognitively decapitated and unready for coursework that requires thinking, or it may be gauged in the loss of care as an ethic that schools were grounded in not so long ago, or measured in the millions of children who have been labeled as failures, the uncounted heartbreaks and haunted nights before tests, the shame and the suicides, the loss of public ownership of the schools, the denigration of the educator's profession, the disappearance of public money for the benefit of corporate bottom lines, the loss of stewardship of the most precious of public institutions--the public schools.
These are a few of the costs that Mr. Tucker dares not consider, for if he did, he would be faced with deciding if his job is more important than the truth, if his mission to just follow the orders of a think tank is enough to protect him from moral ruin, if in the end, what he is doing is at all worth it. If the currency of this 21st Century is to be integrity and trust, I suggest, Mr. Tucker, that you declare bankruptcy and start over.