Sunday, October 13, 2013

Value-Added Testing and the Masking of Inequality


This article was previously posted at The Answer Sheet on October 4.
This post is composed largely of excerpts from The Mismeasure of Education, rearranged to answer one question: “Will value-added testing do as little for the nation as it has for Tennessee?”


By Jim Horn and Denise Wilburn
As White House staff, congressional aides, and a small contingent of think tank insiders huddled in Washington during the summer of 2001 to compile the final version of the No Child Left Behind Act (NCLBA), state policymakers worried about the new accountability demands the legislation was to include.  In July, Missouri’s assistant commissioner of education, Stephen Barr, had described the 12-year window for achieving 100 percent student proficiency in reading and math as “an impossible dream,” and in the same New York Times article, Pennsylvania’s secretary of education, Charles Zogby, said, “It’s unrealistic to think that in some places where 90 percent of the children are below basic that we’re going to turn this around in 10 years. And then everybody is going to throw up their hands and say none of this is possible” (para 29).
By 2005 it was clear that Barr, Zogby, and others had been right: alarming numbers of school systems were failing to make Adequate Yearly Progress toward the federal benchmark, and then-U.S. Secretary of Education Margaret Spellings was looking for an alternative system for demonstrating accountability, one that could acknowledge progress toward proficiency without necessarily reaching it. The result was the Growth Model Pilot Project (GMPP, 2005-2008), which included nine states; two of those states, Tennessee and North Carolina, were already using a value-added state assessment model developed by William Sanders.
Value-added assessment uses sophisticated statistical manipulations of achievement test scores that allow states to get credit for children who make their expected growth, based on past academic performance, even if they do not achieve grade-level proficiency.  For example, if a fourth-grade child operating at a second-grade reading level made nine months of academic growth in reading but did not reach the fourth-grade reading achievement benchmark, value-added assessment would still credit the teacher, school, and district with making adequate progress with this child.  In January 2011, the U.S. Department of Education published the final GMPP report.  For Tennessee, very few additional schools made Adequate Yearly Progress (19 in 2006-2007 and only 22 in 2007-2008) by using the value-added model Sanders developed in Tennessee.
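To make the crediting logic concrete, here is a minimal sketch in Python of the basic idea described above. It is not the Sanders/TVAAS statistical model, which relies on far more elaborate mixed-model projections; all names and numbers below are illustrative.

```python
# A minimal, illustrative sketch (not the Sanders/TVAAS model): a student is
# credited with adequate growth when the observed gain meets the gain projected
# from prior performance, even if the grade-level benchmark is not reached.

def meets_growth_target(prior_score, current_score, expected_gain):
    """True if the student's observed gain meets or exceeds the projected gain."""
    return (current_score - prior_score) >= expected_gain

def meets_proficiency(current_score, proficiency_cutoff):
    """True if the student reaches the grade-level benchmark."""
    return current_score >= proficiency_cutoff

# Hypothetical fourth grader reading at a second-grade level.
# Scores are grade-equivalents expressed in months (2.0 grade levels = 20 months).
prior, current = 20, 29   # nine months of growth during the year
expected_gain = 9         # growth projected from past performance
cutoff = 40               # fourth-grade proficiency benchmark

print(meets_growth_target(prior, current, expected_gain))  # True  -> credited with adequate progress
print(meets_proficiency(current, cutoff))                  # False -> still below grade level
```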
The Tennessee value-added assessment model basically identified the schools that were already meeting required annual proficiency targets, but it failed to distinguish between schools with rising proficiency scores and those with declining ones.
In short, the Sanders Model did little to address the essential unfairness perpetuated by NCLB proficiency requirements, which insisted that students furthest behind, with fewer resources than their peers in economically privileged schools, had to work harder to reach the same proficiency point.  More importantly, there was no evidence that the Sanders version of value-added testing did anything to help, or even to predict, future outcomes for those furthest behind.
The initial fairness advantage Sanders touted for measuring growth in test scores has not worked to the advantage of either students or teachers in the increasingly high-stakes environment of school assessments.  And although the National Research Council and the National Academies have flagged value-added assessment as too unstable for high-stakes decisions in education, such warnings have not slowed aggressive federal efforts to incentivize testing systems and state teacher evaluation systems built on value-added assessment models.
With hundreds of millions of dollars at stake in Race to the Top (RTTT) grant money, states like Tennessee rushed to implement a federally recommended system whereby value-added growth scores would come to dominate teacher evaluation for educators who teach tested subjects.  And contrary to the most basic notions of accountability and fairness, the two-thirds of Tennessee teachers who teach non-tested subjects are being evaluated on school-wide scores rather than on their own.
More recently, Tennessee has revamped teacher licensure so that the renewal of professional licenses will depend on value-added student test scores despite protests by teachers and skepticism among testing experts.  More disturbing, still, are efforts to establish a baseline for determining growth scores in early grades not included in the Tennessee Value-Added Assessment System (TVAAS), with the State now paying for the standardized SAT-10 to be administered in 85 percent of K-2 classrooms in Tennessee, whether or not children can even read the test questions.
We may conclude, then, that the demand today for a year’s worth of growth in test scores, with sanctions attached for not making such gains, holds little advantage for either students or teachers over the kinds of arbitrary testing proficiency targets used just a few years back when NCLB began promising no children left behind.
While value-added assessments have done little to improve student learning and much to generate divisiveness and charges of unfairness from educators, they have been used most effectively to accentuate the benefits of corporate education reform initiatives in ways that could not be accomplished otherwise.
For instance, the new CREDO national charter school study depends upon comparisons of test score growth between charter school students and demographically matched students in public schools.  Such reporting allows comparisons to be made without acknowledging the low proficiency rates at charter schools touted for their high value-added or growth scores, even as those high-growth charter schools regularly lag behind the state average proficiency rate of traditional public schools.
By comparing growth scores that are mysteriously converted into “days of learning” in the CREDO report, charters come to be seen as equal to, or a little better than, the average public school, rather than as being on par with the poorest public schools (for which charters were originally advertised as superior replacements).  As the focus is shifted to growth scores, too, the deepening gaps in overall proficiency between the poor and the economically advantaged are given short shrift.
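To see what such a conversion looks like mechanically, and why the rescaled number says nothing about the absolute proficiency level students reach, here is a rough sketch. The conversion factor is an assumption for illustration only, approximating the kind of rescaling CREDO describes (on the order of 0.25 standard deviations equated with a 180-day school year); it is not CREDO's exact published procedure.

```python
# Illustrative only: rescaling a growth effect size (standard deviation units)
# into "days of learning." The factor is assumed, not CREDO's exact value.

DAYS_PER_SD = 720.0   # assumed: 1.0 s.d. of growth ~ 720 "days", i.e. 0.25 s.d. ~ one 180-day year

def effect_size_to_days(effect_size_sd):
    """Rescale a growth effect size (in s.d. units) into 'days of learning'."""
    return effect_size_sd * DAYS_PER_SD

print(effect_size_to_days(0.01))   # ~7 extra "days of learning"
print(effect_size_to_days(0.25))   # ~180 days, i.e. one school year of growth
```

Whatever the exact factor, the rescaling converts a relative comparison of growth into a seemingly intuitive unit; it does not report how many students in either sector actually reached proficiency.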
The fact that charter schools have hardly altered those scandalously low proficiency rates of just a few years ago becomes lost in the new enthusiasms for statistical sleight-of-hand and growth scores.  What was once educationally unacceptable in terms of low proficiency levels before the rise of charter schools has become less and less important as charter schools have grown from just over 1,500 in 2000 to over 6,000 in 2013. One charter advocate within the corporate reform movement, Mike Petrilli, recently argued for focusing entirely on growth of test scores:
Proficiency rates are terrible measures of school effectiveness. As any graduate student will tell you, those rates mostly reflect a school’s demographics. What is more telling, in terms of the impact of a school on its students’ achievement and life chances, is how much growth the school helps its charges make over the course of a school year—what accountability guru Rich Wenning aptly calls students’ “velocity.”
We may expect, then, that focusing on “velocity” will help mask the continuing categorical inequality within a society rich with civil rights hyperbole and poor in results, and that the continuing corporate education agenda, as Larry Cuban suggests, will shift “public attention away from fiscal and tax policies and economic structures that not only deepen and sustain poverty in society but also reinforce privilege of the top two percent of wealthy Americans.”
The first principle of educational assessment and educational research should be to do no harm. Unfortunately, the 2013 CREDO National Charter School Study sets the stage for considerable harm to come to disadvantaged students by using value-added gains, or growth scores, of students that, in effect, mask 1) the unacceptably low testing proficiency levels of disadvantaged students enrolled in charters, and 2) the continuing testing performance gaps between black and Hispanic charter students and their white non-charter counterparts.
CREDO (2013) finds growth in reading and math scores for black students, black students in poverty, and Hispanic students in poverty (pp. 65-66, 69). But the latest results from the National Assessment of Educational Progress (NAEP), the assessment given the highest level of credibility among American testing experts, show that proficiency rates for underprivileged minority students have hardly changed. By focusing on test score growth among charter school students rather than proficiency rates, what was once figural for measuring the effectiveness of public schools has moved to the background when measuring the value that is added by charters.
We see the value-added masking effect, too, when we look at Tennessee proficiency levels alongside charter growth scores. Tennessee boasts of positive charter school impacts for African-American students in both math and reading, as calculated in the 2013 CREDO Report.  A look at the Tennessee State Report Card for 2012 paints a similarly positive picture, as indicated by high value-added gain scores for fifteen of Tennessee’s charter schools.  We see more As and Bs assigned to charters than Ds and Fs, even though some charter schools (see Table 1 below) registered no more than a third of their students scoring proficient or advanced on state tests.  The Tennessee Department of Education’s (TDE) Charter Schools Annual Report (2013) labels charters as having a positive, neutral, or negative effect size according to the change in standard deviation units from one year’s test scores to the next.
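Before turning to Table 1, here is a minimal sketch of that arithmetic: an effect size computed as the standardized change in mean scores from one year to the next. The numbers and scale are hypothetical, and the TDE report's exact procedure is not reproduced here.

```python
# Illustrative arithmetic only: a year-over-year change in mean scores,
# expressed in standard deviation units.

def year_over_year_effect_size(mean_prev, mean_curr, sd):
    """Standardized change in a school's mean score between two years."""
    return (mean_curr - mean_prev) / sd

# Hypothetical school means on a scale whose standard deviation is 25 points:
print(year_over_year_effect_size(mean_prev=740.0, mean_curr=745.0, sd=25.0))  # 0.2 s.d. -> labeled "positive"
```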
Table 1. Comparison of Value-Added Grades with Proficiency/Advanced (P/A) Rates for Tennessee Charter Schools

TN Charter Schools with a Positive and Statistically Significant Effect Size in 2012

School | 2011 Math % P/A | 2012 Math % P/A | 2012 Math Value-Added Grade | 2011 Reading % P/A | 2012 Reading % P/A | 2012 Reading Value-Added Grade
---|---|---|---|---|---|---
Promise Academy/Memphis | 35.2 | 54.7 | A | 24.0 | 38.5 | A
KIPP Memphis Collegiate Middle (Diamond) | 31.4 | 41.6 | A | 24.8 | 38.8 | A
Freedom Preparatory Academy/Memphis | 56.8 | 70.5 | A | 31.3 | 45.8 | A
Veritas College Preparatory Charter School/Memphis | 14.5 | 54.8 | NA | 30.6 | 35.6 | NA
Power Center Middle School Academy/Memphis | 49.3 | 56.6 | A | 51.0 | 63.6 | A
Nashville Prep | NA | 72.2 | NA | NA | 58.8 | NA
Star Academy/Memphis | 56.6 | 39.3 | B | 39.8 | 33.1 | D
STEM Prep Academy/Nashville | NA | 63.9 | NA | NA | 52.2 | NA
Soulsville Charter School/Memphis | 23.2 | 35.9 | A | 22.7 | 31.3 | F
Memphis Academy of Health Sciences High School | 43.8 | 48.4 | NDD | 44.4 | 58.4 | Above
Lead Academy/Nashville | 31.4 | 32.9 | A | 30.0 | 34.0 | B
KIPP Academy Nashville | 23.4 | 54.9 | A | 33.0 | 43.3 | A
Power Center Academy High School/Memphis | NA | 73.9 | NA | NA | NA | NA
Liberty Collegiate Academy/Nashville | NA | 62.5 | NA | NA | 51.1 | NA
Circles of Success Learning Academy/Memphis | 40.4 | 33.0 | D | 30.3 | 26.4 | C

TN Charter Schools with a Statistically Neutral Effect Size in 2012

School | 2011 Math % P/A | 2012 Math % P/A | 2012 Math Value-Added Grade | 2011 Reading % P/A | 2012 Reading % P/A | 2012 Reading Value-Added Grade
---|---|---|---|---|---|---
City University School of Liberal Arts/Memphis | 10.5 | 17.6 | Below | 53.7 | 44.6 | NDD
Memphis Academy of Health Sciences Middle School | 27.4 | 14.0 | B | 25.7 | 29.5 | D
KIPP Memphis Collegiate High School | NA | 32.2 | NDD | NA | NA | Above
New Vision Academy/Nashville | 13.8 | 23.5 | NA | 37.2 | 44.1 | NA
Southern Ave Charter-Acad Excellence Creative Arts/Memphis | 23.4 | 29.6 | B | 28.7 | 22.8 | F
Cameron College Prep/Nashville | NA | 22.5 | NA | NA | 28.1 | NA
City University Boys Preparatory/Memphis | <5 | 9.0 | NA | 22.9 | 25.0 | NA

TN Charter Schools with a Statistically Negative Effect Size in 2012

School | 2011 Math % P/A | 2012 Math % P/A | 2012 Math Value-Added Grade | 2011 Reading % P/A | 2012 Reading % P/A | 2012 Reading Value-Added Grade
---|---|---|---|---|---|---
Omni Prep Academy-North Pointe Middle School/Memphis | 10.7 | 8.6 | NA | 17.3 | 20.2 | NA
Memphis Business Academy Middle School/Memphis | 10.2 | 19.0 | F | 25.5 | 27.7 | C
Memphis Business Academy High School/Memphis | 15.1 | 30.8 | NDD | 25.6 | 47.5 | NDD
Memphis Academy of Science & Engineering/Memphis | 10.1 | 29.3 | C | 17.4 | 21.2 | D
Southern Ave Charter-MS of Acad Excellence & Tech/Memphis | 14.3 | 12.8 | NA | 23.5 | 20.6 | NA
Chattanooga Girls Leadership Academy | 7.8 | 11.7 | F | 13.3 | 19.2 | F
Memphis School of Excellence | <5 | 22.4 | NDD | 8.8 | 25.4 | NDD
Smithson-Craighead Middle School/Nashville | 5.3 | 7.6 | F | 15.6 | 17.6 | F
Ivy Academy/Chattanooga | 8.9 | <5 | Below | 44.1 | 36.1 | NDD
Smithson Craighead Academy/Nashville | 29.6 | 19.8 | F | 25.5 | 28.6 | F
New Consortium of Law & Business/Memphis | 5.9 | <5 | NA | 23.5 | 16.9 | NA
Drexel Prep/Nashville | NA | 8.8 | NA | NA | 24.6 | NA
New Consortium of Law & Business/Shelby County | NA | 7.7 | NA | NA | 15.4 | NA

Academic Performance for Black Students by District & State

District/State | 2011 Math % P/A | 2012 Math % P/A | 2012 Math Value-Added Grade | 2011 Reading % P/A | 2012 Reading % P/A | 2012 Reading Value-Added Grade
---|---|---|---|---|---|---
Hamilton County (Chattanooga), grades 3-8 | 24.9 | 29.6 | B | 24.1 | 24.3 | D
Hamilton County, grades 9-12 | 24.6 | 27.5 | NDD | 31.9 | 34.8 | NDD
Davidson County (Nashville), grades 3-8 | 22.9 | 29.0 | B | 29.4 | 31.1 | C
Davidson County, grades 9-12 | 33.8 | 36.0 | Above | 35.8 | 39.7 | Below
Memphis, grades 3-8 | 20.2 | 24.0 | B | 22.4 | 25.6 | D
Memphis, grades 9-12 | 23.9 | 31.5 | Above | 29.1 | 33.8 | Below
Shelby County, grades 3-8 | 29.7 | 38.0 | A | 37.2 | 41.9 | C
State, grades 3-8 | 23.6 | 28.9 | B | 28.2 | 31.0 | C
State, grades 9-12 | 31.0 | 38.0 | NA | 35.8 | 40.3 | NA
While Tennessee charter schools enroll almost entirely African American or Hispanic students, that enrollment represents only three percent of the state’s total black and Hispanic student population.  Within that three percent, only 1.3 percent of Tennessee’s black and Hispanic children are enrolled in charter schools with a positive effect size, diminishing claims that charter schools are having a significant impact on the State’s black students’ performance.
A closer look at Tennessee charter schools’ state proficiency-and-advanced percentages reveals that, while there are significant changes in growth scores for the 1.3 percent of black students enrolled in charter schools with a positive effect size, most of the state’s charter schools show fewer than 50 percent of their students reaching proficiency, and many are well below the state and district percentages for black students.
So what do these data mean?  Taken together, they mean it is misleading to say that the charter schools being pushed in Tennessee are having a positive effect on black and Latino students’ academic performance, given that the majority of students in the State’s charter schools are below proficiency, as measured by NAEP as well as by Tennessee’s own standards.
When we subtract the percentage of students who are proficient and advanced, according to the 2012 Tennessee Report Card, from 100, we get the actual percentage of students who are not proficient.  In math, the percentage of students not proficient in Tennessee charter schools ranges from 26.1 percent to more than 95 percent. In reading, the percentage not proficient ranges from 36.4 percent to 84.6 percent. It is important to remember, too, that these charter schools have smaller student populations, lower student-teacher ratios, fewer special education students and English Language Learners, and fewer rules and regulations to follow.  And most enjoy the added benefit of state funding on top of additional incentive funds from state and federal sources, as well as support from venture philanthropists and corporate foundations.  These advantages for charter schools are big disadvantages for public schools.
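As a quick worked example of that subtraction, using three schools from Table 1 (2012 percent proficient/advanced per the 2012 Tennessee Report Card):

```python
# 100 minus the percent proficient/advanced gives the percent not proficient.
pct_proficient_advanced_2012 = {
    # school: (math % P/A, reading % P/A); None where Table 1 shows NA
    "Power Center Academy High School/Memphis": (73.9, None),
    "Power Center Middle School Academy/Memphis": (56.6, 63.6),
    "New Consortium of Law & Business/Shelby County": (7.7, 15.4),
}

for school, (math_pa, read_pa) in pct_proficient_advanced_2012.items():
    line = f"{school}: {round(100 - math_pa, 1)}% not proficient in math"
    if read_pa is not None:
        line += f", {round(100 - read_pa, 1)}% not proficient in reading"
    print(line)

# 100 - 73.9 = 26.1 (the low end of the math range above);
# 100 - 63.6 = 36.4 and 100 - 15.4 = 84.6 (the ends of the reading range).
```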
The proliferation of charter schools further aggravates validity issues that have plagued value-added growth models for years around the nonrandom assignment of students to teachers.  Vanderbilt economist Dale Ballou (2005) found that the TVAAS provided “no explicit controls for any factors that might influence student progress” (p. 1), such as socioeconomic status, parent involvement, peer influence, or school resources.  Bias is introduced into the TVAAS because of the impossibility of “teachers having an equal chance of being assigned any of the students in the district of the appropriate grade and subject” and because “a teacher might be disadvantaged [on her teacher evaluations] by placement in a school serving a particular population” year after year (p. 5).
With charter school caps in Tennessee now removed, demographic changes in existing public schools are underway.  When students leave their home school to attend a charter school because the home school has been designated as failing, the students remaining are more likely to over-represent students with special needs, economic and language disadvantages, and low achievement scores.  Public schools’ resources for needed learning interventions are also reduced as tax dollars move with the students who enroll in charter schools.  Teachers remaining in the home school are disadvantaged by a bias imposed through aggregated student or teacher characteristics (those left behind), resulting in an invalid and unfair basis for high-stakes personnel decisions when teachers are compared within districts.  While TVAAS uses multiple years of data in an effort to counteract the effect of non-random student assignment, teachers’ value-added scores used for evaluation will still reflect the characteristics of their students when teachers are placed year after year in classrooms that aggregate, or cluster, students in poverty and minority students, or in classes dominated by middle-class high flyers.
Aside from growing validity and fairness issues, the charter schools’ achievement results offer a compelling argument against the continued financial drain on public schools to support corporate management organizations that educate three percent of the state’s minority students, especially since only 1.3% of those students are in schools that show positive growth scores. Neither the CREDO nor the Tennessee State Report on charter schools’ performance says anything about how Tennessee charter schools achieve the positive effect sizes they do, and both reports are entirely silent on the intensified resegregation that we see in charters.
However, with bold statements from CREDO like “eleven states deserve mention as states where charter school performance outpaced TPS [traditional public school] growth in both [math and reading] subjects,” or from Tennessee that “19 [charter] schools exhibited positive trends that were statistically greater than zero, suggesting they were above the state average for students given their prior performance,” supporters of corporate reform continue to perpetuate the misleading meme that children receive added value when traditional public schools are shut down and turned into charters. Removed from the larger comparative context of what charter school students know and are able to do, and of the demographic, enrollment, class-size, and funding advantages charter schools hold over their public school peers in the same districts, these kinds of statements leave policymakers with the erroneous assumption that the most effective available strategy for improving education for minority students is to be found in segregated charter schools run by corporations and staffed by privileged beginners, who are trained to aggressively implement total-compliance, “no excuses” environments and techniques that they would never consider imposing on their own children.
In a 2009 Carnegie-funded report, Charles Barone points out that a focus on value-added gains, or growth in test scores, may downplay the need for interventions to address low proficiency rates: “Due to the projection toward proficiency being recalculated annually [in the TVAAS model], there is not necessarily a significant progression, over time toward proficiency . . . causing a delay of needed intervention at appropriate developmental times” (p. 8). So while showing academic progress, gain scores or growth scores easily mask the fact that minority and poor children are far below their well-heeled peers in becoming intellectually prepared for life and careers. And in masking the actual academic progress of poor and minority students, the state (and the nation) is let off the hook for maintaining and supporting an adequate and equally accessible system of public education for all students. At the same time, politicians and ideologues can celebrate higher “progress rates” for poor and minority students who are, in fact, left further and further behind.
Tennessee has a string of continuing reform “firsts”: from implementing value-added assessment, to being one of the first two states to win a Race to the Top grant, to singular recognition from Education Secretary Arne Duncan for “making unprecedented progress in producing statewide reform and boosting student achievement.” Yet a 2012 report by the Education Law Center showed that Tennessee compared poorly on four dimensions of education funding: funding level, funding distribution, state effort, and coverage.
Tennessee ranks 51st in the nation for funding level, described in the report as the “overall level of state and local revenue provided to school districts” in comparison to other states’ average per-pupil revenue (pp. 6, 12).  When the distribution of funding across local districts is compared to student poverty rates in those districts, Tennessee received a grade of C relative to other states (pp. 7, 14).  Tennessee’s funding effort, or “the ratio of state spending to state per capita gross domestic product (GDP),” rated an F (pp. 7, 22), while on “coverage,” the proportion of the state’s school-age children who attend public rather than parochial or private schools, Tennessee ranked 46th in the nation (pp. 7, 24).
Tennessee’s commitment to the TVAAS has diminished educational diversity in the state while stunting the educational opportunities of children, particularly in urban areas, in ways that are likely to have lasting negative effects in adulthood. With workforce skill demands continually shifting in ever-changing economic environments at home and abroad, students enmeshed in testing protocols have not been provided with the intellectual and applied skills they most need to survive and thrive, or to prepare them as literate creators and innovators, responsible decision-makers, and collaborative problem-solvers. The focus on state tests and on the results of value-added manipulations has diminished student access to learning environments that allow and encourage the development of high-level thinkers and doers.
By every psychometric comparison dear to the hearts of testing reform advocates, whether it is the SAT, ACT or NAEP, Tennessee has not improved the education of its citizens in relation to national testing trends, nor has it successfully addressed the funding equity gaps.
With $326,000,000 spent on assessment, the TVAAS, and other accountability-related costs since 1992,[1] the State’s student achievement levels remain in the bottom quarter nationally (Score Report, 2010, p. 7).  Tennessee received a D on K-12 achievement when compared to other states on NAEP achievement levels and gains, poverty gaps, graduation rates, and Advanced Placement test scores (Quality Counts 2011, p. 46).  Educational progress made in other states on NAEP [from 1992 to 2011] lowered Tennessee’s rankings:
• from 36th/42 to 46th/52 in the nation in fourth-grade math[2]
• from 29th/42 to 42nd/52 in fourth-grade reading[3]
• from 35th/42 to 46th/52 in eighth-grade math
• from 25th/38 (1998) to 42nd/52 in eighth-grade reading.
The Public Education Finances reports (U.S. Census Bureau) rank Tennessee’s per-pupil spending 47th for both 1992 and 2009.  Subsequent studies rank Tennessee last in education spending, with less than 54 percent of that funding actually reaching classrooms, as reported by Tom Humphrey. Once state legislators were led to believe that the teacher is the single most important factor in improving student academic performance, they found reason to justify lowering education spending as a priority.
During the 2012 legislative session, Tennessee Governor Bill Haslam led an unsuccessful attempt to repeal the average class size requirement of the Education Improvement Act of 1992 in order to fund nominal increases to teacher salaries. The urge, then, to make educators “accountable” for improvements to education seemed as robust in 2012 as it was in 1992, with corporate education reformers supporting the national application of value-added assessment with even tighter testing accountability and higher stakes for those with the least power to alter the conditions that most significantly contribute to low school achievement.
Even when faced with a series of State Supreme Court rulings, low national educational rankings, and continuing inequities between rich and poor systems, the executive and legislative branches of Tennessee government continue to fund public education inadequately, rather than delivering the promised “comprehensive approach to funding schools” that would enable “accountability standards for quality and productivity” (Goal 10 of the landmark 1992 EIA).
The state now has a 20-year record of requiring that poor school districts reach the high goals and accountability demands of the EIA without the funding needed to achieve those goals or satisfy those demands. At the same time, the state has taken advantage of value-added accountability measures to help camouflage the continuing disparities between rich and poor systems, while leaning increasingly on regressive sales tax increases that have proved as inadequate and as inequitable as the system they sought to remedy more than twenty years ago.  Meanwhile, Tennessee has allowed state resources to be shifted without fanfare toward purposes that leave education underfunded and the protection of the state’s most vulnerable citizens in jeopardy.
If value-added modeling offered some way to alter the vicious and mis-educative circle that generations of education reformers have helped to perpetuate with misplaced blame and irresponsible diversions from the structural problems that the neglect of poverty has allowed to multiply, then we could be more receptive to its arrival and spread.  Instead, the era of value-added assessment offers policy elites a continuing route of escape from accountability for decades of failure to address the categorical inequality that walls off our society’s and schools’ economic unfortunates who suffer its indignities.
What value-added modeling has contributed to the testing fairness formula by acknowledging different starting points in the testing race, it takes away by helping to conceal the chasms that constitute the inequalities that mark the radically different starting points of the disadvantaged and the privileged.  Meanwhile, Tennessee remains, by State Commissioner Huffman’s own admission, in the “bottom ten states in the country in educational outcomes,” even after twenty-plus years of corporate education reform and value-added assessment.
When value-added gain scores become the currency used to purchase testing success and, therefore, educational credibility, then the distances that poor children are left behind by festering economic disadvantage are overlaid by the cheap veneer of test score improvements.
When a poor child, then, comes to demonstrate a year’s worth of test score growth, just as a middle-class child from the leafy suburbs does, the achievement gap will not have been closed but, rather, covered over by a thin veil that conceals the knowledge divide that cuts deeper each year between the disadvantaged and the privileged. Are we willing to confine our attention to such a false version of educational equality, even as the education debt to those left behind remains unpaid and grossly overdue?
References
(In-text references with live web links provided are not included in the list below). 
Ballou, D. (2005). Value-added assessment: Lessons from Tennessee. Retrieved from http://dpi.state.nc.us/docs/superintendents/quarterly/2010-11/20100928/ballou-lessons.pdf
Barone, C. (2009, March). Are we there yet? What policymakers can learn from Tennessee’s growth model [Technical Report]. Washington, DC: Education Sector.
CREDO. (2013). National charter school study 2013. Center for Research on Education Outcomes. Stanford, CA: Stanford University.
Horn, J., & Wilburn, D. (2013). The mismeasure of education. Charlotte, NC: Information Age Publishing.
National Research Council and National Academy of Education. (2010). Getting value out of value-added: Report of a workshop. Committee on Value-Added Methodology for Instructional Improvement, Program Evaluation, and Educational Accountability; Henry Braun, Naomi Chudowsky, and Judith Koenig, Editors. Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
Quality counts, 2011: Uncertain forecast. (2011, January 13). Education Week. Retrieved from http://www.edweek.org/ew/toc/2011/01/13/index.html
State Collaborative on Reforming Education. (2010). The state of education in Tennessee (Annual Report). Retrieved from http://www.tnscore.org/wp-content/uploads/2010/06/Score-2010-Annual-Report-Full.pdf
Tennessee State Department of Education. (2012). 2012 Tennessee Report Card. Retrieved from http://edu.reportcard.state.tn.us/pls/apex/
The Education Improvement Act, Tennessee Public Acts, Chapter No. 535, pp. 19–49 (1992).
U.S. Census Bureau. (2011). Public education finances: 2009 (G09-ASPEF). Washington, DC: U.S. Government Printing Office.
U.S. Department of Education. (2011, January). Final report on the evaluation of the growth model pilot project. Washington, DC: Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service.

[1] This information was gathered from the Office of Education Research and Accountability (2004) and the Tennessee State Budgets from 2004-2011.
[2] The changes to the mathematics framework introduced in 2005 for grades 4 and 8 were minimal, which allowed for the continued reporting of results from previous assessments beginning with 1990.
[3] The reading framework was updated in 2009. Results from special analyses determined that the 2009 and subsequent reading assessment results could be compared with those from earlier assessment years (NAEP, 2011, http://nces.ed.gov/nationsreportcard).
