So then the Boston Globe has become the carrier of the corporate message of the “bold reformers,” or the reckless corporatizers, depending on your views of the relative importance of renewing the public commitment to public education—as opposed to embracing private entrepreneurial investments at taxpayer expense, all disguised as public giving and philanthropic goodness.
The Globe first ran a big news piece with graphs and stats last Monday ("Charter Schools Grade Highest"), followed up with editorials and op-eds heavy with quotes from insiders at the Boston Foundation. In none of these pieces will you find any of the caveats and limitations that the authors of the study carefully detailed to keep respectable news sources and other readers from doing what the Globe has done. I don't blame Boston Foundation President Paul Grogan for pumping his study. And pump he did, of course, in the Boston Globe:
Paul Grogan, president of the Boston Foundation, which funded the research, was more direct. "There is no justification for keeping a charter cap in place that is denying urban, mostly black and brown children the opportunity for a demonstrably better result," Grogan said.

After all, Grogan's non-profit Boston Foundation (with almost a billion dollars in non-profit assets) paid for the study and has a huge vested, shall we say, interest in removing the cap on charter schools, thus removing the top on the tax-credit cookie jar for all those hungry edu-entrepreneurs seeking to do good. But it is unconscionable that the Boston Globe offers such a one-sided presentation of a schooling situation that is anything but one-sided.
So, then, a few observations on the limitations of the study, since it is obvious that the Globe and the folks at the Boston Foundation are trying their best to paint a picture that exists only in the sunny side of the heads of the "bold reformers." I will leave the statistical surgery to someone more competent and rely largely on quotes from the authors themselves.
The study, Informing the Debate: Comparing Boston’s Charter, Pilot, and Traditional Schools is really two separate research designs under the same cover. It can be downloaded at the Boston Foundation website.
The first part of the study is an observational study, and it examines MCAS test score differences among charters, pilot schools (a sort of hybrid with some charter and some traditional characteristics), and Boston Public Schools (BPS). As the researchers themselves indicate, the findings of the observational design are easy to fault because of selection bias: background characteristics that go unaccounted for in the study. Besides attracting parents who are more eager to seek out opportunities for better academic results for their children, we know that the charter schools in this study have fewer special education students, fewer English language learners, and fewer poor students. No wonder, then, that these charters outperform the BPS schools and the Pilots.
Charter Schools also serve a smaller proportion of special education students, free- and reduced-price lunch students, and English learners than do the traditional BPS schools. In addition, high school Charter students tend to come in with substantially better math and ELA performance on the MCAS than those in traditional BPS schools (.412 standard deviations higher in math and .412 standard deviations higher in ELA) (p. 15).

The authors of the study further concede that
students who go to Pilot and Charter Schools are different in important ways from those that do not. We need to take account of these differences before judging the relative effectiveness of these different school models (p. 18).

In an earlier section, entitled "Caveats," the authors go further:
Each design is described in detail on page 8. This study is limited by the constraints of our two research designs. The observational study includes all schools but does not control for unobserved differences in background characteristics. The lottery study controls for all differences in students' background, including unobserved differences, but does not include all schools.

The second research design is what the authors call a lottery study. Students who applied for and were selected into pilot and charter schools were compared to students who applied and were not selected to attend. Their individual test scores were tracked over time to compare the effects of charter schools, pilot schools, and BPS schools. Quite ingenious in design, but extremely limited in its sampling, as noted in the Caveats above. What results is a skewed picture based on a handful of the most popular charters, compared with pilots whose lottery selection is much more limited, and with BPS schools that accept any student who walks in the door (remember public schools?).
A second caveat relates to the observed control variables used in our study. These include indicators for participation in special education and limited English proficiency. These broad categories may disguise large differences in student groups. Special education students range from those needing intensive all day services to students needing a little extra time in a resource room. English learners may know no English at all or have some proficiency. It is possible that Pilot and Charter Schools serve different proportions of these subgroups. Unfortunately, our state data set does not provide finely detailed breakdowns for these two variables in a manner consistent or comprehensive enough to be useful for this study (p. 6).
. . . it's important to keep in mind that while the lottery study uses a stronger research design than the observational study, both the Charter and Pilot lottery results come only from schools and years in which the demand for seats exceeds the number of seats. Our Charter lottery results also omit schools and years for which lottery records are missing or incomplete. These considerations have the largest impact on the sample of Charter middle schools in the lottery study, where the estimated test score effects are largest.

And the non-experimental results, remember, are the ones from the observational study that the authors told us we have to take with a big grain of salt. What does this mean? It means that this study, the one the "bold reformers" are crowing about, did not include test data for students from the lower tier of charter chain gangs, where, of course, the scores are most likely to be equally bad or worse than they are in the poorest public schools that the edu-preneurs don't give a damn about. If they did, they would be putting their money where their mealy mouths are.
On balance, our lottery-based findings provide strong evidence that the charter model has generated substantial test score gains in high-demand Charter Schools with complete records. On the other hand, these results should not be interpreted as showing that Boston Charters always produce test score gains. In Charter Schools with lower demand and incomplete lottery records, we have to rely on non-experimental results (p. 39).
And what about this lottery business? Didn't this "scientifically-based" study show the charter schools superior in test results to the pilot schools? Well, just like charter schools, not all lotteries are equal. The lotteries for the pilot schools were conducted after all the guaranteed seats were filled, which means that most of the students attending the pilot schools are there not because of a lottery draw but because they live within a guaranteed proximity of the school. Here comes the self-selection bias again, because ALL of the charter school students are there because their parents cared enough to fill out the application to get them into the lottery. Apples to oranges!
And how many charter school kids are we talking about in this earth-shaking study? Well, in the middle schools, where the significant math gains earned a big chart placement in the Boston Globe, the total sample was 953 students from four charter schools. A pretty meager sample on which to build an argument that charters have kicked butt over the pilots and the BPS.
But remember, it doesn't take much when the media hammer the intended distortion until it becomes an all-pervasive meme that is passed from newspaper to TV and back again. After all, it was just one study (one by another Harvard rock star, Paul Peterson, that could not even cover its own statistical contortions) that launched the urban voucher movement.