The New York Times ran a story on January 28, 2006, entitled “Public-School Students Score Well in Math in Large-Scale Government Study.” Well, it wasn’t a “government” study; it was merely paid for by a government grant. And when one looks into the methodology of the study and the histories of its two researchers, the results appear highly suspect.
The Times wrote:
A large-scale government-financed study has concluded that when it comes to math, students in regular public schools do as well as or significantly better than comparable students in private schools.
The study, by Christopher Lubienski and Sarah Theule Lubienski, of the University of Illinois at Champaign-Urbana, compared fourth- and eighth-grade math scores of more than 340,000 students in 13,000 regular public, charter and private schools on the 2003 National Assessment of Educational Progress. The 2003 test was given to 10 times more students than any previous test, giving researchers a trove of new data.
Though private school students have long scored higher on the national assessment, commonly referred to as "the nation's report card," the new study used advanced statistical techniques to adjust for the effects of income, school and home circumstances. The researchers said they compared math scores, not reading ones, because math was considered a clearer measure of a school's overall effectiveness.
The study itself says, but not until page 18, that “overall, due to the complexity of the issues involved, no single study can provide a definitive determination of the effectiveness of various forms of schooling.” The Times never mentions this caveat.
The study was based on the 2003 NAEP math exams for 4th and 8th graders. The actual results showed that public school students scored about one grade level below both private school and charter school students. But the authors of the study then set out to “normalize” the data by adjusting for sociological and home factors, including race, income, and ownership of a computer.
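The mechanism by which such an adjustment can reverse a raw gap is essentially Simpson’s paradox. The sketch below uses invented, illustrative numbers — not figures from the study — to show how one sector can score higher overall yet lower within every income stratum, once its enrollment is skewed toward wealthier students.

```python
# Invented, illustrative numbers: NOT the study's data.
# Two income strata; private schools here enroll mostly high-income students.
strata = {
    "low_income":  {"private_share": 0.1, "public_share": 0.7,
                    "private_score": 230, "public_score": 235},
    "high_income": {"private_share": 0.9, "public_share": 0.3,
                    "private_score": 255, "public_score": 258},
}

# Raw (unadjusted) means weight each stratum by enrollment share.
raw_private = sum(s["private_share"] * s["private_score"] for s in strata.values())
raw_public  = sum(s["public_share"]  * s["public_score"]  for s in strata.values())

print(f"raw private mean: {raw_private:.1f}")   # 252.5
print(f"raw public mean:  {raw_public:.1f}")    # 241.9

# Yet within EVERY income stratum, public students score higher here,
# so an income-adjusted comparison favors the public schools.
for name, s in strata.items():
    print(name, "public - private =", s["public_score"] - s["private_score"])
```

Whether such an adjustment is warranted depends entirely on which covariates are chosen and how well they are measured — which is exactly where the critique below takes aim.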
It is not until page 21 that we learn that the factors used to adjust the data, and thereby reverse the results seen in the actual scores, were drawn from a 2004 study co-authored by one of these same authors. Among the factors deliberately left out were “school discipline climate, teacher quality, and even parental involvement.” Most studies conclude that all three have major impacts on student achievement.
A major factor the researchers used to change the results was “Home Resources,” defined as whether a home regularly receives newspapers and magazines and owns a computer, an atlas, and books, coded by the number owned. But household income was already a separate factor, in the form of socioeconomic status (SES). Anyone familiar with census data knows that all of these items are more common in wealthier families. So the researchers were effectively counting household income twice.
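The overlap between “Home Resources” and SES is easy to demonstrate. The minimal simulation below uses synthetic households with an assumed relationship between income and resource ownership (not the NAEP microdata), and shows that the two supposedly separate covariates come out highly correlated — that is, they largely measure the same underlying variable.

```python
import random

random.seed(0)

# Synthetic households: income on an arbitrary scale, and a "home
# resources" index (newspapers, computer, atlas, books) that rises
# with income plus noise. The relationship is an assumption,
# chosen only to illustrate the collinearity argument.
incomes   = [random.gauss(50, 15) for _ in range(10_000)]
resources = [0.08 * inc + random.gauss(0, 0.6) for inc in incomes]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx  = sum((x - mx) ** 2 for x in xs)
    vy  = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(incomes, resources)
print(f"corr(income, home resources) = {r:.2f}")
```

Under these assumed parameters the correlation lands near 0.9, which is the sense in which adjusting for both variables amounts to adjusting for income twice.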
Another fudge factor appears in footnote 12, where the authors wrote, “In order to preserve data, students who reported that they did not know if they had a particular resource were coded as ‘no’ on the grounds that even if the resource was present in his/her home, it was not (directly) enriching the student’s home experiences.” If public school students were less likely to report their “resources” at home accurately, this assumption alone would call the study’s entire results into question.
On racial composition, the researchers used only “Black, Hispanic and American Indian.” Conspicuously absent from this list are Asian Americans. And anyone with the slightest knowledge of educational achievement in the United States knows that Asian Americans outperform all other racial categories, including “Caucasian,” on standardized tests like the NAEP.
A possible reason the data were manipulated to show public schools providing a better education, no matter that the actual test results show the opposite, appears on page 39, where the authors state that their results “call into question the basic premise of such reforms [meaning charter schools and voucher programs].”
An indication of the authors’ possible prejudices appears in the bibliography, which cites no fewer than nine papers written or co-authored by one of them, most of which challenge the premise that non-public schools produce a better educational product.
In short, had the reporters at the New York Times bothered to read the report they were covering, they would have found ample reason to question its conclusions. But because the study supported the Times’s mantra that competition between public and private schools for parents with limited resources is undesirable, the paper trumpeted the results as solid.
And sadly, a fair number of other national media outlets may take this study seriously without examining its methodology, in part because of this stamp of approval from the Times.