A National Assessment of Educational Punditry

Written by: David Casalaspi

Primary Source: Green & Write, December 16, 2015

As reported in an earlier blog post, the 2015 results for the National Assessment of Educational Progress (NAEP) were released last month. The results nationwide were disappointing. Overall, 67% of American 8th-graders are not proficient in math. For 4th-graders, it’s 60%. And achievement gaps between white and minority students remain mostly unchanged.

The results were a surprise since scores had been trending slowly upwards since the 1990s. Peggy Carr, the acting Commissioner of the National Center for Education Statistics, which administers the NAEP, called the results an “unexpected downturn.”

Photo of the Hindenburg disaster: Some people would have you believe the NAEP scores were a disaster. Photo courtesy of Wikipedia.

After the results were published, scholars and pundits across the country weighed in. Rick Hess of the American Enterprise Institute called the results a “train wreck,” and education historian and blogger Diane Ravitch wrote that the scores “showed the fiasco of Race To The Top and No Child Left Behind.” The Network for Public Education, an advocacy organization founded by Ravitch, likewise attributed the scores to the “failure of corporate reforms” during the Bush and Obama administrations.

The 2015 NAEP was also the first national test administered since the widespread implementation of the Common Core State Standards (CCSS), and accordingly, the CCSS was one of the most frequent targets in discussions of the NAEP. Unsurprisingly, opponents of the Common Core used the disappointing scores as ammunition against the standards. The Massachusetts-based Pioneer Institute, a longtime critic of the CCSS, said the scores represented a “troubling indication of the negative impact of Common Core’s academic mediocrity.” Grover Whitehurst of the Brookings Institution provided additional grist for CCSS critics when he conducted a back-of-the-envelope analysis showing that the 28 states that had used a Common Core assessment in the 2014-2015 school year experienced significantly larger drops on the NAEP than the remaining 22 states.

In response, supporters of the Common Core did not deny that the new standards may have depressed scores, but they took a more conciliatory approach to the NAEP’s implications. They attributed the dip in performance to the significant curricular and pedagogical changes schools have been undergoing as they transition to the CCSS. This dip should be only temporary, CCSS supporters argued: once schools become fully immersed in the CCSS and work out the kinks, scores should rise. William Bushaw, the executive director of the National Assessment Governing Board, which oversees the NAEP, exemplified this line of thinking, telling the press that “the majority of our schools are undergoing significant changes in how and what we teach our students. It’s not unusual when you see lots of different things happening in classrooms to see a decline before you see improvement.”

Secretary of Education Arne Duncan echoed these sentiments, chalking up the underwhelming scores to a fairly routine “implementation dip.” He reiterated that the scores were in no way a sign of failure for the Administration’s policies or the Common Core. “I wouldn’t be surprised if there will be folks out there who will use the results from this one round of the NAEP as an opportunity to say that raising expectations for our kids was the wrong thing to do, and to turn back the clock. That would be a mistake…. This is the ultimate long-term play,” he said.

Shortly after his statement, Duncan was roundly criticized for distancing himself from the scores. After all, in 2013 he had attributed NAEP score increases in ten states to either their adoption of the Common Core or their participation in the Administration’s Race to the Top program. Many wondered how the Administration could claim responsibility for test score gains but not for declines. The answer, of course, has something to do with politics.

What Does This Really Mean?

What does all this mean? Not a whole lot. It is important not to overreact to the results of a single test administration, especially when the dips were relatively slight in absolute terms. Year-to-year fluctuations are to be expected, and it would be foolish to extrapolate long-term trends from a single data point. Furthermore, the NAEP is designed only to provide a snapshot of national achievement. It is like a thermometer that takes your temperature but offers no clues about why you might be running a fever. As such, it is inappropriate to condemn (or validate) entire policies on the basis of these results. Researchers and pundits would therefore be wise to simply step back, take a deep breath, pour themselves a drink, and wait for the next results to be released in 2017.

David Casalaspi
David Casalaspi is a third-year student in the Educational Policy Ph.D. Program. Before beginning his graduate studies, he attended the University of Virginia, where he received his B.A. in History and spent his senior year completing a thesis on the rise of federal accountability policy between 1989 and 2002. Additionally, while at UVA, David designed and taught a two-credit seminar for undergraduates on the political history of the American education system and also received some practical experience with policymaking through work with the City Council of Charlottesville, VA. His current research focuses on the politics and history of education, and particularly the way that education rhetoric and issue framing efforts affect the implementation of school reforms.