Distortion in A-F Accountability Scales

Written by: David Casalaspi

Primary Source: Green & Write – February 8, 2017

Today, every state in the U.S. is required by federal law to produce annual report cards showing how each school is performing in terms of student achievement. States, however, are free to design these accountability report cards as they please, and as a result a variety of formats exist. One of the most popular is an A-F grading system, in which schools receive an A, B, C, D, or F based on the performance of their students. The thinking behind simple A-F report cards, a favorite accountability tool of reform-minded organizations like the Foundation for Excellence in Education, is that they ostensibly provide clear, simple, and easy-to-understand information to the public. Instead of having to peruse a dense document full of numbers and percentages, citizens can easily identify good and bad schools because everyone knows what an A means and everyone knows what a C means.

Recent research, however, has cast doubt on this thinking. In the past few years, researchers studying school accountability (including some right here at MSU) have set out to understand how the more cosmetic elements of school report cards can affect public perceptions of school quality (see here and here for two key examples). In other words, does the formatting of a school report card—e.g., whether it uses an A-F grading system, a numeric proficiency rating, or an Advanced/Proficient/Basic scale like the NAEP's—influence the public's perception of a particular school's performance independent of that school's actual performance? Is School X likely to be viewed differently by members of the public depending on whether the report card shows an A, a high numeric proficiency rating, or an "Advanced" label?

It turns out the answer is yes. In one experimental study, researchers showed that members of the public can interpret the same information differently depending on the format of school report cards, and that the A-F scale is especially prone to distortion. When presented with an A-F scale, members of the public tended to believe a hypothetical A-grade school was better than an identical school whose achievement was reported in a different format (such as a proficiency percentage or an "Advanced" label). They also tended to view a C school much more negatively than identical schools labeled with either a middling proficiency percentage or a "Basic" label. The results thus suggest that how states design their school report cards can affect the public's perceptions of school quality independently of actual school performance—in some cases inappropriately eroding public support for schools, and in others inappropriately inflating it.

Conclusion

(Photo: Texas is one state that recently moved to an A-F school accountability system. Courtesy of Wikimedia.)

This research is timely given that A-F grading scales have been in the news in recent years. In 2013, for example, Florida's Education Commissioner resigned after allegations arose that, in his previous job, he had meddled with Indiana's A-F grading system to give a higher grade to a charter school operated by a wealthy donor. In 2014, an A-F grading system was the subject of a school-funding lawsuit in New Mexico, which alleged that the grading system discouraged teachers from working in high-need schools. More recently, school district administrators in Alabama and Texas have begun protesting legislation in those states mandating an A-F grading system for school accountability reports. In Texas, an analysis conducted by the Austin American-Statesman found that the state's A-F accountability system actually makes low-income schools look worse than they did under the previous system.

In all of these debates, practical concerns about the way school report cards can distort perceptions of true performance are evident. As state policymakers across the country consider tinkering with their school accountability report cards in the coming years, it will be important for them to consider how seemingly cosmetic changes to report cards can have real effects on public perceptions of schools and, in turn, on operations on the ground. An A-F grading system might be appealing for its simplicity, but it also risks painting with too broad a brush, and policymakers will therefore have to be cautious.


Contact David: dwc@msu.edu

David Casalaspi
David Casalaspi is a third-year student in the Educational Policy Ph.D. Program. Before beginning his graduate studies, he attended the University of Virginia, where he received his B.A. in History and spent his senior year completing a thesis on the rise of federal accountability policy between 1989 and 2002. Additionally, while at UVA, David designed and taught a two-credit seminar for undergraduates on the political history of the American education system and also received some practical experience with policymaking through work with the City Council of Charlottesville, VA. His current research focuses on the politics and history of education, and particularly the way that education rhetoric and issue framing efforts affect the implementation of school reforms.