Written by: Abbey Frazier
Primary Source: Michigan Policy Wonk Blog, July 17, 2017
In an increasingly ideological and partisan political world, the practice of evidence-based policymaking is a critical tool for ensuring the most efficient use of government resources. Objective outcome analyses and rigorous program evaluations can help policymakers and state officials guide limited funds towards investments with a proven track record (i.e., evidence) and away from programs that fail to deliver. Though all states adhere to this practice to some degree, the extent and consistency with which evidence-based policy is embedded in state budgetary processes vary from state to state.
Early this year, a report released by the Pew-MacArthur Results First Initiative evaluated states’ use of evidence-based policymaking according to six key areas of action:
defined levels of evidence
inventories of existing programs
comparisons of program cost and benefits
reported outcomes in state budgets
targets of funds to evidence-based programs
required actions written in state law
States were scored on these six actions across the policy areas of behavioral health, child welfare, criminal justice, and juvenile justice, and categorized as either “trailing”, “modest”, “established”, or “leading” in their efforts towards evidence-based policymaking. While 38 of the 50 states fell somewhere between “modest” and “established”, Michigan was one of seven states found to be lagging when it came to carrying out concrete evidence-based actions in the state budget process. States that led the way included Connecticut, Minnesota, Oregon, Washington, and Utah, while the states deemed “trailing” were Michigan, West Virginia, New Hampshire, Maryland, Montana, North Dakota, and South Dakota.
States scoring highly on defined levels of evidence had clear, consistent, multi-tiered designations representing the varying strength of the research methods underlying program evaluations. Most state definitions of evidence ranged from single-tiered to multi-tiered, based on elements like the type, rigor, and number of existing research studies demonstrating programmatic success. For example, the State of Nebraska developed a seven-tiered definition of evidence that ranked the state’s juvenile justice programs all the way from “fully evidence-based” to “insufficient evidence”, with varying levels of strength in between. Programs categorized as “fully evidence-based” were defined as having at least one randomized controlled trial or two quasi-experimental studies demonstrating program success.
The second key action identified for successful evidence-based policymaking involves the development and maintenance of a comprehensive program inventory. Program inventories specify each program’s level of evidence (as determined by the definitions of evidence) and provide a simple way to review and rank programs by outcomes. Almost all states inventoried programs in at least one area of policy, and 29 organized their inventories by level of evidence, but few were found to inventory by evidence across different areas of policy.
The third key action identified in the report, a clear comparison of program costs and benefits, was found in only seventeen states across the policy areas surveyed. Of the seventeen, sixteen monetized program outcomes and compared them with costs to determine each program’s return on investment. Washington State was found to have developed an advanced cost-benefit model for human services policy areas, and eight other states utilized a cost-benefit model developed through public-private partnerships.
The inclusion of program outcomes in official budget documents is the fourth action identified in successful evidence-based policymaking. While most states compile and report outcome data in some fashion, only thirteen require this information for each program within official budget documents. For example, in a funding increase proposal for Minnesota’s adult offenders’ supervision program, a comparison of recidivism outcomes with and without the program was included in the Governor’s biennium budget.
Formally targeting state dollars towards evidence-based programs is the fifth key action identified in the report. To be considered ‘advanced’, a state must officially designate at least 50 percent of funds to programs with proven evidence; only five states did so. Most states targeted funds based on evidence when it came to grants, service providers, or contract bids. In Indiana, for example, counties that receive state funds for community corrections are required to use evidence-based programs and undergo regular audits.
The final evidence-based action identified requires writing the previously mentioned key actions into state law. Thirty-three states and D.C. have written at least one of the aforementioned key actions into state statute, administrative code, or executive order in at least one of the four policy areas. Tennessee, for example, created legislation outlining the share of Department of Children’s Services funds required to go to evidence-based programs, with an eventual goal of 100 percent. Most evidence-based actions written into law concern program inventories; Connecticut, for instance, adopted a law in 2015 requiring an inventory of criminal justice and juvenile justice programs that includes treatment population, program outcomes, expenditures, and the evidence base.
These six key actions of evidence-based policymaking can serve as a blueprint in guiding decision makers towards more efficient uses of government resources and away from a more partisan policymaking approach. Though the state survey is limited to four areas of policy, states can utilize these concrete actions in their own budget processes and apply them across areas of policy as they see fit.