
Bell: Base Policy Decisions on the Best Research

As the annual budget debate plays out in Congress, social programs for the poor, such as the Head Start preschool readiness program, face cuts.

Throughout the process, research will be used by both sides to bolster their arguments for or against budget cuts. Lawmakers will be inundated with research and information on “what’s working” and “what’s not” in many social programs aimed at improving the lives of the nation’s most disadvantaged.

The Head Start program is likely to stoke a fiery debate. Over the years, numerous studies on the school readiness program have offered contradictory findings that have muddied the waters for those tasked with deciding how much the government should spend. Despite all the research, the question remains: Does early childhood assistance help disadvantaged preschoolers?

One decades-long and often-referenced source, the HighScope Perry Preschool Study, answered this question with a resounding “yes,” as have other recently published examinations of Head Start. But the influential HighScope study is based on 128 kids in one location (Ypsilanti, Mich.) in the early 1970s.

In contrast, the National Head Start Impact Study, which I co-authored, included 4,667 children in 84 locations. Its most recently available findings showed few important effects on children’s lives, results that are causing the nation to rethink its early childhood policy options. Both studies used the same research design, but only the second addressed programs in diverse, real-world settings. Critics and advocates alike are awaiting a new report from this comprehensive assessment, due later this year.

Pressure has grown on both sides of the political aisle to tie federal funding for Head Start and other social programs to results. Already, home visits by nurses for at-risk parents and six other social interventions are part of a pilot program to measure success, with future financing to be determined after a thorough evaluation of performance data.

Research comes in many forms, and not all research is created equal. So how can policymakers, and the rest of us, be sure they are basing their decisions on the best scientific evidence among the many published studies on the effectiveness of government-funded programs?

Here are four tips for assessing social policy research against the broad principles researchers consider the “gold standard” of reliable evidence:

1. How Good Is the Data? Check whether the investigators studied a large, representative data sample. Make sure they didn’t focus on a narrow group or exclude important segments of the population that would be affected by the specific program or policy being examined. And be sure the data, like those in the national Head Start study, reflect the more typical situation of a program implemented with limited resources in many places across the country.

2. Beware of Apples-and-Oranges Comparisons. When deciding whether better results for program participants are caused by the program, it is vital to check that the study compares equivalent groups of people. Better outcomes can in fact emerge from all manner of pre-existing differences between participants and nonparticipants that invalidate the comparison. When study authors conclude a social program’s specific strategy or intervention worked, ask yourself whether there are alternative explanations for the patterns in the data.

As an example, for years, published research on state vocational rehabilitation programs reported big benefits, but only by comparing “apples” (people who successfully completed rehab and training) with “oranges” (participants who dropped out early or were not successfully rehabilitated). Subsequent gold standard studies, comparing apples with apples, did not bear out these results. In light of the more accurate findings, agencies running disability programs have shifted from providing employment services to strengthening the financial incentives to work for this population.

3. Look for Random Assignment. Give findings from randomized controlled trials the greatest credence. Just as medical researchers run a lottery to give the new drug they are testing to some and a placebo to others, studies of social issues randomly divide a population of interest into a “treatment group” that receives an intervention and a “control group” that does not. These are “apples” and “apples” comparisons of families or individuals taken from the same tree.

4. Don’t Overinterpret Negative Findings. Scientists set a very high bar for proving a social program works, so it is possible that many programs that do work fail to meet this standard, often because of small study sample sizes (the simple simulation sketched after these tips illustrates how a small sample can blur a real effect). Researchers often fall into the trap of presuming that studies which fail to prove program effectiveness have proved program failure. Don’t do this. “Statistically insignificant” findings have simply failed to show clear patterns in either direction and should not be the death knell for programs until clearer evidence of failure emerges. In some instances, such as the national Head Start study, the investigators, myself included, caution that “we cannot with this study sample make a confident conclusion either way.” That scientific truth is often lost in the public wrangling over where the study points in terms of policy.
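To make the logic of tips 3 and 4 concrete, here is a minimal, purely illustrative sketch in Python. The program effect, the score scale and the participant counts are invented for illustration (the sample sizes merely echo the two studies mentioned above); it does not use any real Head Start data.

import random
import statistics

# Hypothetical sketch only: the 5-point effect and 0-100 score scale are
# invented to illustrate the logic of a randomized trial, not any real study.
random.seed(1)

def run_trial(n_participants, true_effect=5.0):
    """Randomly assign participants to treatment or control (the 'lottery'),
    then return the treatment-minus-control difference in mean outcomes."""
    ids = list(range(n_participants))
    random.shuffle(ids)
    treatment = set(ids[: n_participants // 2])   # receives the program

    outcomes = {}
    for i in ids:
        score = random.gauss(50, 15)              # noisy baseline outcome
        if i in treatment:
            score += true_effect                  # the program's real benefit
        outcomes[i] = score

    t_mean = statistics.mean(v for k, v in outcomes.items() if k in treatment)
    c_mean = statistics.mean(v for k, v in outcomes.items() if k not in treatment)
    return t_mean - c_mean

# With few participants, chance differences can swamp a real 5-point effect,
# so the estimate bounces around and may look "statistically insignificant";
# with thousands of participants, the estimate settles near the true effect.
print("estimate from a 128-person trial:  ", round(run_trial(128), 1))
print("estimate from a 4,667-person trial:", round(run_trial(4667), 1))

Running the sketch a few times shows the small trial swinging well above and below the true effect while the large one stays close to it, which is why sample size matters so much when interpreting a “negative” finding.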

By following these gold standard criteria, policymakers can make better-informed decisions about directing government dollars: expanding the programs that are working and fine-tuning or eliminating policies that are not. The economic situation may push us to make quick decisions about our nation’s future, but we owe it to everyone to be sure we’re making them based on sound science.

Stephen Bell is a principal associate/scientist and a senior fellow with Abt Associates.
