Recap of Evidence 2.0: Exploring New Approaches for Applying Evidence in Active, Real-Time Decision-Making Environments

Introduction

In the five years since the U.S. Commission on Evidence-Based Policymaking issued its unanimous and bipartisan report to the President and Congress, the discourse and practice surrounding evidence-informed decision-making have advanced. Evidence Act implementation expanded the federal government’s capacity for evidence-building activities by establishing or formally recognizing a new C-suite of data officials in government: Chief Data Officers, Evaluation Officers, and Statistical Officials. These positions are accelerating the growth and maturity of the “evidence ecosystem.” The National Secure Data Service Act, passed in mid-2022, shows that knowledge production, data linkage and infrastructure, and evidence-building capacity are clear and established priorities for the federal government.

As agencies produce more evidence, a critical question remains: How is evidence being meaningfully and effectively used? More specifically, how can those analyzing data and producing evidence keep end users in mind so that evidence can be leveraged for impact and the public good?

As part of the White House Year of Evidence for Action, the Data Foundation convened an Evidence Forum on October 18, 2022, in partnership with the White House Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) to explore answers to this question and to introduce the Evidence 2.0 model—an approach for applying evidence in dynamic, real-world decision-making contexts. Live-audience polling indicated that the topic resonated: 65% of respondents identified themselves as both evidence producers and users, and participants voiced a wide array of barriers to evidence use, such as time, culture, and evidence availability, that the Evidence 2.0 model is intended to help address.

Audience members submitted the barriers they face when using evidence.

A majority of responding audience members described themselves as both producers and users of evidence.



Perspective of an Evidence User

The U.S. Department of Education is responsible for creating an efficient, fair, and financially sound structure for higher education. Deputy Under Secretary and Chief Economist Jordan Matsudaira’s keynote address stressed that research generally informs higher education policies, but it is rare to find evidence ready-made to inform decision-making.

In my experience, it’s very rare to face a decision where there is existing evidence off the shelf that provides ready-made answers for us to take that evidence and slot it into the decision-making process, and from that evidence say, ‘OK we’ve looked at the evidence and we know what to do. We should obviously do one of these things.’
— Deputy Under Secretary and Chief Economist Jordan Matsudaira

He highlighted several complications that arise when using evidence to make policy decisions in government contexts:

  • What question are we asking? What evidence do we need to answer it? This is the first step in evidence-informed decision-making and often the most difficult. For example, when the Department of Education sets a goal to increase the number of college-educated Americans, there are many different potential policy approaches to analyze. To increase the college completion rate, the department must decide whether it will look at the impact of Pell Grants on graduation rates, tuition reduction, or income supplements, among other options. All three approaches address college affordability, but each requires different evidence.

  • What evidence already exists? The specific data required for a policy decision, more often than not, do not yet exist. So, agencies must go out and produce the data themselves, or find someone who can produce data for them. In cases where outside researchers cannot help, agencies must often rely on their own incomplete data. For example, when analyzing student loan repayment programs, the Department of Education needs detailed historical earnings data from student borrowers, but the pre-existing survey data miss low-income borrowers. This discrepancy skews the picture of the typical loan holder and makes it difficult to measure and predict the impacts of student debt reforms. Deputy Under Secretary Matsudaira said a great deal of time and resources are spent running simulations to fill in gaps in income data.

  • How do we synthesize and extrapolate from the evidence that we do have? Once all available evidence has been collected, researchers and analysts must decide which questions they can answer. Because of gaps in the data, decision-makers must be careful not to draw unsupported conclusions from incomplete data sets. Deputy Under Secretary Matsudaira recommended translating evidence for policymakers with appropriate context and recommendations about policy implications.

He proposed two solutions: 

  1. Given sufficient resources, agencies should recruit and retain more research staff in-house to produce and synthesize evidence in real time for policy decisions.

  2. Agencies should have political leadership in contact with both evidence producers and policymakers—such as the Department of Education’s Office of the Under Secretary—to ensure there is a voice for evidence-informed decision-making in the policymaking process.


Panel: Designing Evidence for Users

Evidence 2.0 is about innovating, so that we can get the evidence to the people who use it, in a way that matches what they need, and in a way that is both accessible and actionable in real time.
— Dr. Jennifer Brooks, Center for Impact Sciences

There is still plenty of room to improve evidence production, but it is equally important to consider how that evidence will be used, by whom, and for what. The Evidence 2.0 model takes that next step, considering the needs of end users. Based on her experience leading the federal Peer Learning Group focused on Core Components Analysis, Dr. Jennifer Brooks of the Center for Impact Sciences said evidence needs to be broken into more manageable and applicable categories, be more dynamic and adaptable, and align with the needs of the communities using and benefitting from the evidence. The Core Components model is one method for doing so.

The Core Components Model

The Core Components model, explained Jason Saul of the Center for Impact Sciences, is not a strategy of looking at ‘Program A’ and ‘Program B’ to identify which one worked best. Rather, it is a way of looking at the underlying structures of programs to figure out why Program A worked, and why Program B didn’t. The goal of the Core Components model is to pinpoint design features and elements that a wide array of successful programs share. Once identified, those common features can be standardized and used to construct effective programs consistently.

Making evidence actionable through the Core Components model entails several innovations:

  • Focus on external, generalizable validity. The singular attention given to internal validity in research often comes at the expense of external validity, or how well a study’s findings generalize beyond the setting in which it was conducted. Researchers should ask themselves how their successful models can be put to use in other situations, Mr. Saul said.

  • Shift from post-hoc to ex-ante evidence. Too much time is spent looking back at research to determine what went right or wrong. Mr. Saul encouraged the audience to begin thinking about the predictive capacity of modeling. Rather than waiting months or years to determine the validity of a framework, more effort should go toward using past successes to predict what will work next.

  • Consider component-based rather than program-level evidence. When scientists standardize evidence in meta-analysis, they typically standardize the effects of studies rather than the intervention strategies of those studies. To make evidence actionable, Mr. Saul said researchers need to look at which underlying components produced successful outcomes across many different programs instead of paying attention only to those outcomes.

Three things are driving Evidence 2.0: data standardization, evidence synthesis, and matching prediction with program features. Mr. Saul recognized those in the federal government who are doing groundbreaking work in this field; the next step, he said, is centralizing and institutionalizing these practices. The Evidence Forum panel then laid out how their agencies are working with the Core Components model to make its vision reality.

  • Cheri Hoffman, Acting Commissioner, Administration on Children, Youth and Families: The Administration on Children, Youth and Families realized there was a largely untapped database of social science research dedicated to youth prevention programs related to pregnancy, school drop-outs, crime, and other topics. Her team analyzed the data and identified three common outcomes: externalizing behaviors, social competence, and self-regulation. These were used to create “practice guides” for the social science community that offer standardized, proven techniques for achieving those three outcomes. The project, called Evidence for Program Improvement, is available in an interactive format at youth.gov/epi.

  • Kaitlyn Sill, Office of Research, Evaluation, and Technology, National Institute of Justice, U.S. Department of Justice: Over the past ten years, the Office of Justice Programs has moved from a rating system for program outcomes (effective, promising, no effect) to rating practices, aligning with the Core Components model. For example, there is now a database showing which types of police body-worn camera practices are most likely to produce positive outcomes. 

  • Mary Hyde, Office of Research and Evaluation, AmeriCorps: When evaluating grant funding proposals, AmeriCorps has begun to analyze the structures of different proposals. By compiling evidence on successful grants, the agency has been able to assess which grant programs are likely to produce desired outcomes, and the grant application process is now structured to identify which proposals have Core Components that are likely to lead to success. This allows AmeriCorps to fund grantees that the evidence predicts will achieve their goals, making each dollar more impactful.


Resources

The Evidence 2.0 model and the Core Components framework hold great promise for transforming evidence-informed policymaking in the federal government. The panelists and audience shared resources showing where this work has already begun:

Key resources from the White House Office of Management and Budget: