Executive Summary

In January 2019, the Foundations for Evidence-Based Policymaking Act (Evidence Act) was enacted. It has the potential to dramatically change how the federal government manages data and uses information to inform important policy decisions. As agencies work to implement the Evidence Act, one core question will routinely arise: how much money should agencies spend on generating evidence to inform decision-making? And, importantly, how can the American public track and monitor the amount of funding agencies are allocating annually to support data and evidence activities? These questions need to be addressed throughout the implementation of the Evidence Act and beyond.

This white paper explores the opportunities available for Congress and Executive Branch agencies to apply various funding models to ensure the intent of the Evidence Act is realized. The paper starts by describing the challenges in measuring funding allocated for data and evidence activities, then describes some pieces of the funding calculus that are readily accessible as well as the types of funding mechanisms in place today. The paper concludes with a description of potential policy options and approaches for consideration by congressional appropriations committees, the White House Office of Management and Budget (OMB), and federal agencies in implementing the Evidence Act and supporting evidence-building activities moving forward.

With few exceptions, agencies in the federal government are unable to articulate in whole or in part the cost of collecting and managing data, conducting analyses that are considered “evidence,” and then applying that information in decision-making. While there are some exceptions—such as statistical agencies and certain types of research—the challenges of measuring spending on data and evidence initiatives are real. Having a baseline measurement of federal funding that supports evidence generation and use could inform future efforts to ensure government has adequate capacity to make data-driven decisions and leverage available evidence. It could also enable assessment of whether resources are adequately allocated for establishing a more data-driven government and what can be expected in the future.

As Congress and the Executive Branch determine how to proceed in implementing the Evidence Act, the following six recommendations aim to support engagement specifically on resource availability:

  1. Agencies should articulate funding and resource needs to OMB and congressional appropriators.

  2. OMB should use the annual budget Passback to prioritize agency actions on the Evidence Act.

  3. Congress and the Executive Branch should allocate sufficient direct appropriations for data and evidence initiatives.

  4. OMB should propose new, flexible funding mechanisms to support implementation of the Evidence Act.

  5. Agencies should maximize existing set-aside authorities and other funding flexibilities.

  6. Information on data and evidence spending should be centrally collected and made publicly available.

If evidence-based policymaking is to succeed in the U.S., our government’s leaders must provide the resources to attain this important goal.


Introduction

The Foundations for Evidence-Based Policymaking Act (Evidence Act), signed into law in January 2019, dramatically changes the expectations for how the federal government manages data and uses information to inform important policy decisions.[1] But, without sufficient resources to support the goals of the law and the activities necessary to achieve them, Executive Branch agencies will be unable to realize the progress Congress envisioned.

The Evidence Act was based on recommendations from the blue-ribbon Commission on Evidence-Based Policymaking, which presented its suggestions to Congress and the President in a report just 16 months before the Evidence Act became law. The commission outlined a vision that rigorous evidence would be generated and used effectively to guide program and policy decisions. To achieve this reality, the commission recommended policymakers provide “sufficient” resources to enable agencies to collect and manage data, enabling the use of data for analysis and evaluation of policies and programs.[2] But the commission did not specify what level of resources would be needed to support these activities, suggesting only that up to 1 percent of a program’s funding be used for government’s data and evidence activities.

When Congress advanced the Evidence Act in 2017 and 2018, the expectation was that resources would be identified over time to encourage data-driven government and more evidence-based activities, recognizing that existing resources were insufficient. In practice, the processes of collecting and managing data, generating information through research and analysis, and using evidence to inform decisions in government are multifaceted and have real financial implications for agencies based on the requisite staff time and systems costs.

The political and stakeholder communities that advocate for evidence-based policymaking in government know that resources are essential to the success of the endeavor.[3,4,5,6]

While the upfront investment is non-trivial, the costs are modest relative to total program spending and can be scaled to agency needs. The benefits of allocating resources to data and evidence activities are likely significant, leading to improved efficiencies in the programs, projects, and systems that ultimately benefit the American people.

The Evidence Act outlines expectations for government agencies to improve their capacity to engage in data-driven and evidence-based decision-making. A core feature of the Evidence Act is establishing the leadership positions, staffing infrastructure, and core processes to enable government to increasingly construct and use evidence to inform consequential policy decisions.

As agencies work to implement the Evidence Act in 2019 and beyond, one core question will routinely arise: how much money should agencies spend on generating evidence to inform decision-making? Is 1 percent of an agency budget too much? Too little? What about 10 percent? And, importantly, how can the American public track and monitor the amount of funding agencies are allocating annually to support data and evidence activities? These questions need to be addressed throughout the implementation of the Evidence Act in order to ensure the intent of the Evidence Act and the full vision of the Evidence Commission is realized over time.

In the first 10 months of 2019 following enactment of the Evidence Act, OMB offered Congress and agencies little public support for articulating resource needs. The lone exception appeared in a memo from OMB finalizing a Federal Data Strategy, which called for sufficient resources in three of the 40 practices recommended for agency data management.[7] Far more attention to this likely barrier to effective data management is needed.[8]

In the short-term, policymakers must make sufficient resources available to support effective implementation of the Evidence Act. Failure to do so introduces substantial risks for fulfilling the Evidence Commission’s vision and congressional intent in the law. This white paper starts by describing the challenges in measuring funding allocated for data and evidence activities, and then describes some components of the funding calculus that are readily accessible, as well as the types of funding mechanisms in place today. The paper concludes with a description of potential policy options and approaches for increasing data and evidence activities in the federal government for consideration by congressional appropriations committees, OMB, and federal agencies. The resource strategies are ripe for consideration to support implementing the Evidence Act and other evidence-building activities.


Challenges Measuring Data and Evidence Spending

The ways in which organizations allocate resources indicate relative priorities about activities deemed important for achieving organizations’ missions. In government, budgets provide direct knowledge about how policymakers weigh trade-offs and which programs are prioritized relative to other activities.

With few exceptions, agencies in the federal government are unable to articulate in whole or in part the cost of collecting and managing data, conducting analyses that are considered “evidence,” and then applying that information in decision-making. While there are some exceptions—such as statistical agencies and certain types of research—the challenges of measuring spending on data and evidence initiatives are real. The consequences of not having this information are also real: policymakers are unable to determine whether sufficient funding is provided and are, thus, unable to address variations or gaps. Those inside government interested in building evidence capacity lack reliable information with which to identify needs and better understand variations in capacity.

Notably, this inability to articulate how much funding is allocated does not necessarily mean government is averse to evidence-based policymaking or data-driven decision-making. Instead it suggests policymakers face a challenge common to other policy issues: there is insufficient information for making decisions. For data quality or evaluation funds, for example, policymakers have historically relied on anecdotal, limited, or intermittent resource knowledge to determine how to allocate funding for initiatives. The following are examples of challenges in determining the amount of federal funding allocated toward activities related to evidence-based policymaking:

NUMEROUS BUDGETARY MECHANISMS ARE USED.

Producing information about budget allocations for data and evidence activities today is challenging because of the wide range of funding sources used in practice through congressional appropriations and authorizations. Some agencies receive funding for data activities from direct appropriations on an annual basis. Some conduct analyses on a fee-for-service basis, or with resources provided by other agencies (i.e. reimbursable). Some data collections and program evaluations are funded with set-aside portions of appropriations. Still other activities are funded through public-private partnerships, in-kind contributions, and even private foundations. Evaluations and data collection activities can also be embedded within the design of grants and program operations, so the costs are not readily available. Because there are numerous, often overlapping mechanisms, some duplication may occur in estimation as well.

ABSENCE OF REQUIREMENTS TO MONITOR AND REPORT.

While the variety of mechanisms makes tracking funding difficult, the biggest gap is that the current budgetary accounting structure is not well-aligned for compiling information that may vary by appropriations bill, congressional committee, budget function, agency, and budget account without substantial manual effort. Nevertheless, despite similar challenges, the federal government can articulate total expenditures for applied research and development, climate change, and information technology systems. This is because OMB requires agencies to monitor and report this information, establishing a top-down expectation. If a similar requirement existed for data and evidence activities, the federal government would have information about aggregate spending on key priorities and the ability to identify agencies that may need additional resources.

METHODS FOR ATTRIBUTION NOT CONSISTENTLY SPECIFIED.

The methods for determining how much funding is allocated to data activities are also not well-articulated. For example, funding for program administration likely supports a variety of data collection and management activities, IT system maintenance, and cybersecurity protections. Good program administration should also include activities to improve data quality, study program outcomes, and facilitate data governance. However, these amounts are often aggregated into single program-wide budget lines. Further, there are varying definitions of what constitutes data management, evidence, and policy research that could change the scope of what counts as data and evidence funding.

ABSENCE OF COMMON DATA IDENTIFIERS AND STANDARDS FOR REPORTING.

Common identifiers for government spending on these funding streams are inconsistently applied in the legislative and executive branches. As a result, attempts to document elements of funding in management systems for contracts and grants often leave substantial gaps in documentation and necessitate assumptions that can lead to over- or under-estimation.

NON-FEDERAL FUNDING SUPPORTS FEDERAL PROGRAMMING IN MANY AGENCIES.

Some federal agencies receive support for data and evidence activities through a range of indirect sources that may not be easily tracked and reported. One obvious example of this relationship is with state and local governments that operate federal programs, which often have dollar matching requirements to receive federal block grants and categorical grants. In other cases, this could manifest as a requirement for grant recipients. The measurement of funding for data and evidence activities is even more complicated when activities of other government jurisdictions (i.e. states, counties, cities, school districts) are included.

In short, multiple challenges collectively make the tracking of these funding sources problematic within and across agencies. The methodological limits also constrain the ability to make comparisons across agencies or to articulate the resources available for government as a whole. Determining how to best fund the Evidence Act and sustain funding moving forward will require addressing each challenge and instituting a stronger foundation to monitor and track this spending.

While answering the question of how much funding government allocates to data and evidence-building activities is difficult, pursuing the answer is worthwhile to inform future decisions and resourcing of the Evidence Act. Addressing these challenges in the near-term will also benefit agencies because the Evidence Act requires each to conduct assessments of data and evidence capabilities.[9] Information from these initial agency assessments should begin to be made available publicly as part of agencies’ FY 2021 Congressional Justifications in February 2020. Having a baseline measurement of federal funding that supports data and evidence activities will productively inform future efforts to ensure government has adequate capacity to generate and use evidence in decisions.

This paper provides insights into how data- and evidence-based activities are currently funded by the federal government to help decision-makers understand both the current status and opportunities to leverage or change these mechanisms in the future if desired. The next section highlights current funding mechanisms that enable evidence-based activities.

Current Approaches to Funding Data and Evidence Initiatives

The various ways the federal government funds data and evidence activities can be broken into distinct segments. However, the segments result in some duplication and have gaps that limit the ability to generate an aggregate funding amount.[10] For comparability, this analysis compiles information from FY 2018 when possible, a year prior to enactment of the Evidence Act.

Statistical, Policy Research, and Program Evaluation Units ($6.9 billion)

The Federal Statistical System provides a core component of longitudinal and reliable data collection, management, and analysis activities in government. The system comprises 13 principal statistical agencies and dozens of smaller statistical units. The large statistical agencies each have a unique role in providing statistical information for use by governments, businesses, researchers, and the public. The statistical agencies collect data and conduct rigorous analysis to transform them into useful, objective information that is readily accessible to the public and government decision-makers. Examples of outputs of these agencies include the number of people living in the United States, the unemployment rate, the gross domestic product, educational attainment levels, on-time performance rates of flights, and crime rates. All of these activities are specifically statistical in nature, meaning that they result in summary statistical conclusions, such as an average, about a group or population; the Evidence Act also uses this definition to define “evidence” for the purposes described here.[11]

The 13 principal statistical agencies cumulatively received $2.3 billion in FY 2018.[12,13] Funding for statistical agencies is largely provided as a direct discretionary appropriation, though many of these agencies also rely on contributions from other agencies to support targeted surveys and other data activities. In addition to the principal statistical agencies, many other statistical programs exist across agencies. Using the programs identified in the Statistical Programs of the United States Government, commonly referred to as the Blue Book, and adding these programs to the calculation brings the total to $6.9 billion. The programs identified in the Blue Book may not be unique from other programs and funding mechanisms identified below, which may lead to double counting.

This estimate includes program evaluation and policy research activities, and also includes research and evaluation offices like the Office of Policy Development and Research at the Department of Housing and Urban Development as well as the Office of Policy Support, Food and Nutrition Service at the Department of Agriculture. The units typically conduct rigorous research driven by congressional directives, administration priorities, and agencies’ agendas.

Chief Information Offices and Chief Data Offices

Prior to enactment of the Evidence Act, some data quality and management activities explicitly required by the Evidence Act may have been occurring within government’s chief information offices or pre-existing data units. For example, preceding the Evidence Act, the General Services Administration established a Chief Data Officer position to report to the Chief Information Officer, suggesting some amount of resources for purposes beyond IT systems would be within scope for that office.

Moving forward, the expectation is that CDOs and Chief Information Officers will work together but likely operate as separate organizational entities. As the Commission on Evidence-Based Policymaking noted, many of the expected activities related to data access and management simply were not occurring at the anticipated level.[14] This sentiment was further elaborated upon by Congress in the committee report for the Evidence Act, which noted that a goal of some of the new authorities was to focus on data management lacking in many federal agencies.[15] The perspective was even reinforced by the Chief Information Officers Council, which indicated data access and management were activities that could not be appropriately prioritized given the Chief Information Officer’s role in improving IT management.[16] Nonetheless, the federal government’s IT budget is approximately $200 billion per year, and some data management activities likely occurred within these budget lines, particularly as some federal agencies established chief data officers prior to the Evidence Act. Recent research concluded there is a “lack of obvious budget to support CDOs,” and a precise estimate of current expenditures on these positions is unknown.[17]

Other Funding Mechanisms at the Program Level

SET-ASIDES AND TRANSFERS

A funding set-aside enables a percentage of a total program appropriation or specific amount to be available for specified purposes. Examples of the use of set-asides for data and evidence activities include:

  • The Department of Agriculture’s Food and Nutrition Service operates the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), which has a $6 billion annual budget, based on a discretionary appropriation, to provide supplemental foods, health care referrals, and nutrition education for low-income pregnant, breastfeeding, and non-breastfeeding postpartum women, and to infants and children up to age five. The WIC authorizing statute makes 0.5 percent of the annual appropriation, not to exceed $15 million, available for research evaluation projects at the discretion of the Secretary.[18]

  • The Department of Labor, through a provision in the appropriations bills, has authority to allocate up to 0.75 percent of the appropriated amount for specific programs to conduct evaluations through the Office of the Chief Evaluation Officer. While the authority sounds potentially expansive, in practice Labor has historically allocated only about $27 million under this set-aside, or about one-third of what the law permits. Labor reduced spending on evaluation under the set-aside from $13 million in FY 2018 to $2 million in FY 2019.[19] For FY 2020, the Labor Department is expected to allocate just over $3 million for this purpose.[20]

  • The Department of Education is authorized through the Every Student Succeeds Act to set-aside up to 0.5 percent of the appropriation for specific programs each year for evaluation, or approximately $4 million.[21]

  • As an example of a set-aside of mandatory funding, the Department of Health and Human Services, Administration for Children and Families, operates the $17 billion Temporary Assistance for Needy Families (TANF) program. Of the mandatory funding, 0.33 percent is required to be set aside for research, evaluation, and technical assistance, or approximately $56 million each year.
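The set-aside arithmetic in the examples above follows a common pattern: a fixed percentage of an appropriation, sometimes subject to a statutory cap. A minimal sketch reproducing the WIC and TANF figures cited above (the function and variable names are illustrative, not drawn from any agency system):

```python
def set_aside(appropriation, rate, cap=None):
    """Statutory set-aside: rate * appropriation, limited by an optional cap."""
    amount = appropriation * rate
    return min(amount, cap) if cap is not None else amount

# WIC: 0.5 percent of a $6 billion appropriation, not to exceed $15 million
wic = set_aside(6_000_000_000, 0.005, cap=15_000_000)  # cap binds: $15 million

# TANF: 0.33 percent of $17 billion in mandatory funding, no cap
tanf = set_aside(17_000_000_000, 0.0033)  # approximately $56 million
```

As the WIC case shows, a cap can make the effective set-aside rate well below the statutory percentage once a program's appropriation is large enough.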

The authority for agencies to transfer funds, usually capped at a fixed amount or a percentage at the agency, bureau, or program level, is another mechanism to enable funding for data and evidence activities. One example is the transfer authority at the Department of the Treasury through the Treasury Forfeiture Fund; another is at the Department of Labor, which uses transfers to execute its evaluation set-aside. These funding streams are an asset for agencies and allow flexibility based on need. Their consistency and stability as a funding source, however, can be uncertain, depending on the Secretary’s prerogative and even politics. Funding amounts established through the appropriations and authorizing process may be more secure and reliable.

PILOTS AND DEMONSTRATION PROJECTS

Before enacting changes to a program, Congress may choose to allow a small-scale test, providing the federal agency with previously unauthorized flexibility. This allows the new approach to be monitored and evaluated to determine whether it would be beneficial to enact fully. For example, the Social Security Administration received renewed authority in 2015 to initiate demonstration projects for the Disability Insurance program to explore ways to help beneficiaries enter or re-enter the workforce. The temporarily authorized direct mandatory appropriation provides expansive spending authority. This is important because the demonstration projects were projected to cost approximately $400 million over 10 years for data collection, analysis, and evaluation activities. Other pilot and demonstration authorities exist across government agencies; no inventory of such authorities is known to exist.

GRANT AND CONTRACT MODELS

Other funds at the program level are allocated based on the design of the program, which could include grant or contract restrictions to support innovative practices for data management and analysis. The federal government spends $600 billion in grant programs annually for a range of programs and services, including those related to social services, education, and health care.[22]

Some grant programs related to evidence building are competitively awarded rather than distributed by formula. Competitive grants provide funding to governmental or non-governmental entities that apply to receive the specific funding. For example, the Children and Families Services budget account at the Department of Health and Human Services’ (HHS) Administration for Children and Families includes a line item for Child Welfare Training, Research, and Demonstration Projects. The agency congressional justification explains that the program provides “broad authority to award competitive grants to entities that prepare personnel for work in the child welfare field and those engaged in research around child welfare issues.”[23] This includes demonstration projects using child welfare research to encourage experimental and innovative child welfare services. The specific funding amount for this program is detailed in appropriations report language, with the budget account top-line funding amount in legislation.

Within these grant programs, a subset is for tiered evidence grants. This funding mechanism incentivizes the use of evidence-informed practices for specific services within a grant program. Tiered evidence grants, also referred to as innovation grants, require decision-makers to consider the efficacy of practices when determining what to fund and with what proportion of grant dollars. Since 2010, the number of federal grants using this structure has increased in conjunction with greater awareness of the value of evidence-based programs.

The tiered evidence approach ties federal funding to the level of rigorous evidence for a particular service, with proposals supported by stronger rigorous evidence eligible for higher funding amounts. The approach addresses the question of how to most effectively and efficiently design grant programs to achieve particular goals. Examples of tiered evidence grant programs include the Department of Education’s Education Innovation and Research (EIR) program (the successor to the Investing in Innovation Fund), HHS’s Teen Pregnancy Prevention program and Maternal, Infant, and Early Childhood Home Visiting program, and the Department of Labor’s Workforce Innovation Fund. To elaborate on one example, the EIR program is a tiered grant program supporting the creation, development, implementation, replication, and scaling up of evidence-based, field-initiated innovations designed to improve student achievement and attainment for high-need students. The EIR program supports innovative approaches with evaluations that suggest efficacy. EIR-funded approaches address persistent education challenges while also continuing to build knowledge about what works in education and in what contexts.

To establish a tiered evidence grant, in addition to congressional authorization, the agency must develop evidence standards for the different levels or tiers of grants, develop processes to validate evidence claims in applications (often accomplished through evidence clearinghouses), and provide technical assistance to grant recipients for the evaluation component of the grant award. Most agencies use a three-tiered model with stages for development, validation, and scale-up. In this model, the lowest grant tier is for new, emerging, or innovative practices that align with existing theory but need additional testing and evidence. The middle tier is allocated for expanding or implementing practices supported by both theory and some existing evidence. The highest tier of funding is for replicating practices that have a strong body of evidence to substantiate the activity, often demonstrated through quantitative evaluations with experimental or quasi-experimental designs.

In contrast to traditional grant programs, tiered evidence grants focus on promising approaches that require further testing and evaluation, as well as practices that are most likely to work in a given context based on existing evidence. This type of grant also incentivizes service providers and private funders to help develop and expand evidence-backed approaches.

In 2018, Congress passed the Social Impact Partnerships to Pay for Results Act (SIPPRA) and provided up to $100 million for 10 years for “pay for success” grants. These funds require rigorous evaluations to determine whether the pre-established outcomes were satisfied. If criteria are met, the service providers receive payment. A similar model from FY 2014-2016 was the Performance Partnership Pilots for Disconnected Youth (P3), which encouraged combining funds from approved programs in the departments of Education, Health and Human Services, Housing and Urban Development, Justice, and Labor along with the Corporation for National and Community Service, the Institute of Museum and Library Services, and related agencies. Implementation required data collection and analysis on specific cross-program outcomes as a condition. The congressional authority enabled 10 performance partnership grants for states, regions, localities, or tribal communities that provided additional flexibility in using discretionary funds across multiple federal programs.

No inventory exists for the breadth of these types of contract and grant mechanisms applied in practice today. The last complete crosscut of program evaluation activities in government was completed in 1977, estimating $170 million in contracts and grants plus an additional $73 million in personnel and other expenses.[24] Adjusted for inflation, this amounts to more than $1 billion in FY 2018 if that level were sustained. Because the crosscut has not been repeated since, there is no reliable estimate for comparison over time.
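The inflation adjustment behind the $1 billion figure can be reproduced with a simple price-index ratio. A rough sketch; the CPI-U annual averages used below (about 60.6 for 1977 and 251.1 for 2018) are approximations assumed for illustration:

```python
# 1977 evaluation crosscut: $170M in contracts and grants
# plus $73M in personnel and other expenses
total_1977 = 170_000_000 + 73_000_000  # $243 million in 1977 dollars

# Approximate CPI-U annual averages (assumed for illustration)
cpi_1977, cpi_2018 = 60.6, 251.1

# Scale 1977 dollars to FY 2018 dollars by the CPI ratio
total_2018 = total_1977 * (cpi_2018 / cpi_1977)
print(f"${total_2018 / 1e9:.2f} billion")  # roughly $1 billion
```

Any such estimate is sensitive to the deflator chosen; a GDP deflator or agency-specific cost index would yield a somewhat different figure.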

FEDERAL WAIVERS FOR MANDATORY PROGRAM FUNDS

Separate from specific allocations of funding for data and evidence activities, some agencies can alter program requirements specifically to collect data and study whether the changes achieved desired outcomes. More specifically, with the goal of providing states and grantees flexibilities to test and institute alternative approaches, Congress granted some federal programs the ability to waive certain requirements in specific instances to allow for innovation and experimentation. This waiver mechanism has been used in health (Medicaid and Children’s Health Insurance Program (CHIP)), human services (child welfare, child support, and TANF), and education. A broad and routine application of waivers occurs in the Medicaid program, where states can test new or promising strategies to deliver and pay for health care services.[25] As of September 2019, there were 49 approved waivers across 40 states.[26]

Similarly, the federal child welfare program used waivers of federal law to allow states flexibility in implementing certain program requirements, so long as the changes were evaluated.[27] In recent years, this specifically included flexibility for testing innovative prevention services to help keep children out of the child welfare system. Under the authority that expired in FY 2019, HHS approved more than 30 state waivers.

For the Medicaid and child welfare waivers, the amount of funding allocated to waivers is not publicly available. Because costs associated with waivers typically include the full cost of service delivery as well as data collection and analysis, no assessment of data and evidence spending under federal waivers and demonstration projects is available.

WORKING CAPITAL FUNDS

A flexibility provided to some agencies, at the discretion of Congress, is the ability to use a Working Capital Fund (WCF), which can support data and evidence activities. The WCF is a type of budget account that is self-sustaining based on revenue collected through fees (i.e. reimbursables from within an agency) and is available for multiple years (“available until expended”).


The Department of Transportation’s WCF supports a variety of data and evidence activities. In FY 2017, the fund was used for the Volpe Transportation Systems Center, a federal fee-for-service research and innovation center within the Department of Transportation that seeks “to improve the nation's transportation system by anticipating emerging issues and advancing technical, operational, and institutional innovations” through research and evaluation.[28] In 2017, Volpe collected $343 million and was completely funded by sponsor projects. The fund is financed through negotiated agreements with other offices at the Department of Transportation, including within the Office of the Secretary, other governmental entities, and non-governmental entities using Volpe’s capabilities.

Across government, the amount spent from Working Capital Funds for data and evidence initiatives is likely not more than 1 percent of the total resources in these funds.

OVERLAP AND DUPLICATION

In practice, agencies operate data and evidence initiatives with an integrated funding approach, combining funding mechanisms to address agency priorities. The Department of Labor, for example, receives federal funding for specific offices, programs, grants, and evaluations. Prior to enactment of the Evidence Act, the Department was unique in the federal government in having a Chief Evaluation Officer who received both a direct appropriation and the flexibility to use the department’s set-aside authority. Agencies likely use a combination of these funding mechanisms and will continue to do so moving forward.

New Funding Options for Data and Evidence Initiatives

In coming years, as Congress and the Executive Branch determine how best to fund data management and evidence initiatives, new funding mechanisms can also be considered among the many options that enable maximum flexibility to meet agency needs. These mechanisms can be applied even before the precise level of resources agencies need is determined, because appropriators and OMB could still apply direct oversight to ensure resources are used responsibly and directed where funding would not otherwise be allocated.[29] Two possibilities include establishment of a new Evidence Act Interagency Transfer Fund or creation of an agency Evidence Incentive Fund, as envisioned by the Commission on Evidence-Based Policymaking.

Evidence Act Interagency Transfer Fund

Historically, some new government-wide management laws have been supported by interagency budget accounts managed by a central agency, with funds allocated based on need determined during implementation of the new law. The Evidence Act could benefit from such an interagency transfer fund during the interim period between enactment and agencies’ completion of assessments and resource reviews.

Similar to the Information Technology Oversight and Reform fund controlled by OMB, an Evidence Act Interagency Transfer Fund could be allocated by a designated official to support agencies in staffing new chief data offices, evaluation offices, and statistical offices, as well as in implementing other provisions of the Evidence Act. Such a fund is relatively simple to establish in appropriations law. It can be coupled with notification and reporting requirements to congressional appropriators to provide real-time oversight of how funds are used by the Executive Branch to support emerging and critical data and evidence needs.

While a beneficial start-up resource, an interagency transfer fund is unlikely to be a viable long-term solution. Such funds can be burdensome to administer and are not the most efficient mechanism for funding an activity Congress intends to continue indefinitely. If established as a short-term resource, however, Congress would need to specify which official would be most appropriate to determine how to allocate funds. This could be the OMB director, or a collaboration requirement could be specified between OMB officials and the Chief Data Officer Council, the Interagency Council on Statistical Policy, and the Interagency Council on Evaluation Policy.

Evidence Incentive Fund

An Evidence Incentive Fund (EIF) for an individual federal agency would serve as a funding vehicle that incentivizes activities to implement the Evidence Act, in combination with appropriate transparency and engagement on key provisions that support accountability and oversight of the initiatives. Initially recommended by the Commission on Evidence-Based Policymaking in its unanimous recommendations, an EIF was envisioned as a Working Capital-like fund that would be authorized for agencies that complied with creating an agency learning agenda, the evidence-building plan now required by the Evidence Act.

Funding is added to an EIF by rolling over some of an agency’s discretionary unobligated balances at the end of a fiscal year, then making the resources available without a time limitation for use. The absence of a time limit provides the requisite flexibility for data and evidence projects that require continuity of funding over multiple years.

A similar mechanism already exists at other agencies to support a variety of purposes authorized in appropriations bills. For example, the Social Security Administration has authority to move unobligated balances to support information technology investments, and it has used those resources to modernize the agency’s IT systems.

An EIF leverages existing budget authority and maximizes slack to supplement current evidence funding without displacing programmatic funds. In the short term, an EIF could help shift the culture of agencies to institute new data and evidence requirements by empowering senior leaders, but without competing for existing resources. The unobligated balances in all discretionary accounts would be transferred to the agency-specific EIF, enabling the agency chief data officer, evaluation officer, and statistical official to determine the specific use of the funds, with approval from Congress and OMB. An EIF would also not compete with existing resources allocated to evidence-building activities, meaning the funds would truly supplement rather than displace existing resources.

Establishing an agency EIF provides an influx of resources to support learning and knowledge acquisition for building capacity to launch and sustain evidence-building initiatives. The resources could enable an agency to build out and operationalize a robust learning agenda, data inventories, data management processes, and other key provisions of the Evidence Act.

An EIF does have limitations and would not be a viable long-term solution on its own. One drawback is that an EIF would generate an inconsistent amount of funding, since the amount is derived from annual unobligated balances and may fluctuate from year to year. An EIF relies on programs not spending all discretionary resources within the allocated time frame, which will not always be the case. However, the funding mechanism recognizes current federal funding constraints and concerns over the growing federal deficit. One way to manage the unpredictable amount of unobligated balances would be for Congress to cap the amount going to the EIF and direct any additional funds to agency IT requirements, similar to the authority SSA already has. The Evidence Commission suggested 10 percent as a threshold, though an absolute number would likely be more practical for implementation.

By funding and enabling more evidence-based policymaking, government has the ability to utilize federal dollars most effectively, which can result in cost savings. In the absence of a dedicated stream of new funding, the pooling of unobligated balances allows existing funds to be reallocated to fill gaps in evidence-building activities, which would signify progress.


Recommendations and Next Steps

The Evidence Act stands to revolutionize the country’s ability to generate and use evidence in decision-making by prioritizing better data management practices and empowering champions for data who can change and sustain improvements in agency cultures for decades to come. But success in realizing the Evidence Act’s goals requires agencies to have the support and capability to build and maintain the necessary organizational capacity.

As Congress and the Executive Branch determine how to proceed, the following six recommendations aim to support effective implementation of the Evidence Act:

RECOMMENDATION 1

Agencies should articulate funding and resource needs to OMB and appropriators.

As agencies establish data governance processes and new data and evidence leaders are named across agencies, agencies themselves have an obligation to identify and articulate funding needs. The Executive Branch’s budget review process, in preparation for the annual President’s Budget submission, is a natural opportunity for agencies to engage with key stakeholders and OMB to present needs for data and evidence initiatives. Agencies should especially articulate new funding needs when agency leadership determines that a new unit, new project, or additional support staff is necessary to achieve the intent of the Evidence Act, including to support chief data officers, evaluation officers, and statistical officials. Additionally, resources will inevitably be needed to support producing high-quality data inventories, facilitating proper management and review of data assets, enabling access to government data, formulating and conducting evaluations, and much more.

Further, agencies should include explicit data and evidence submissions to appropriations committees as part of the annual congressional justification process.

RECOMMENDATION 2

OMB should use the annual budget Passback to prioritize agency actions on the Evidence Act.

As the Evidence Act implementation process proceeds, agencies will likely need repeated indications of the value of the law’s activities and provisions to ensure they are appropriately ranked among competing priorities within agencies. OMB should use Passback to expressly direct agencies to prioritize implementation of the Evidence Act, including support for chief data officers, evaluation officers, and statistical officials.

RECOMMENDATION 3

Congress and the Executive Branch should allocate sufficient direct appropriations for data and evidence initiatives.

The Commission on Evidence-Based Policymaking unanimously recommended that Congress and the President provide sufficient resources for data and evidence initiatives. This goes beyond merely identifying resource needs. OMB must ensure that necessary requests for additional direct appropriations are included in the President’s Budget request and Congress should prioritize resources for these purposes to the extent possible.

RECOMMENDATION 4

OMB should propose flexible funding mechanisms to support implementation of the Evidence Act.

While the funding mechanisms described in this paper are available to some agencies, many agencies do not have access to these authorities, which must be explicitly authorized in appropriations laws. OMB should include in the President’s Budget the legislative language for appropriate funding flexibilities that would enable agencies to support implementation of the Evidence Act. Specifically, OMB should propose directed set-aside authorities, transfer authorities, and Evidence Incentive Funds for agencies. Each of these funding mechanisms should be tied to activities that demonstrate progress in fulfilling the goals of the Evidence Act, such as the publication of a learning agenda or progress in developing a robust data inventory.


RECOMMENDATION 5

Agencies should maximize existing set-aside authorities and other funding flexibilities.

Numerous agencies have authorities today that could better support resources for implementing the Evidence Act. Agencies with set-aside funding authorities should maximize those resources. Agencies with Working Capital Funds could allow those funds to be used to support data and evidence initiatives. Agencies should also recognize that data management and data analysis, including for evaluation, are essential elements of effective program administration. Therefore, agencies should make clear to staff that program administration funding can be used to support implementation of Evidence Act requirements.

RECOMMENDATION 6

Information on data and evidence spending should be centrally collected and made publicly available.

OMB should develop an assessment of data and evidence funding in all Executive Branch agencies to generate insights for policymakers about how funding is allocated across budget accounts and agencies for these expenses that are difficult to track. Additionally, OMB and the Department of the Treasury should develop data standards for use in USASpending.gov to enable transparency and accountability for relevant spending in grants and contracts.

In addition to the individual capacity assessments required by the Evidence Act, OMB should immediately begin conducting a data and evidence assessment to support decision-making about future resource needs in agencies and provide agencies with clear guidance to address the methodological issues identified in this paper. These issues and support for conducting the assessment could be remedied by using the expertise of the new Chief Data Officer Council, the Interagency Council on Statistical Policy, and the Interagency Council on Evaluation Policy. The assessment should be made publicly available, perhaps by including it in the Blue Book, so that Congress and the American public can hold agencies accountable for prioritizing resources for Evidence Act implementation.

Finally, efforts to assess resources must also be mindful that while allocation of resources is a general reflection of priorities, targeted engagement by agencies may be realized through discrete funding allocations as well. Agency allocation of resources for capacity that most effectively achieves the long-term vision of the Evidence Act is potentially more significant than reaching an arbitrarily established threshold for spending in any single type of funding mechanism.

The successful application and use of data to inform government decisions stands to greatly improve the effectiveness and efficiency of government operations. Developing the infrastructure to leverage this information is a responsible and essential endeavor. Successfully implementing the Evidence Act’s data provisions will improve how information is leveraged to inform future policymaking and contribute to more evidence-based government programs.

The absence of estimates of how much funding government allocates to data and evidence activities today does not mean policymakers and the American public are unsupportive of evidence-based policymaking, or that progress implementing the Evidence Act is not already underway. Nor does it mean that individuals are unable to assess whether agencies sufficiently allocate funds to a basic function of quality program administration—generating and using evidence. But the lack of this information in the future will have real and lasting consequences, including for evaluating whether the costs of evidence-based policymaking initiatives are justified by their benefits. If evidence-based policymaking is to succeed in the U.S., our government’s leaders must provide the resources to attain this important goal.

References

1. “Foundations for Evidence-Based Policymaking Act of 2018” (Evidence Act). P.L. 115-435. Available at: https://www.congress.gov/115/plaws/publ435/PLAW-115publ435.pdf.

2. See Rec. 5-5. U.S. Commission on Evidence-Based Policymaking (CEP). The Promise of Evidence-Based Policymaking. Washington, D.C.: Government Publishing Office, 2017. Available at: https://cep.gov/report/cep-final-report.pdf.

3. N Hart. Recommendation for Fiscal Year 2020 Appropriation to Support Effective Implementation of New Chief Data Officers, Evaluation Officers, and Statistical Officials. Letter to the Financial Services and General Government Appropriations Subcommittee. Washington, D.C.: Data Coalition, 2019. Available at: http://www.datacoalition.org/wp-content/uploads/2019/08/Letter.EvidenceAct.FY2020.FSGG_.Approps.DataCoalition.7-26-2019-1.pdf.

4. T Catsambas, L Goodyear, and A White. Letter to the House Appropriations Committee. Washington, D.C.: American Evaluation Association, 2019. Available at: https://www.eval.org/d/do/4730.

5. American Evaluation Association. An Evaluation Roadmap for a More Effective Government. Washington, D.C.: American Evaluation Association, 2019.

6. Results for America. “2019 Invest in What Works Federal Standard of Excellence.” Washington, D.C.: America Achieves, 2019.

7. RT Vought. “Federal Data Strategy – A Framework for Consistency.” M-19-18. Washington, D.C.: White House Office of Management and Budget, 2019. Available at: https://www.whitehouse.gov/wp-content/uploads/2019/06/M-19-18.pdf.

8. T Austin, D Mader, M Ravichandran, and M Rumsey. Future of Open Data: Maximizing the Impact of the OPEN Government Data Act. Washington, D.C.: Data Foundation, 2019. Available at: https://www.datafoundation.org/future-of-open-data-maximizing-the-impact-of-the-open-government-data-act.

9. See Section 306(a)(8) of U.S.C. Title 5.

10. Note that the Departments of Defense and Homeland Security and intelligence agencies are generally excluded from this discussion because of the lack of public information about amounts within classified portions of the budget and government expenditures. The Evidence Act does apply to these agencies, therefore much of the discussion in this white paper is equally applicable to non-defense and defense agencies alike.

11. See Section 311(4) of U.S.C. Title 5.

12. Requested amount in the FY 2020 President’s Budget.

13. White House Office of Management and Budget. Statistical Programs of the United States Government, Fiscal Year 2018. Washington, D.C.: OMB. Available at: https://www.whitehouse.gov/wp-content/uploads/2018/05/statistical-programs-2018.pdf.

14. CEP, 2017.

15. U.S. House of Representatives. “House Report for Foundations for Evidence-Based Policymaking Act of 2018” (Evidence Act). House Report 115-411 for H.R. 4174. Available at: https://www.congress.gov/115/crpt/hrpt411/CRPT-115hrpt411.pdf.

16. House Report 115-411.

17. Austin et al., 2019, p. 6.

18. See Division A. “Consolidated Appropriations Act, 2018.” P.L. 115-141. Available at: https://www.congress.gov/115/plaws/publ141/PLAW-115publ141.pdf.

19. Bipartisan Policy Center. “BPC Obtains Documents Revealing Labor Dept Cut Funds for Program Evaluation.” Washington, D.C.: BPC, 2018. Available at: https://bipartisanpolicy.org/press-release/bpc-obtains-documents-revealing-labor-dept-cut-funds-for-program-evaluation/.

20. Data Coalition. “Data Coalition Obtains Documents Indicating Labor Department is Constraining Resources for Evaluating Program Effectiveness.” Washington, D.C.: Data Coalition, 2019. Available at: https://www.datacoalition.org/press-releases/data-coalition-obtains-documents-indicating-labor-department-is-constraining-resources-for-evaluating-program-effectiveness/.

21. Results for America. “Evidence-Based Policy Provisions in the Every Student Succeeds Act.” Washington, D.C.: Results for America, 2016. Available at: http://results4america.org/wp-content/uploads/2016/12/ESSA-evidence-summary12.20.16-.pdf.

22. Government Accountability Office. Tiered Evidence Grants: Opportunities Exist to Share Lessons from Early Implementation and Inform Future Federal Efforts. Report GAO-16-818. Washington, D.C.: GAO, 2016. Available at: https://www.gao.gov/products/GAO-16-818.

23. U.S. Department of Health and Human Services. FY 2019 Justification of Estimates for Appropriations Committees for the Administration for Children and Families. Washington, D.C.: HHS, 2019. Available at: https://www.acf.hhs.gov/sites/default/files/olab/acf_master_cj_acf_final_3_19_0.pdf.

24. White House Office of Management and Budget. Resources for Program Evaluation, FY 1977. Washington, D.C.: OMB, 1977.

25. See Section 1915 of the Social Security Act.

26. Kaiser Family Foundation. “Medicaid Waiver Tracker: Approved and Pending Section 1115 Waivers by State.” Available at: https://www.kff.org/medicaid/issue-brief/medicaid-waiver-tracker-approved-and-pending-section-1115-waivers-by-state/.

27. Title IV-E of the Social Security Act.

28. Department of Transportation. “Our Mission and Values.” Washington, D.C.: DOT, 2019. Available at: https://www.volpe.dot.gov/about-us/our-mission-and-values.

29. N Hart. “Funding the Foundations for Evidence-Based Policymaking Act of 2018 in the FY 2021 President’s Budget Request.” Letter to the White House Office of Management and Budget. Washington, D.C.: Data Coalition, 2019. Available at: http://www.datacoalition.org/wp-content/uploads/2019/09/Data-Coalition-Letter-to-OMB-re-2021-Budget-9-20-2019.pdf.


Acknowledgments

The Data Foundation and authors thank three anonymous reviewers who are experts on the topics discussed in this paper for their constructive suggestions and advice on an earlier version.

Disclaimer

This paper is a product of the Data Foundation. The findings and conclusions expressed by the authors do not necessarily reflect the views or opinions of the Data Foundation, its funders and sponsors, or its board of directors.
