Authors

  • SARAH HOFMAN-GRAHAM is a graduate student at the George Washington University Trachtenberg School of Public Policy and Public Administration.

  • MIA VANTINE is a graduate student at the George Washington University Trachtenberg School of Public Policy and Public Administration.

  • KATHRYN NEWCOMER, PH.D. is a member of the Data Foundation Board of Directors. She is also a Professor at the George Washington University Trachtenberg School of Public Policy and Public Administration.

Contents

Introduction

A Theory of Change For Using a Learning Agenda

Key Themes From a Review of Initial Learning Agendas

Conclusion And Next Steps

References


Introduction

The Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) requires federal agencies to produce evidence-building plans, or learning agendas.[1] While the initial cohort of public learning agendas required by the Evidence Act is not expected to be published until 2022, some agencies have already published learning agendas or produced interim plans offering insights and approaches that may be helpful to other organizations.

In 2019, the White House Office of Management and Budget (OMB) issued initial implementation guidance to agencies about fulfilling the legal requirement, while also encouraging agencies to formulate useful learning agendas to conduct evidence-building activities across government.[2] OMB subsequently published additional guidance on the role of learning agendas in supporting evidence-based policymaking and suggested that all agencies use the tool.[3] Specifically, OMB provided agencies flexibility in designing the agendas and encouraged them to identify the priority questions that will have the biggest impact on agency performance and function, including both short- and long-term questions, and then to review the agendas annually. The general instructions allow considerable flexibility in determining the structure, scope, scale, and format of each agency's learning agenda.

In parallel, researchers outside government continue to identify strategies for maximizing the usefulness and accessibility of learning agendas through participatory processes, alignment with agency missions, and fostering agencies' commitment and capacity to promote evidence-based decision-making.[4] An effective learning agenda holds promise both to address longstanding challenges in promoting learning cultures and to advance more evidence-informed policy and meaningful performance improvements.[5]

This issue brief explicitly outlines the theory of change for developing and using a learning agenda, providing practitioners and government officials a useful framework for understanding the value proposition of agency evidence-building activities. It also reviews five learning agendas obtained by the research team from federal agencies to identify characteristics of agendas created since enactment of the Evidence Act. Key characteristics reviewed include the types of questions in agency agendas, the involvement of stakeholders, and the level of integration with the agency's strategic plan.

Of note, three of the federal agencies' learning agendas included in this analysis were obtained confidentially by the research team and are referred to as Agency A, B, and C. The other two agencies, the Small Business Administration (SBA) and the Department of Housing and Urban Development (HUD), published their agendas openly.[6]

A Theory of Change For Using a Learning Agenda

A theory of change is a representation of how and why a program or activity is expected to produce outcomes, illustrating the assumptions and factors that may affect achievement of stated goals. In the theory of change for promoting capacity to conduct and use evaluation and evidence, the learning agenda process sits at the center. Each component is briefly discussed below.

[Figure: Theory of change for developing and using a learning agenda]

Process for Developing a Learning Agenda

The process of developing a learning agenda supports evidence-building outcomes and evidence use. While the Evidence Act mandated that federal agencies develop an evidence-building plan, the development process varies widely across agencies. Key design choices turn on who leads the process, which stakeholders are included, how the strategic plan is integrated, and what types of evaluation questions are framed.

Evaluative Knowledge, Skills, and Resources

Conducting evaluations and implementing evaluation learning require time, resources, and expertise. Organizations have limited resources and human capital to allocate to evaluation development, data organization and collection, analysis, and implementation of learning activities.[7] Prior to enactment of the Evidence Act, some agencies already prioritized conducting and using evaluations; others are now building this capacity.

Leadership Support

Agency leaders—political appointees and civil servants alike—contribute to setting priorities and determining how to allocate resources and personnel. Leadership support shapes the learning agenda development process: leaders identify mission objectives and goals, decide at what level of the organization the agenda should be developed, and encourage a culture that values evaluation for the purpose of learning, not just compliance.[8]

Organizational Learning Capacity

Organizational learning capacity relates to developing an agency culture that promotes learning by encouraging evidence-informed policy and using data to inform meaningful performance improvements. Agency leadership and staff, as well as the learning agenda development process, can influence organizational learning capacity, depending on how meaningful, realistic, and useful the learning agenda is.[9]

Organizational Outcomes

The development and use of learning agendas can help improve organizational outputs and outcomes. In the short term, outcomes may include increased production of program evaluation reports and compliance with the Evidence Act. Intermediate outcomes may include greater shared understanding of and agreement on key goals and desired outcomes across agency leaders, staff, and stakeholders. The development and use of learning agendas may also result in changes to program operations driven by evidence-informed performance improvements. Because the learning agenda development process entails examining data and evaluation needs and capacity, long-term outcomes may include a prioritization of evaluative knowledge, skills, and resources and the integration of evaluative thinking into agency operations and culture.[10]

Context-Specific Factors

Contextual variables are mediating or moderating factors that may influence the learning agenda development process but may be outside the agency's direct control. Mediating factors are usually internal to the process and may positively or negatively affect it; such factors include the level of existing evaluation capacity within an agency, the agency's work and learning culture, and existing data availability and usefulness. Moderating factors shape the environment in which the process takes place, including a change in the Executive Branch's leadership in early 2021. In January 2021, the President issued a memorandum announcing forthcoming updated guidance for agencies on learning agenda formulation.[11]

Applying the theory of change in practice may be constructive for agencies considering how to situate a learning agenda within their resource constraints, contexts, and goals. By reflecting on this theory of change, agencies can identify the characteristics needed to devise a holistic approach to developing, adopting, and using a learning agenda.

Key Themes From a Review of Initial Learning Agendas

A review of the five agency learning agendas revealed important distinctions in the degree to which agencies centralized development of their agendas, approaches to engaging stakeholders, integration with longstanding strategic planning activities, types of questions, and plans for implementation.

Centralized Reporting of Learning Agendas

In OMB's guidance to agencies, the officials established by the Evidence Act are all slated to play a role in formulating the learning agenda.[12] Without prescribing how evaluation officers, chief data officers, and statistical officials should coordinate, OMB's guidance recognizes that the development process and coordination will vary across agencies, including the extent to which and how agencies centralize reporting for an agency-wide learning agenda.

SBA, HUD, and Agency A used centralized processes to create a single, unified learning agenda. At SBA, activities were coordinated through the Office of Program Performance, Analysis and Evaluation. At HUD, activities were centralized in the Office of Policy Development and Research. In Agency A, an agency-wide evaluation unit attached to the agency head's direct office led formulation, while also providing technical support, tools, and resources.

In contrast, Agencies B and C deployed more decentralized approaches. Agency B used a semi-centralized approach, establishing three governing bodies to implement and coordinate the building and use of evidence, organized around the Evidence Act's themes and new positions: data governance, statistical data, and evaluation. Each of the three Evidence Act officials leads a respective coordinating body, and the bodies coordinate with one another. Agency B's learning agenda is divided into separate goal-based questions aligned with program activities.

Agency C employed a decentralized development process for its learning agenda. A coordinating council for evidence and evaluation includes senior staff and subject matter experts from across the agency. The council provided templates and instructions for operating divisions to use, and each division cleared its plan through division leadership. The operating division plans were then compiled into a single agency learning agenda.

[Figure: Degree of centralization in learning agenda development across the five agencies]

Approaches to Stakeholder Engagement

Stakeholder engagement can encourage use of the evidence generated through a learning agenda process. Stakeholders include any individual, group, or organization that will affect or be affected by a program or operation. Each of the five agencies consulted internal and external stakeholders in its learning agenda development process, but the stakeholders and engagement strategies varied between agencies.

All five agency agendas included staff as internal stakeholders, though consultation approaches varied. SBA and Agency B both included program staff in addition to agency and evaluation leadership. Agency B formed a committee, chaired by the Evaluation Officer, that included program staff. SBA has a staff-level Evidence and Evaluation Community of Practice. SBA, Agency A, and Agency C report staff engagement strategies including meetings, brainstorming sessions, listening sessions, and staff comment periods.

Strategies for external stakeholder engagement also varied among agencies. HUD and Agencies A, B, and C report consulting with other federal agencies as they developed their learning agendas. Agency B reported consulting with Congress, though the nature of that engagement and with whom is not specified. HUD, Agency A, and Agency B note consultation with state and local governments.

Each of the five agencies also sought non-governmental input. SBA and Agency A both published Requests for Information through the Federal Register to solicit feedback from stakeholders including nonprofits, think tanks, professional associations, academic institutions, and businesses. HUD and Agency C sought input from the end-users of their programs and other on-the-ground stakeholders. For example, HUD solicited research questions and feedback from HUD-assisted tenants. HUD and Agency C both reported seeking input from advocates and grass-roots groups.

[Figure: Stakeholder engagement approaches across the five agencies]

Integration With Strategic Plans

The Evidence Act intended for multi-year learning agendas to align with quadrennial agency strategic plans required by the Government Performance and Results Act (GPRA). OMB’s implementation guidance, however, did not specify a particular format, level of detail, or structure of the alignment.[13] Thus, agencies took different approaches for accomplishing integration.

SBA, Agency A, and Agency B each organized their learning objectives by the respective agency's strategic goals. Agency A, for example, split its learning agenda into three sections, each containing a goal and objective from the agency's current strategic plan, followed by the learning objectives intended to further that goal and objective. HUD and Agency C also align their learning agendas with strategic plans directly, but with different approaches. Because the Agency C learning agenda was developed through a decentralized process, its strategic plan alignment is also decentralized by operating division, with limited reference to the agency-wide strategic plan. HUD took yet another approach, noting how its learning agenda will help shape HUD's next strategic plan. In practice, all learning agendas should also inform future strategic planning activities.

Framing Learning Agenda Questions

Given the flexible guidance on developing learning agendas, agencies were expected to pursue a range of approaches in framing questions and priorities. Across the five agencies, questions fell into five general categories: descriptive, normative, causal/impact, exploratory, or comparative. In practice, agencies should determine the kinds of results they want when creating questions and should frame their questions accordingly.

[Figure: Types of learning agenda questions by agency]

The majority of questions for all five agencies were either causal/impact or exploratory. HUD had the most questions overall and focused primarily on causal/impact, exploratory, and descriptive questions. HUD's approach emphasizes describing and defining topics and objectives. For example, HUD asked the following combination of questions:

→ DESCRIPTIVE:
“Where is new housing being built? What type of housing is being built?”

→ CAUSAL/IMPACT AND EXPLORATORY:

“Does ownership of manufactured housing communities by low-income residents through limited-equity housing co-ops (LEHCs) improve outcomes such as financial security, asset development, neighborhood amenities, and community safety and resilience for low-income residents?”

“What regulatory or program policies would support more LEHC purchases of threatened or poorly managed manufactured housing communities?”

The HUD learning agenda included 102 question entries, 48 of which posed a single question, with the remainder posing multiple related questions. Each question addresses a different topic and will yield different research opportunities. The approach also combines exploratory and causal interests in particular topics and issues.

Implementation Planning

HUD and Agency B were the only agencies that included written plans for implementing their learning agendas. Each discussed the importance of regularly updating the agenda as new challenges emerge. Both noted that the learning agenda and its implementation will likely change with policy priorities, and their emphasis on continuous updating stood out compared with the other agencies' learning agendas.

Conclusion and Next Steps

Multi-year learning agendas can be valuable tools for agencies to address longstanding challenges in promoting learning cultures. They can also help advance evidence-informed policy and meaningful performance improvements. In practice, learning agenda processes and approaches differ by agency. The five agencies reviewed varied in their processes for stakeholder engagement, integration with the strategic plan, framing of evaluation questions, and implementation planning.

Given the small sample of agencies analyzed ahead of the requirement to publish learning agendas publicly, correlations between agency size and complexity and the approach chosen cannot be fully assessed. However, among the analyzed agendas, smaller agencies appeared to opt for a more streamlined process across all dimensions. This may reflect relatively new approaches to evaluation capacity and planning for learning in those agencies, compared to larger agencies with more established evaluation capacity. It also suggests that the Evidence Act's goal of beginning to build capacity in all federal agencies may be coming to fruition.

In 2022, when federal agency learning agendas are published in conjunction with new strategic plans, additional assessments should be conducted to identify promising practices for ensuring the agendas are actionable and useful for the evidence-building community. Perhaps most importantly, evaluators should also assess in coming years whether the theory of the learning agenda as a support for evidence-informed policymaking is itself an evidence-based practice.


References

  1. See Section 312 in the Foundations for Evidence-Based Policymaking Act of 2018, Public Law 115-435. Available at: https://www.congress.gov/115/plaws/publ435/PLAW-115publ435.pdf.

  2. Office of Management and Budget (OMB). Memorandum for Heads of Executive Departments and Agencies: Phase 1 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Learning Agendas, Personnel, and Planning Guidance. M-19-23. Washington, D.C.: White House Office of Management and Budget, 2019. Available at: https://www.whitehouse.gov/wp-content/uploads/2019/07/M-19-23.pdf.

  3. OMB. Evidence-Based Policymaking: Learning Agendas and Annual Evaluation Plans. M-21-27. Washington, D.C.: White House OMB, 2021. Available at: https://www.whitehouse.gov/wp-content/uploads/2021/06/M-21-27.pdf.

  4. Newcomer, K., Olejniczak, K., and Hart, N. Making Federal Agencies Evidence-Based: The Key Role of Learning Agendas. Washington, D.C.: IBM Center for The Business of Government, 2021. Available at: http://www.businessofgovernment.org/sites/default/files/Making%20Federal%20Agencies%20Evidence%20Based.pdf.

  5. Newcomer et al., 2021.

  6. U.S. Department of Housing and Urban Development (HUD). HUD Research Roadmap: 2020 Update. Washington, D.C.: HUD, 2020; and U.S. Small Business Administration (SBA). Enterprise Learning Agenda, FY 2020 Update. Washington, D.C.: SBA, 2020.

  7. Newcomer, K. and Hart, N. Evidence Building and Evaluation in Government. New York: Sage, forthcoming.

  8. Newcomer et al., 2021.

  9. Newcomer and Hart, forthcoming.

  10. Newcomer and Hart, forthcoming.

  11. Biden, J. Memorandum on Restoring Trust in Government Through Scientific Integrity and Evidence-Based Policymaking. Memorandum for the Heads of Executive Departments and Agencies. Washington, D.C.: The White House, 2021. Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/27/memorandum-on-restoring-trust-in-government-through-scientific-integrity-and-evidence-based-policymaking/.

  12. OMB, 2019.

  13. OMB, 2019.


Disclaimer

This paper is a published report and product of the Data Foundation. The findings and conclusions expressed by the authors do not necessarily reflect the views or opinions of the Data Foundation, its funders and sponsors, or its Board of Directors.

© 2021 Data Foundation