What is the Research-Implementation Gap and Why is it Relevant to USAID Programming?
Natalie Dubois, Andres Gomez, and Sara Carlson currently support Measuring Impact II, which offers assistance for best practices in adaptive management and the use of evidence and learning across USAID’s biodiversity portfolio.
The research-implementation gap, also referred to as the knowing-doing gap and the knowledge-action gap, captures the idea that there is a disconnect between the knowledge generated by researchers and the information being used to inform policy and practice decisions. This gap impedes effective programming when program planning and implementation proceed with incomplete information and when managers miss opportunities to incorporate relevant knowledge into program decisions. This hot topic in the applied research literature has real-world implications for USAID programs.
Researchers have been discussing what they can do—and are doing—differently to make their work more relevant and accessible to practitioners. As practitioners, we have been focused on the implementation side of that gap and how our approach to programming can help or hinder evidence-based decision-making. In a new paper in the journal Conservation Science and Practice, we expand the dialogue around the research-implementation gap to make explicit that bridging the gap is a shared responsibility between practitioners and researchers. Although our recommendations are directed toward conservation practitioners, they are applicable to the work of practitioners across all sectors at USAID. Practitioners across the Agency can apply the learning processes they already use to narrow the research-implementation gap.
The Program Cycle offers approaches such as collaborating, learning, and adapting (CLA) and multiple entry points, including evaluation, to work on the implementation side of the research-implementation gap. These approaches and tools can be strengthened through evidence-based decision-making and adaptive management. Evidence-based decision-making focuses on acquiring evidence before a design or implementation decision to better understand what will likely work—or not work. Through adaptive management, practitioners learn from outcomes after these decisions have been implemented. Being explicit about how we apply these two forms of learning to decision-making in the Program Cycle has important implications for the research-implementation gap.
If the research-implementation gap is a shared responsibility, then how might practitioners at USAID help address it through their work? Below we reflect on and interpret the five recommendations from the paper.
Share your questions. It may seem obvious, but evidence that does not exist or is not relevant to decision makers cannot be used to inform decisions—so knowledge exchange between researchers and practitioners is particularly important. One way researchers can foster this exchange is by involving end-users in the process of science production, and most researchers will welcome practitioner input about their evidence needs. Synthesizing and disseminating critically important research themes and questions can be an efficient way for practitioners to communicate their needs to the research community (e.g., USAID’s Biodiversity and Development Research Agenda) and open avenues to new partnerships with researchers. At the project and activity level, articulating well-defined questions for researchers when commissioning assessments and evaluations can increase their relevance to program decisions.
Share your data. Monitoring and evaluation, adaptive management, and organizational learning all have the potential to generate information about performance and effectiveness that can feed into the research arena. However, sharing data also requires investment in infrastructure and systems to collect and catalog data and make it available in useful formats so it can be used in formal research projects. Practitioners’ compliance with USAID’s Open Data Policy ensures that researchers have access to program data that can then be analyzed to generate evidence on effectiveness.
Help build the evidence base. Project implementation offers opportunities for learning that can improve global practice and scientific knowledge. However, making data accessible to researchers does little to build the evidence base if the data being generated are of insufficient quality to make reliable inferences about effectiveness. Practitioners can help by generating data that produce transferable knowledge that extends beyond simply assessing the success of a project in meeting its goals. For example, teams can make use of program learning agendas to test theories of change about how strategic approaches work.
Apply multiple learning strategies. Evidence-based practice and adaptive management are not alternative frameworks. Systematic use of scoping and assessments in design can reveal where it may be more efficient to invest resources in learning from the evidence base versus taking action first and learning from project outcomes. Learning from implementation and sharing that learning widely can help inform future similar programming decisions. Even within a single project or activity, different approaches can be more or less suited to different information needs.
Be aware of how your choices can perpetuate the gap. When program managers and design teams are faced with time and resource constraints, these pressures can push them toward learning from outcomes as the default, rather than building on existing evidence. However, when there is an existing research-implementation gap, waiting to learn from outcomes can result in practitioners wasting resources learning from mistakes that could have been avoided. Simply being transparent about how evidence and learning are being used to address uncertainty can help practitioners increase the efficiency of investments in evidence and learning.
Viewing the research-implementation gap as the shared responsibility of both researchers and practitioners will expedite knowledge exchange at the research-implementation interface. Practitioners at USAID already have several tools at their disposal that can support this exchange. And by paying greater attention to how these tools are used and applied in program decisions, practitioners can play a critical role in closing the gap between research and implementation.
Dubois, N.S., A. Gomez, S. Carlson, and D. Russell (2019). Bridging the research-implementation gap requires engagement from practitioners. Conservation Science and Practice e134. https://doi.org/10.1111/csp2.134