ADS 203 - USAID Operational Policy on Assessing and Learning

Published:
June 27, 2014

USAID plans and implements programs designed to improve the development status of people in the countries and regions around the world in which we work. To achieve these development results and to ensure accountability for the resources used to achieve them, USAID Operating Units must strive to continuously learn and improve their approaches to achieving results. The purpose of strong evaluation and performance monitoring practices is to apply the learning gained from evidence and analysis. USAID must rely on the best available evidence to make hard choices rigorously and credibly, to learn more systematically, and to document program effectiveness.

As outlined in ADS 200, learning links together all components of the Program Cycle. Sources for learning include data from performance monitoring; findings of research, evaluations, and analyses commissioned by USAID or third parties; and other sources. These sources should be used to develop and adapt plans, projects, and programs in order to improve development outcomes. ADS 202 provides more detail about learning and adapting during the implementation of projects and programs. This ADS chapter focuses on carrying out the monitoring and evaluation components of the Program Cycle. In this process, USAID Operating Units must establish systems, methods, and practices to ensure that quality evaluation and performance monitoring practices directly inform implementation and adaptation, and contribute to Agency decisions and learning.

Performance monitoring and evaluation are mutually reinforcing but distinct practices, and it is important to understand the difference between them:

Performance monitoring is an ongoing process that indicates whether desired results are occurring and whether Development Objective (DO) and project outcomes are on track. Performance monitoring uses preselected indicators to measure progress toward planned results at every level of the Results Framework continuously throughout the life of a DO.

Evaluation is the systematic collection and analysis of information about the characteristics and outcomes of programs and projects as a basis for judgments to improve effectiveness and/or inform decisions about current and future programming. Evaluation is distinct from assessment, which may be designed to examine country or sector context to inform project design, and from an informal review of projects. Evaluation provides an opportunity to consider both planned and unplanned results, to reexamine the Development Hypothesis of the DO (as well as its underlying assumptions), and to make adjustments based on new evidence.