Welcome to the e-version of the USAID Evaluation Toolkit!
The Evaluation Toolkit:
- Curates the latest USAID guidance, tools, and templates for initiating, planning, managing, and learning from evaluations, primarily for USAID staff members involved in any phase of the evaluation process.
- Is a resource for USAID staff members and external contractors who participate in or conduct evaluations for USAID.
How to Use this E-Toolkit:
The Toolkit is organized according to the USAID Program Cycle and the phases of an evaluation.
Section 1: Evaluation at USAID (overview of evaluation and the policy context for evaluation at USAID)
Section 2: Evaluation Throughout the Program Cycle (when it is required or encouraged to plan, use, or report on evaluations)
Sections 3 through 5: Phases of an Individual Evaluation
- Section 3: Planning (from deciding to evaluate to procuring an evaluation)
- Section 4: Managing an Evaluation
- Section 5: Sharing, Reporting, Using, and Learning from an Evaluation
- A brief narrative introduces the general requirements and important considerations.
- Sub-thematic areas, listed on the left-hand side, go more in-depth into specific areas or processes.
- Core resources (1) provide further guidance on specific requirements and processes; (2) describe best practices; and (3) offer templates and other tools.
- Additional links provide easy access to USAID reference documents, reports, and webinars that address specific evaluation issues in greater depth, as well as non-USAID resources that may be useful during the evaluation life cycle.
A few resources and additional links are available only to USAID staff. These are indicated by the designation "USAID only."
This Toolkit was developed by the Bureau for Policy, Planning, and Learning Office of Learning, Evaluation, and Research (PPL/LER). Many USAID and USAID contractor staff—from the field and Washington—provided content, comments, feedback, and insights into this Evaluation Toolkit. Their contributions have been and continue to be essential to the ongoing development of this Toolkit.
1. Evaluation Policy at USAID
Evaluation at USAID is defined as the systematic collection and analysis of information about the characteristics and outcomes of strategies, projects, and activities as a basis for judgments to improve effectiveness, and/or to inform decisions about current and future programming. Evaluation is distinct from assessment (which may be designed to examine country or sector context to inform project design) or an informal review of projects. It is also distinct from performance monitoring, which is an ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to reveal whether implementation is on track and whether expected results are being achieved.
The purpose of evaluations is twofold: to ensure accountability to stakeholders and to learn to improve development outcomes. The subject of a USAID evaluation may include any level of USAID programming, from a strategy to a project, individual award, activity, intervention, or even cross-cutting programmatic priority.
As noted in ADS 201, evaluations at USAID should be:
- Integrated into the Design of Strategies, Projects, and Activities. Planning for and identifying key evaluation questions at the outset will both improve the quality of strategy development and project design and guide data collection during implementation.
- Unbiased in Measurement and Reporting. Evaluations will be undertaken so that they are not subject to the perception or reality of biased measurement or reporting due to conflicts of interest or other factors.
- Relevant. Evaluations will address the most important and relevant questions about strategies, projects, or activities.
- Based on Best Methods. Evaluations will use methods that generate the highest quality and most credible evidence that corresponds to the questions being asked, taking into consideration time, budget, and other practical considerations.
- Oriented toward Reinforcing Local Capacity. The conduct of evaluations will be consistent with institutional aims of local ownership through respectful engagement of all partners, including local beneficiaries, while leveraging and building local evaluation capacity.
- Transparent. Findings from evaluations will be shared as widely as possible, with a commitment to full and active disclosure.
USAID Automated Directives System (ADS) 201 and its associated references provide the foundation for all USAID guidance on evaluation.
- Reference: M&E POCs List. USAID only.
- Reference: USAID Evaluation Policy: Year One
- Reference: Evaluation at USAID - November 2013 Update
- Reference: Strengthening Evidence-Based Development: Five Years of Better Evaluation Practice at USAID
2. Evaluation Throughout the Program Cycle
The USAID Program Cycle is a common set of processes intended to achieve more effective development interventions and to maximize impacts. The graphic representation of the Program Cycle was updated on September 7, 2016.
Evaluations may be planned, conducted, or utilized at any stage in the Program Cycle. This section addresses the various formal stages of the Program Cycle at which Missions or Washington OUs are required or encouraged to consider whether it would be appropriate to plan for, conduct, or learn from an evaluation.
Evaluation in CDCS
Evaluation in Country Development Cooperation Strategies
A Country Development Cooperation Strategy (CDCS) articulates country-specific development hypotheses and sets forth the goal, objectives, results, indicators, and resource levels that guide Project Design and Implementation, Evaluation, and Performance Management, and inform annual planning and reporting processes.
Evaluations, along with research and other analyses, should be used to inform various sections in the CDCS, including the Development Context, Challenges and Opportunities, the Development Hypothesis, and the Results Framework.
According to ADS 201, the Monitoring, Evaluation, and Learning section of the CDCS must include a brief discussion of the Mission's overall priorities and approach to evaluation.
In addition, Missions must incorporate USAID’s Gender Equality/Female Empowerment Policy (ADS 205) in the CDCS.
Evaluation in PMPs
Evaluation in Performance Management Plans
A Performance Management Plan (PMP) is a Mission-wide tool for planning and managing the process of monitoring strategic progress, project performance, programmatic assumptions, and operational context; evaluating performance and impact; and learning and adapting from evidence. Each Mission must prepare a Mission-wide PMP. Missions that do not have a CDCS are still required to have a PMP that covers any projects they fund (ADS 201).
Each Mission must prepare a Mission-wide PMP within six months of CDCS approval. Missions must keep the PMP up-to-date to reflect changes in the CDCS or projects. Missions must update the PMP with new project indicators, evaluations, and learning efforts as each new Project Appraisal Document (PAD) is approved. Missions should update information in the evaluation plan from Project and Activity MEL plans upon their approval.
Per ADS 201, Missions must include an evaluation plan in their Mission PMP to identify, summarize, and track all evaluations as they are planned across the Mission and over the entire CDCS timeframe by Development Objective (DO). An evaluation plan must include the following information for each planned evaluation, as it becomes available:
- The strategy, project, or activity to be evaluated;
- Evaluation purpose and expected use;
- Evaluation type (performance or impact);
- Possible evaluation questions;
- Whether it is external or internal;
- Whether it fulfills an evaluation requirement or is a non-required evaluation;
- Estimated budget;
- Planned start date; and
- Estimated completion date.
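The per-evaluation fields above lend themselves to a simple structured record. The following is a minimal sketch, not a USAID tool: one way an M&E team building its own tracker might model a PMP evaluation plan entry. The field names are our own shorthand, not official terminology.

```python
# Hypothetical sketch of a PMP evaluation plan entry; field names are
# illustrative shorthand for the elements listed in ADS 201, not official terms.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class PlannedEvaluation:
    subject: str                            # strategy, project, or activity to be evaluated
    purpose_and_use: str                    # evaluation purpose and expected use
    evaluation_type: str                    # "performance" or "impact"
    questions: list = field(default_factory=list)  # possible evaluation questions
    external: bool = True                   # external vs. internal evaluation team
    fulfills_requirement: bool = False      # required vs. non-required evaluation
    estimated_budget_usd: Optional[int] = None     # filled in as it becomes available
    planned_start: Optional[str] = None            # e.g. "FY2026 Q2"
    estimated_completion: Optional[str] = None


# Purely illustrative entry for a hypothetical activity
entry = PlannedEvaluation(
    subject="Basic Education Activity",
    purpose_and_use="Inform the design of a follow-on activity",
    evaluation_type="performance",
    questions=["To what extent were teacher-training targets achieved?"],
    fulfills_requirement=True,
)
print(entry.evaluation_type)  # -> performance
```

Optional fields default to None so an entry can be recorded early and updated "as it becomes available," mirroring the policy language.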
- Reference: Model Mission Order on Performance Monitoring. USAID only.
Evaluation in Project MEL Plans
The ADS defines a project as a set of complementary activities, over an established timeline and budget, intended to achieve a discrete development result (i.e. the project purpose). A project is often aligned with an Intermediate Result (IR) in the CDCS Results Framework.
Project designs should be derived from well-documented, rigorous analysis, including evaluations. In addition, per ADS 201, Project Appraisal Documents (PADs) must include a Project Monitoring, Evaluation and Learning (MEL) Plan. The evaluation section of the Project MEL is used to describe all anticipated evaluations associated with the project. This section must specifically identify and describe any evaluations that will be conducted to fulfill evaluation requirements described in ADS 201. It is particularly important that expected impact evaluations be planned at this stage to ensure that relevant activities being evaluated are designed to accommodate parallel implementation of the evaluation.
In addition, the evaluation section should identify two to three broad questions, related to the project theory of change and the project design, that are expected to be answered through planned evaluations.
In the case of a Government to Government (G2G) project, a Project MEL Plan should be created by the Government Agreement Technical Representative (GATR) or G2G POC in collaboration with the partner government.
Evaluation in Activity MEL Plans
Per ADS 201.3.4, an activity carries out an intervention, or set of interventions, typically through a contract, grant, or agreement with another U.S. Government agency or with the partner country government. An activity also may be an intervention undertaken directly by Mission staff that contributes to a project, such as a policy dialogue.
In the case of an awarded activity, implementers are expected to submit an Activity Monitoring, Evaluation and Learning (MEL) Plan to their Agreement Officer’s Representative/Contracting Officer’s Representative (AOR/COR) within the first 90 days of an award (ADS 201).
The Activity MEL Plan should include any plans for internal evaluations, to include the type of evaluation (performance or impact), possible evaluation questions, estimated budget, planned start date, and estimated completion date. The Activity MEL Plan should also include information for ensuring that any planned external or USAID-led evaluations will have access to appropriate data collected by the implementer, such as performance monitoring data.
In the case of a Government to Government (G2G) activity, an Activity MEL Plan should be created by the Government Agreement Technical Representative (GATR) or G2G POC in collaboration with the partner government. At the activity level, the GATR should generally emphasize the use of partner government M&E systems.
Evaluation in the Budget Cycle
Per ADS 201, USAID Operating Units (OU) should devote approximately 3 percent of total program funding to external evaluation on average. This does not mean that every project or activity should be evaluated, or that 3 percent of the budget of every project, activity, or implementing mechanism should be set aside for evaluation. The actual costs of M&E may vary depending on the operating environment and the specific types of evaluations the OU plans to undertake.
The Program Office should calculate, on an annual basis, a budget estimate for the external evaluations to be undertaken during the following fiscal year. This estimate does not include implementing partners’ internal M&E operations. This exercise is best done during Operational Plan (OP) preparation: because many external evaluations will themselves be implementing mechanisms, much of the needed information will be prepared as part of the OP in any case.
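The arithmetic behind the 3 percent benchmark can be sketched in a few lines. This is an illustrative calculation only, with hypothetical figures; real estimates are built evaluation by evaluation during OP preparation, and the benchmark applies OU-wide, not to each award.

```python
# Illustrative only: compare a hypothetical OU's planned external evaluation
# costs against the ~3 percent ADS 201 benchmark. All figures are invented.

TARGET_SHARE = 0.03  # ADS 201 benchmark: ~3% of total program funding, on average


def evaluation_budget_gap(total_program_funding: float,
                          planned_evaluation_costs: list) -> float:
    """Return how far planned external evaluation costs fall below (positive)
    or exceed (negative) the 3 percent benchmark."""
    benchmark = total_program_funding * TARGET_SHARE
    return benchmark - sum(planned_evaluation_costs)


# Hypothetical OU: $50M program budget, three planned external evaluations
gap = evaluation_budget_gap(50_000_000, [400_000, 650_000, 300_000])
print(f"Planned costs are ${gap:,.0f} below the 3% benchmark")
```

A positive result simply flags headroom against the benchmark; it is not a requirement to spend the difference.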
Evaluation in Portfolio Reviews
A Portfolio Review is a periodic review of all aspects of a USAID Mission or Bureau/Independent Office (B/IO)’s Development Objective, projects, and activities, often held prior to preparing the Performance Plan and Report. Missions must conduct at least one portfolio review per year that focuses on progress toward strategy-level results.
Per ADS 201, the strategic Portfolio Review should consider (1) what has been learned during evaluations (along with other sources of evidence) and (2) the status of post-evaluation action plans for evaluation findings and their use in respective decisions. After the Portfolio Review, the Mission should update the PMP as needed to reflect changes in the evaluation plan.
Additionally, the Portfolio Review during the final year of the CDCS must include a review of the cumulative achievements toward the DOs and IRs, with the results documented to support knowledge management.
3. Planning an Evaluation
Section 2 noted the various stages of the Program Cycle where Missions or Washington OUs should formally consider evaluation needs and requirements, including Mission-wide evaluation planning. This section addresses the planning phase for an individual evaluation, from the decision to evaluate to the procurement of evaluation services.
Ideally, evaluation planning should start during the project or activity design stage. This will help ensure that a project or activity and its monitoring system are designed with the planned evaluation in mind. However, the decision to evaluate a strategy, project or activity may occur at any time in the Program Cycle as new evaluation needs are recognized. In addition, evaluations should be timed so that their findings can inform decision making (for example, exercising option years, designing a follow-on project, making mid-course corrections, creating a country or sector strategic plan, or making a policy decision). For a typical performance evaluation, this means the process to solicit an evaluation should begin at least 12–18 months in advance of a decision point.
While early planning is beneficial for all evaluations, it is particularly important for impact evaluations. These studies parallel the life of a project or activity and sometimes require substantial modifications to the design of interventions (e.g. randomized assignment of treatment and control groups, modifications to selection criteria, modifications to roll-out timing, etc.). Understanding impact evaluation requirements at an early stage can help inform the drafting of implementing partner agreements in a way that builds implementer/evaluator cooperation and communicates how the evaluation will affect implementation.
In planning an individual evaluation, sufficient time should be allocated to:
- Draft a strong Statement of Work (SOW) that is peer reviewed prior to finalizing;
- Develop an Independent Government Cost Estimate (IGCE);
- Commission the evaluation, allowing potential offerors several weeks to prepare and respond;
- Review proposals and select a finalist;
- Award the contract;
- Conduct the evaluation using high-quality methods; and
- Review, reflect upon, and act on the evaluation findings, conclusions, and recommendations.
For Missions and Washington OUs with M&E Support contracts, the steps outlined here may be somewhat different because the contract to conduct the evaluation may have already been awarded.
Deciding to Evaluate
The decision to evaluate a strategy, project, or activity should be based on the decision-making needs of a Mission or Washington OU, the policy requirements for evaluation, learning needs, and practical considerations.
Evaluations are required in three instances (see ADS 201):
- Each Mission and Washington OU that manages program funds and designs and implements projects as described in ADS 201.3.3 must conduct at least one evaluation per project. The evaluation may address the project as a whole, a single activity or intervention, a set of activities or interventions within the project, questions related to the project that were identified in the PMP or Project MEL Plan, or cross-cutting issues within the project.
- Each Mission and Washington OU must conduct an impact evaluation, if feasible, of any new, untested approach that is anticipated to be expanded in scale or scope through U.S. Government foreign assistance or other funding sources. (This evaluation may count as one of the evaluations required under Requirement 1.)
- Each Mission must conduct at least one "whole-of-project" performance evaluation within their CDCS timeframe. (This evaluation may count as one of the evaluations required under Requirement 1.)
In these cases, decisions need to be made about when and how these projects will be evaluated.
Strategies/projects/activities that are not required to be evaluated may still be evaluated at any point in implementation for learning or management purposes. In this case, decisions need to be made about whether to evaluate, what type of evaluation (performance or impact) to conduct, what type of evaluation team (internal or external) would be appropriate, and when the evaluation should be conducted. In the case of potentially large, expensive, or lengthy evaluations (particularly impact evaluations), an evaluability assessment may be a worthwhile investment prior to planning an evaluation.
Engaging with Stakeholders
Collaboration is a principle that is integral to all stages of the USAID Program Cycle, including evaluation. For an evaluation to successfully contribute to USAID development results, the Evaluation Point of Contact (POC) and other USAID staff involved in the evaluation must productively engage and collaborate with key stakeholders in the evaluation—USAID staff across various offices, implementing partners, host country officials, project beneficiaries, etc. These stakeholders may contribute to the planning and implementation of the evaluation, serve as primary or secondary audiences for evaluation products, and/or serve as critical actors in ensuring that evaluation evidence is utilized effectively.
Consequently, it is prudent to start identifying and engaging with key stakeholders as early as possible in the evaluation process. A stakeholder analysis is one simple way to start the process of identifying stakeholders and determining how to best collaborate with them throughout the evaluation. At minimum, Project Managers and AOR/CORs are responsible for ensuring that implementing partners (IPs) of the activity or project that will be evaluated are aware of any planned evaluations and the steps IPs need to take to ensure a successful evaluation.
Similarly, early dissemination planning for evaluation is critical. According to ADS 201: "Missions and Washington OUs must plan for dissemination and use of the planned evaluation." Evaluations of all types will include a dissemination plan. Such dissemination plans can help ensure that appropriate evaluation products are planned and developed to meet stakeholder needs and fulfill USAID’s commitment to transparency, accountability, and learning.
Determining Evaluation Purpose and Evaluation Questions
As early as possible in the evaluation planning phase, the Mission or Washington Operating Unit needs to consider the purpose and audience of the evaluation and the key questions that the evaluation will address. Ideally, the evaluation purpose and questions will be developed during the design of the project or activity to be evaluated. The process of developing the evaluation questions may even inform the decision to evaluate or not.
Evaluation questions are typically developed by the Development Objective team or Technical Office managing the strategy, project, or activity being evaluated in coordination with the Program Office, which will manage the evaluation in most cases. However, evaluation questions may be based on input from Washington offices, Mission leadership, or other stakeholders, such as implementing partners and host governments. Adequate consultation is essential when defining the evaluation purpose and evaluation questions to ensure that evaluation findings will be credible, relevant, and actionable for decision-makers.
Developing an Evaluation SOW
The development of an Evaluation Statement of Work (SOW) is one of the most significant steps in the evaluation planning process. The SOW communicates to the evaluation team why the evaluation is needed, how it will be used, and what evaluation questions managers need answers to. Before finalizing the Evaluation SOW, the Mission or Washington Operating Unit Program Office will organize an in-house peer technical review of the Evaluation SOW that includes no fewer than two individuals in addition to the Program Office Evaluation POC (or designee). Relevant and non-procurement sensitive parts of the SOW may also be shared with external stakeholders as needed and appropriate. The Program Office is responsible for ensuring that the SOW is compliant with ADS 201mab, USAID Evaluation Statement of Work Requirements. Most of the guidance in this section assumes that USAID is the author of the SOW for a competitively procured external performance evaluation. However, SOWs may differ for internal evaluations, for evaluations planned within an existing evaluation contract, or for more complicated impact evaluations.
Developing an Evaluation IGCE
The Evaluation Independent Government Cost Estimate (IGCE) is USAID’s estimate of the costs that an evaluation contractor may incur in performing the evaluation. As with all IGCEs, it serves as the basis for reserving funds during acquisition planning; it provides the basis for comparing costs or prices proposed by offerors/applicants; and it serves as an objective basis for determining price reasonableness in cases in which only one offeror/applicant responds to a solicitation.
The Evaluation IGCE should be developed concurrently with the Evaluation SOW and should be available to those involved in the peer review of the SOW. The IGCE for an evaluation should follow directly from the information included in the SOW. For instance, the number and complexity of questions, along with the proposed data collection and analysis methods in the methodology section and the team composition requirements in the Evaluation SOW should all be reflected in the IGCE.
- Guidance and Tool: ADS 300maa Independent Government Cost Estimate (IGCE) Guide and Template.
- Guidance and Tool: Independent Government Cost Estimate (IGCE) Guide and Template in Excel. USAID Only
Commissioning an Evaluation
For external evaluations, the completed SOW is typically incorporated into a request for proposals (RFP) or other procurement request. For Missions and Washington Operating Units with existing M&E support contracts, or M&E “Platforms,” this will not be the case.
Part of the value-added function of the Program Office is to suggest potential implementing mechanisms for carrying out the evaluation. There are numerous options for procuring evaluation services, including field and Washington mechanisms supported by a variety of USAID offices. Missions should also consider the use of local evaluation contractors. If USAID staff is expected to participate on the evaluation team, that expectation should be acknowledged in the solicitation.
Although procurement typically occurs at the end of the planning phase of the evaluation process, the Evaluation POC (or designee) should work with their contract officers to consider possible mechanisms early in the planning process, since the choice of mechanism can have implications on the budget and timing of the evaluation.
Special considerations should be taken with regard to timing of contracts for impact evaluations. In cases where impact evaluations are undertaken, it is a good practice to establish a parallel award at the inception of the intervention to accompany implementation. If possible, the evaluation team should be in place before implementation starts in order to conduct the baseline and provide guidance related to the selection of treatment and comparison/control groups.
- Reference: ADS 300: Agency Acquisition and Assistance (A&A) Planning.
- Reference: Monitoring and Evaluation Platforms: Considerations for Design and Implementation Based on a Survey of Current Practices (Sept 2013).
- Tool: Interactive Map of VOPEs (Voluntary Organizations of Professional Evaluators).
- Tech Note: Using PADs to develop RFPs.
- Reference: The Ideal Prospective Impact Evaluation Timeline.
- M&E Mechanisms (Field and Washington) USAID Only
4. Managing an Evaluation
This section addresses the management phase of an individual evaluation, from the period following the award of an evaluation contract to the submission of the final report. Following the award of an evaluation contract, the COR (for external evaluations) or the Evaluation Manager (for internal evaluations) serves as the main communication link between USAID and the evaluation team. In most cases, the evaluation will be managed by the Program Office (i.e., Evaluation COR/Manager is a Program Office staff member). The Evaluation COR/Manager will ensure that:
- The evaluation team’s final evaluation design meets the Agency’s needs;
- The evaluation team has access to the necessary information (e.g. project or activity reports, performance monitoring data, key contact information, etc.);
- The evaluation team is proceeding with the evaluation as envisioned;
- Coordination between evaluators and implementers is smooth; and
- A final report is reviewed, approved, and disseminated.
From Award to Approval
In the period following the award of the evaluation contract, but prior to data collection, an evaluation design is the one deliverable required by USAID policy. Other deliverables may also be due during this period, depending on what was requested in the Evaluation SOW.
An evaluation design describes and documents how the data collection and analysis methods will be used to produce credible evidence for answering all of the evaluation questions within the time and budget constraints. Clear articulation of the evaluation design aids USAID and other stakeholders in discussing these choices with the evaluation team, but the level of detail in an evaluation design may vary depending on the complexity of the evaluation, overall level of effort, and other factors. An Evaluation Design Matrix is a standard tool for outlining the components of an evaluation design and is highly recommended for use by evaluation teams.
Per ADS 201, “Except in unusual circumstances, the key elements of the design [of the evaluation] must be shared with implementing partners of the projects or activities addressed in the evaluation and with related funders before being finalized.”
Additional deliverables that may be required during this period typically include:
- A workplan that describes the schedule, activities, and milestones of the evaluation team;
- An inception report or background report that addresses what the evaluation team has learned based on program documents provided to them;
- An in-brief or series of in-briefs, either in person or virtual; and
- Other possible deliverables, such as an evaluability assessment.
During this design period, the Evaluation COR/Manager should consider the possibility of revising evaluation questions based on evaluation team input. Any revisions to the questions in the Statement of Work should be documented in writing in the evaluation report. The Evaluation COR/Manager should also consider if the design complies with ethical standards for protection of human subjects.
Conducting an Evaluation
USAID staff members typically manage evaluations on behalf of USAID, while the design and implementation of an evaluation is more typically the responsibility of an externally contracted evaluation team. USAID staff may participate in these external evaluations, provided that the team leader is an externally contracted evaluator with no fiduciary relationship with the implementing partner. In addition, USAID staff may lead and/or participate in internal evaluations.
Whether leading an evaluation, managing an evaluation, participating on an evaluation team, or just reviewing an evaluation report, it is beneficial to be familiar with the typical designs and methodologies of USAID evaluations. The field of program evaluation is quite diverse, and numerous books, journals, and websites are dedicated to describing the various approaches, models, designs, methods, techniques, and practices in conducting program evaluations. This section provides some limited guidance on designs and methods for conducting an evaluation for USAID.
ADS 201 emphasizes high-quality evaluation methods. It notes:
“Evaluations will use methods that generate the highest quality and most credible evidence that corresponds to the questions being asked, taking into consideration time, budget, and other practical considerations. A combination of qualitative and quantitative methods applied in a systematic and structured way yields valuable findings and is often optimal regardless of evaluation design." (ADS 201)
"No single evaluation design or approach will be privileged over others; rather, the selection of method or methods for a particular evaluation should principally consider the appropriateness of the evaluation design for answering the evaluation questions as well as balance cost, feasibility, and the level of rigor needed to inform specific decisions.” (ADS 201)
- Guidance: The Road to Results: Designing and Conducting Effective Development Evaluations.
- Guidance: Impact Evaluation in Practice.
- Guidance: Real World Evaluation: Working Under Budget, Time, Data, and Political Constraints: A Condensed Summary Overview.
Managing and Monitoring an Evaluation Team
The responsibilities of an Evaluation COR/Manager for an evaluation contract or task order are technically no different than the responsibilities of a COR for any other implementing mechanism. In practice, though, managing the implementation of an evaluation differs in significant ways from managing the implementation of other development activities.
Evaluations typically have much more compressed timelines compared to other activities, requiring quick responses and adaptations when problems arise.
Evaluations also rely heavily on the cooperation of other USAID partners. The Evaluation COR/Manager must help mediate and manage the relationship between the evaluation team and the implementing partner being evaluated. This relationship can place a considerable burden on the implementing partner as they assist the evaluation team in obtaining documents, participate in interviews, and facilitate access to beneficiaries. This is particularly true for experimental designs in impact evaluations, which require that the implementing partner adhere to pre-specified treatment and control groups.
Finally, managing an external evaluation team requires the Evaluation COR/Manager to carefully balance USAID’s involvement to ensure a high-quality, useful, and on-time product with the need to protect the independence of the evaluators and the evaluation report.
From Draft to Final Report
The requirements for evaluation report structure and content are detailed in the mandatory references for ADS 201:
- ADS 201maa, Criteria to Ensure the Quality of the Evaluation Report
- ADS 201mah, USAID Evaluation Report Standards
The report must present a well-researched, thoughtful, and organized effort to objectively evaluate a USAID strategy, project, or activity. Findings, conclusions, and recommendations must be based in evidence derived from the best methods available given the evaluation questions and resources available. The evaluation methods, limitations, and information sources must be documented, including by providing data collection tools and the original Evaluation SOW as annexes to the main report.
Following the submission of the draft evaluation report, the Mission or Washington Operating Unit Program Office will organize an in-house peer technical review to assess the quality of the draft report and ensure that comments are provided to evaluation teams. The Program Office will be responsible for ensuring that the final report is compliant with USAID evaluation policies and ADS 201. See the ADS Additional Help reference 201sai, Managing the Peer Review of a Draft Evaluation Report.
The draft report will also be shared with implementing partners whose projects (or activities) are examined in the evaluation and with other organizations that contributed funding to the evaluation or to the project/activity being evaluated. Funders, implementers, and members of the evaluation team must be given the opportunity to write a “statement of differences” addressing any unresolved differences of opinion, to be appended to the final evaluation report.
- Example: Sample Evaluation Covers.
- Guidance: USAID Graphic Standards Manual.
- Tool: Checklist for reviewing a randomized controlled trial.
5. Sharing, Reporting, Using, and Learning
Completing an evaluation report is not the end of the evaluation process. As the evaluation report moves toward completion, the Mission or Washington OU that commissioned the evaluation enters into the key phase of sharing, reporting, using, and learning from the evaluation.
Sharing: Transparency is a key practice of evaluation at USAID. As noted in ADS 201, “Findings from evaluations will be shared as widely as possible, with a commitment to full and active disclosure." At minimum, this requires the posting of evaluation reports to the Development Experience Clearinghouse (DEC) and evaluation data to the Development Data Library (DDL).
Reporting: Planned, ongoing, and completed evaluations must be reported annually in the Evaluation Registry of the annual Performance Plan and Report.
Using and Learning: Given the time and effort that is expended in planning and conducting evaluations, it is essential that Mission and Washington OUs use evaluations to understand performance, test development hypotheses, question assumptions and cause-and-effect relationships, and, ultimately, manage for results and learning. At minimum, following the completion of an evaluation, Missions and Washington OUs should respond to the evaluation through the development of an action plan for addressing the evaluation findings, conclusions, and recommendations.
Missions and Washington Operating Units (OU) should share and openly discuss evaluation findings, conclusions, and recommendations with relevant partners, donors, and stakeholders, unless there are unusual and compelling reasons not to do so.
Missions and Washington OUs should revisit their evaluation stakeholder analysis and dissemination plan toward the conclusion of an evaluation to ensure that it still reflects the priorities for dissemination. While sharing the evaluation report is the most typical form of dissemination, Missions and Washington OUs should also consider other methods of dissemination, such as hosting briefings with local stakeholders, partners, and other donors to discuss evaluation findings; featuring evaluation findings on their website, such as through articles or blog posts; and holding press conferences and issuing press releases.
In many cases, USAID Missions should arrange for translation of the executive summary into relevant local languages.
Two forms of evaluation report sharing are required:
- First, the Program Office must ensure that the final evaluation report is posted on the Development Experience Clearinghouse (DEC) no later than three months after completion. Exceptions to this requirement are granted only in very rare circumstances (see Guidance on Exemptions to Public Disclosure of USAID-funded Evaluations).
- Second, in addition to posting the evaluation report to the DEC, Program Offices must post quantitative data from the evaluation to the Development Data Library (DDL).
The annual Performance Plan and Report (PPR) documents USG foreign assistance results achieved over the past fiscal year and sets targets on designated performance indicators for the next two fiscal years. The PPR also includes the Evaluation Registry as a sub-module for documenting each Mission and Washington Operating Unit’s work on evaluation.
The Evaluation Registry includes information on evaluations completed in the most recent fiscal year, evaluations that are currently ongoing, and evaluations planned for the current year and two additional out-years. This includes required, non-required, external, and internal evaluations. As of FY2013, the data from the Evaluation Registry are also used to calculate the targets and actuals for the USAID Forward Evaluation Indicator. Evaluation status and budget data from the Evaluation Registry are critical to helping USAID understand the number of evaluations completed across the Agency, the totality of budget resources being devoted to evaluation, and trends across fiscal years. These data also help USAID demonstrate to external stakeholders, such as the White House’s Office of Management and Budget, the priority that USAID places on evaluation.
- Reference: Performance Plan and Report Guidance (FY2015). USAID only.
Utilization and Learning
The value of an evaluation is in its use. Evaluations should inform decision-making, contribute to learning, and help improve the quality of development programs. At minimum, USAID Program Offices should lead Missions and Washington Operating Units through the process of:
- reviewing findings, conclusions, and recommendations of evaluations that relate to their activities, projects, and DOs;
- identifying any management or program actions needed; and
- assigning responsibility and time lines for completion for each set of actions.
While learning and utilization are most often considered at the conclusion of the evaluation process, they can happen at various phases of the evaluation and at different stages of the Program Cycle. Evaluation use and learning may occur before or during the evaluation, shortly after it is completed, or long after the findings have been presented. It may occur during the development of a CDCS, the design of a project, or a portfolio review. Whenever evaluation use occurs, utilization and learning should be planned for and actively facilitated.
Assessing the Evaluation Process and Evaluator Performance
Following the completion of the evaluation, the Evaluation COR/Manager and others involved in the evaluation should consider not just the content of the evaluation report, but what they have learned from the entire evaluation process that might be helpful in conducting the next evaluation. An After-Action Review (AAR) is one formal means of capturing such lessons.
If the evaluation was contracted, the Evaluation COR should, when applicable, access the Contractor Performance Assessment Reporting System (CPARS) to file a Contractor Performance Assessment Report within 60 days of the evaluation's completion. CORs completing an assessment report should ensure that it accurately portrays the contractor's performance. Contractors use completed past performance reports when responding to solicitations. These reports are also used by Contracting Officers and CORs when assessing the past performance of contractors, and they incentivize contractors to produce superior products and services.
- Guidance: After Action Reviews (AAR)
- Guidance: User Manual for Contractor Performance Assessment Reporting System (CPARS).