Welcome to the USAID Monitoring Toolkit!
The Monitoring Toolkit:
- Curates the latest USAID Program Cycle guidance, tools, and templates for monitoring USAID strategies, projects, and activities.
- Is a resource for USAID staff members and external partners who manage or implement USAID efforts.
- Complements USAID’s Program Cycle Operational Policy (codified in ADS 201) and is regularly updated to make sure content is current and consistent with policy requirements.
This Toolkit was developed by the Office of Learning, Evaluation, and Research in USAID’s Bureau for Policy, Planning, and Learning (PPL/LER).
How To Use
How to Use this E-Toolkit:
The Toolkit is organized to inform users how to monitor development interventions. It begins with a focus on monitoring within USAID’s Program Cycle but expands to provide more general information and best practices in the field of monitoring. Specific sections include:
- Monitoring in the Program Cycle
- Monitoring Approaches
- Monitoring Indicators
- Monitoring Data
- Analysis, Use, and Reporting
On each page of the Monitoring Toolkit, the sections are listed across the top of the page. Within each section, topics are listed along the left side of the screen that, when selected, provide detailed information and resources on a variety of subjects.
Each section’s landing page introduces specific topics and important considerations. Technical content is available through links listed under “Resources” on the right side of the page. These resources include USAID How-To and Technical Notes, general guidance documents, templates, worksheets, and more! Where applicable, links to additional resources are also provided.
Note: Some resources and additional links are available only to USAID staff. These are marked with the designation "USAID only."
Monitoring in the Program Cycle
The Program Cycle is USAID’s operational model for planning, delivering, assessing, and adapting development programming in a given region or country to advance U.S. foreign policy. It is described in USAID policy ADS 201. Monitoring plays a critical role throughout the Program Cycle and is used to determine whether USAID is accomplishing what it sets out to achieve, what effects programming is having in a region, and how to adapt to changing environments.
Performance monitoring and context monitoring occur throughout the Program Cycle, from Country or Regional Development Cooperation Strategies (CDCS/RDCS), to projects, to activities. Data from monitoring are used to: assess whether programming is achieving expected results; adapt existing activities, projects, and strategies as necessary; and, apply Agency learning to the designs of future strategies and programming. The diagram below identifies some of the ways in which monitoring is embedded in the Program Cycle.
Monitoring at USAID
Monitoring is integrated throughout the Program Cycle. Information from partners helps inform Missions for learning and adaptive management purposes. Monitoring information from Missions enables USAID as an Agency to understand its achievements at a corporate level and tell its story to Congress and the American people.
USAID uses existing monitoring information to inform Country Development Cooperation Strategies (CDCSs). Following approval of a CDCS, a Performance Management Plan (PMP) is written to accompany the strategy and includes information about what will be monitored. Project teams develop Project Monitoring, Evaluation, and Learning (MEL) Plans, and CORs/AORs/GATRs, in collaboration with implementing partners, develop Activity MEL Plans.
Based on the country context and development priorities articulated in the CDCS, monitoring approaches may be customized to be more effective for specific country contexts or programmatic approaches.
A Country Development Cooperation Strategy (CDCS), or regional equivalent (RDCS), is a formal strategy document that details what a Mission or Washington Operating Unit (OU) intends to achieve over the five years that the strategy will be implemented. Through the strategy development process, a Mission creates a Results Framework that depicts the integrated hierarchy or flow of results to be achieved in order to reach stated Development Objectives (DOs).
A Mission must identify indicators to monitor the results stated in the Results Framework. USAID requires that, in an annex to its CDCS, the Mission include a table of the indicators and other monitoring approaches that will be used to monitor progress toward achieving its DOs and to track contextual factors beyond the Mission's control that may affect implementation.
Example Results Framework
The Performance Management Plan (PMP) is a Mission-wide planning and management tool for monitoring, evaluating, and learning related to the implementation of the CDCS. A PMP is created within three months of a Country Development Cooperation Strategy (CDCS) being approved and should be reviewed and updated regularly to make sure it accurately reflects what is happening in the Mission. It is important to recognize this dynamic aspect of a PMP: it is not a static document that is ever finalized; rather, it is a living resource that evolves in parallel with the Mission's strategy, projects, and activities.
PMPs should take the illustrative information first laid out in the CDCS and revise or expand it to create a comprehensive list of the performance and context monitoring approaches, including indicators, that will actually be used throughout the life of the strategy. Beyond simply identifying approaches, the PMP should define each approach and detail how and when information will be collected for it, what will be collected, and who is responsible for it.
In addition to information on how the Mission will monitor results, PMPs include detailed information on the Mission’s Evaluation Plan and Collaborating, Learning and Adapting (CLA) Plan. For more information on these aspects of the PMP, please see the Evaluation Toolkit.
Project MEL Plans
Project Monitoring, Evaluation, and Learning (MEL) Plans are an essential component of a Project Appraisal Document (PAD) which is approved prior to launching a project.
The Project MEL Plan must:
- describe how the Project Team will monitor progress toward planned results and conditions outside the control of the project that may affect implementation;
- provide a summary description of performance or impact evaluations that will be conducted during or after implementation of the project, including both required and non-required evaluations; and
- describe how the Project Team will generate and apply new knowledge and learning during project implementation.
When planning monitoring for a project, it is important to think through how the Project Purpose aligns with the CDCS and which activities will implement the project design. All applicable indicators from the PMP should be included in the Project MEL Plan, as should any activity-level indicators that will be used to inform project implementation. Project teams should customize monitoring approaches to the project and context.
Activity MEL Plans
Within 90 days of an activity being awarded, the Activity Monitoring, Evaluation, and Learning (MEL) Plan is drafted. Unlike a Performance Management Plan (PMP) or Project MEL Plan, the Activity MEL Plan is typically written by the implementing partner and then reviewed and approved by USAID. It is the COR's/AOR's responsibility to review the plan, collaborate on any necessary changes, and finally approve it. After approval, the Activity MEL Plan should evolve and adapt alongside the activity work plan, being updated at regular intervals based on what has been learned to date.
An Activity MEL Plan also needs to include additional monitoring approaches unique to that specific activity, reflective of the activity’s programmatic approach, operational context, and management needs.
As Activity MEL Plans are used by USAID’s partners to guide efforts and by USAID to manage the activity, it is important that these plans clearly detail how the partner will monitor performance as well as programmatic and operational context. Beyond including a Performance Indicator Reference Sheet (PIRS) for each performance indicator, the plan should identify how data will be collected and stored and how data quality will be ensured, among other important topics. The Activity MEL Plan should also include information on the activity’s Evaluation Plan and Learning Plan.
Agency Policies and Initiatives
To the extent that a Mission or Washington OU’s strategy, project, or activity is aligned with a given initiative or policy, the Mission or Washington OU should be sure to incorporate all relevant indicators and guidance throughout their Program Cycle processes.
The term monitoring approaches refers to the three main categories of monitoring in the Program Cycle, as specified in ADS 201. These approaches are performance monitoring, context monitoring, and complementary monitoring. Though only performance monitoring is required in the ADS, a well-rounded monitoring plan may employ all three of these approaches, provided they fit the Mission's programming needs and culture.
ADS 201 defines performance monitoring as “the ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to reveal whether implementation is on track and whether expected results are being achieved. Performance monitoring includes monitoring the quantity, quality, and timeliness of activity outputs within the control of USAID or its implementers, as well as the monitoring of project and strategic outcomes that are expected to result from the combination of these outputs and other factors. Performance monitoring continues throughout strategies, projects, and activities.”
ADS 201 encourages USAID to look beyond just the performance of its strategies, projects, and activities when monitoring programming. Missions and Washington Operating Units (OUs) should also monitor the surrounding context.
ADS defines context monitoring as, “the systematic collection of information about conditions and external factors relevant to the implementation and performance of a Mission or Washington OU’s strategy, projects, and activities. This includes information about local conditions that may directly affect implementation and performance (such as non-USAID projects operating within the same sector as USAID projects) or external factors that may indirectly affect implementation and performance (such as macro-economic, social, or political conditions). Context monitoring should be used to monitor assumptions and risks identified in a CDCS Results Framework or project or activity logic model.”
Complementary monitoring is a blanket term used to describe any monitoring tool or approach beyond USAID’s standard performance and context monitoring practices. Complementary monitoring may be used in situations where results are difficult to predict due to dynamic contexts or unclear cause-and-effect relationships, or where traditional monitoring methods may not suffice.
While performance and context monitoring require a level of predictability, complementary monitoring can measure unintended results, perspectives, and a wide range of other factors that have an influence on the results we intend to achieve. Complementary monitoring includes complexity-aware monitoring approaches.
Selecting indicators, determining baselines, and setting targets are fundamental aspects of monitoring in the Program Cycle. For example, choosing an appropriate number of indicators that are well-defined and accurately monitor results can increase data quality used for reporting and decision-making. Data from these indicators can also inform the Mission’s learning agenda and can provide evaluation teams with necessary information to understand what project or activity results have been achieved.
USAID should ensure alignment of monitoring efforts, including sharing information about performance indicators, not only along the levels of the Program Cycle but also between USAID and its implementing partners as well as between implementing partners and their partners or sub-contractors.
Selecting and Refining Indicators
Identifying the appropriate number and combination of monitoring approaches, including indicators, is a critical aspect of developing and maintaining an effective monitoring plan. While such a plan must first meet all Agency requirements in terms of including mandatory performance indicators, it should also incorporate the priorities and existing efforts of host country governments, implementing partners, and other donors, to the extent possible, in order to align efforts and reduce data collection and reporting burdens.
Monitoring plans should be reviewed on a regular basis (at least annually) to ensure that the selected indicators continue to be relevant and useful for management needs.
Overall, the process of selecting and refining the suite of monitoring approaches used in a monitoring plan is an evolving process.
Data disaggregation is the process by which performance indicator data are separated into their component parts to meet analytical interests of a Country Development Cooperation Strategy’s (CDCS) Results Framework or a project’s or activity’s logic model. Typically these component parts, or subgroups, reflect demographic characteristics. At a minimum, USAID requires that all person-level indicators be disaggregated by sex.
Disaggregated data improve understanding of the progress toward achievements that an indicator captures, by providing details of the experiences of subsets of beneficiaries or processes monitored by that indicator. Indicator data that are disaggregated by relevant subgroups can provide richer information, often allowing for greater insight and a fuller understanding as to whether an activity, project, or Mission is progressing toward stated objectives. With this information, USAID is better equipped to manage adaptively.
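The idea of disaggregation can be sketched in a few lines of code. This is a minimal illustration, not a USAID tool; the records and field names are invented for the example, and sex is used because it is the Agency's minimum required disaggregation for person-level indicators.

```python
from collections import Counter

# Hypothetical person-level records for a training indicator;
# the field names are illustrative, not an official USAID schema.
participants = [
    {"id": "P-001", "sex": "female"},
    {"id": "P-002", "sex": "male"},
    {"id": "P-003", "sex": "female"},
]

# The aggregate value of the indicator: total participants trained.
total = len(participants)

# The same data separated into component parts (disaggregated) by sex.
by_sex = Counter(p["sex"] for p in participants)

print(total)         # 3
print(dict(by_sex))  # {'female': 2, 'male': 1}
```

The aggregate number alone says the activity reached three people; the disaggregated counts reveal who those people were, which is the richer information the paragraph above describes.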
Indicator Reference Information
Once a Mission or Washington Operating Unit (OU) has identified an appropriate number of performance and context indicators that meet its information needs, the next step is to develop reference information for those indicators, documented in indicator reference sheets. A Performance Indicator Reference Sheet (PIRS) is required for each performance indicator. A Context Indicator Reference Sheet (CIRS) is recommended for context indicators.
To ensure the provision of consistent, timely, and high-quality data, each indicator is required to have certain pieces of reference information associated with it. This information includes not only a definition but also the source of the data, the frequency with which the data will be collected, and any necessary disaggregation, among other elements.
PIRS and CIRS should be accessible to all parties collecting, analyzing, or using indicator data.
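To make the reference-information requirement concrete, the sketch below models a PIRS-style entry as a simple record and checks it for required fields. The field names are illustrative assumptions, not the official PIRS template.

```python
# A minimal sketch of the kinds of reference fields a PIRS captures;
# these field names are illustrative, not the official template.
pirs_entry = {
    "indicator": "Number of farmers trained",
    "definition": "Count of unique farmers completing the full course",
    "data_source": "Implementing partner training attendance records",
    "collection_frequency": "quarterly",
    "disaggregation": ["sex"],  # sex disaggregation is the Agency minimum
    "responsible": "Implementing partner M&E staff",
}

# A simple completeness check: every indicator's reference sheet
# should answer what, from where, how often, and how disaggregated.
required = {"indicator", "definition", "data_source",
            "collection_frequency", "disaggregation"}
missing = required - pirs_entry.keys()
print(sorted(missing))  # []
```

Keeping reference information in a structured form like this makes it easy to share with all parties collecting, analyzing, or using the indicator data, as the guidance above recommends.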
Baselines and Targets
To effectively gauge changes in aspects of performance that Missions or Washington Operating Units (OUs) are monitoring, USAID requires the use of baselines and targets. For context indicators, the use of baselines and triggers is recommended. ADS 201 defines baselines and targets as follows:
Baselines: The value of an indicator before major implementation actions of USAID-supported strategies, projects, or activities. Baseline data enable the tracking of changes that occurred during the project or the activity with the resources allocated to that project or activity.
Targets: Specific, planned level of result to be achieved within a specific timeframe with a given level of resources. Targets should be ambitious but achievable given USAID (and potentially other donor or partner) inputs. Missions and Washington OUs are accountable for assessing progress against their targets.
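The relationship between baselines, targets, and observed values can be expressed as simple arithmetic. The numbers below are invented for illustration; a real indicator's baseline and target would come from its reference sheet.

```python
# Illustrative values only; real baselines and targets come from
# the indicator's Performance Indicator Reference Sheet (PIRS).
baseline = 40.0  # indicator value before major implementation actions
target = 60.0    # planned level of result for the reporting period
actual = 55.0    # value observed through performance monitoring

# Change since the baseline, attributable to the period of implementation.
change_from_baseline = actual - baseline

# Share of the planned baseline-to-target improvement achieved so far.
progress_toward_target = (actual - baseline) / (target - baseline)

print(change_from_baseline)              # 15.0
print(round(progress_toward_target, 2))  # 0.75
```

In this sketch the activity has achieved 75 percent of its planned improvement, which is the kind of progress-against-target assessment Missions and Washington OUs are accountable for.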
Monitoring data are the building blocks for learning and adapting in the Program Cycle. They assist in understanding what is working and what is not in terms of achieving objectives. If data are not of good quality, however, they can be misleading and possibly result in the wrong decisions being made.
Due to the importance of these data, there are several questions Missions or Washington Operating Units (OUs) should consider for all data being collected through monitoring efforts: Where will the data come from? What level of quality are the data expected to meet? How will data be gathered and stored to protect both their integrity and the privacy of the people from whom they were collected?
Thinking about and planning around all of these data-related issues can help ensure that data are of sufficient quality to be useful for the Mission, Agency, and other stakeholders as they continue to make important strategy and programming decisions.
Ensuring that USAID is using the highest quality data available for making decisions is of the utmost importance to the Agency. For this reason, USAID has identified five data quality standards that all data from performance monitoring indicators must meet:
- Validity: Data clearly and adequately represent the intended result.
- Integrity: Data have safeguards to minimize the risk of transcription error or data manipulation.
- Precision: Data have a sufficient level of detail to permit management decision making.
- Reliability: Data reflect consistent collection processes and analysis methods over time.
- Timeliness: Data are available at a useful frequency, are current, and are timely enough to influence management decision making.
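Some of these standards lend themselves to simple automated checks. The sketch below illustrates checks in the spirit of the validity and timeliness standards; the thresholds and function are assumptions for the example, not Agency policy or a substitute for a Data Quality Assessment.

```python
from datetime import date

def check_record(value, reported_on, today=date(2024, 1, 15),
                 max_age_days=365, valid_range=(0, 100)):
    """Illustrative checks inspired by two of the five standards;
    the thresholds here are assumptions, not USAID requirements."""
    issues = []
    lo, hi = valid_range
    # Validity-style check: the value falls within a plausible range
    # for the result the indicator is meant to represent.
    if not (lo <= value <= hi):
        issues.append("out of valid range")
    # Timeliness-style check: the data are recent enough to
    # influence management decision making.
    if (today - reported_on).days > max_age_days:
        issues.append("stale data")
    return issues

print(check_record(150, date(2022, 1, 1)))   # ['out of valid range', 'stale data']
print(check_record(42, date(2023, 12, 1)))   # []
```

Checks like these do not replace human review, but they can flag obvious problems before data reach a report.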
As a means of verifying whether data are meeting these standards, USAID requires that a Data Quality Assessment (DQA) be conducted on all data reported to an entity external to the Mission or Washington Operating Unit (OU) that originally collected the data.
Data sources refer to the origins of the performance and context monitoring data that USAID uses to learn, adapt, and make decisions. There are generally three main sources of data USAID relies on: data USAID collects itself, data collected by USAID’s implementing partners, and data collected by third parties such as other donors or host country governments.
One way USAID staff independently collect data directly is through direct observation and site visits. Typically, observation happens in-person, but Missions are also finding creative ways to monitor, such as through satellite imaging and live video feeds. In some cases, such as in disaster response areas, USAID may be implementing activities directly and will collect data as a part of that process. USAID may also organize evidence summits or other learning activities which could be considered sources of monitoring data.
For monitoring data collected by implementing partners, it should be explicitly clear where partners are obtaining their information. For example, is information coming from focus groups with beneficiary farmers, or is it a nationwide survey of small business owners? This should be documented in the PIRS.
If data come from third-party sources such as a government ministry or international organization, the source should be accompanied by descriptive information on where and how the data can be accessed in the future, such as a link to a website where the data are available.
Data Storage and Security
Proper data storage and security are critical to protecting data integrity, optimizing data usability, and safeguarding potentially sensitive or personally identifiable information.
Data storage and security systems can range from hard-copy files locked in file cabinets, to a password-protected spreadsheet, to a sophisticated cloud-based management system with role-based access controls.
For guidance on USAID’s open data policy, USAID staff and partners should refer to ADS 579. ADS 579 provides a framework for systematically collecting Agency-funded data in a central repository, structuring the data to ensure usability and making the data public, while ensuring rigorous protections for privacy and security.
Analysis, Use, and Reporting
What happens after data are collected is perhaps the most critical aspect of monitoring in the Program Cycle.
In order for monitoring efforts to truly serve their purposes, the monitoring data collected by USAID and its partners should be analyzed, used, shared, and reported. Without these final steps of actually using the monitoring information produced to inform USAID programming, the time and resources devoted to collecting data will have been wasted.
Data analysis plays a very important role in an effective monitoring system. Even the most straightforward data may require some processing and analysis to ensure they are accurate and make sense, but many data require substantial analysis to reach a state where they are usable and ready to be incorporated into a learning activity or report.
The kind of analysis necessary depends on the kinds of data that were collected and how those data are intended to be used. Qualitative data will often undergo content or pattern analyses to see trends. Quantitative data may undergo fairly simple analyses to generate sums or averages, or they may require more complex approaches such as regression analyses. Data may require multiple analyses, such as if data must be disaggregated and therefore analyzed both as aggregates and disaggregates.
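A minimal example of the "fairly simple analyses" mentioned above: generating a sum, an average, and a basic trend check from quarterly indicator values. The values are invented for illustration.

```python
import statistics

# Illustrative quarterly values for a quantitative performance indicator.
values = [120, 135, 150, 155]

# Simple aggregate analyses: a total for reporting and an
# average per reporting period.
total = sum(values)
average = statistics.mean(values)

# A very simple trend check: did each period improve on the last?
improving = all(later > earlier for earlier, later in zip(values, values[1:]))

print(total)      # 560
print(average)    # 140
print(improving)  # True
```

More complex needs, such as regression analyses or qualitative content analysis, would call for dedicated tools, which is exactly why data analysis plans are worth writing down in advance.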
Project and Activity MEL Plans are encouraged to include data analysis plans. These plans can clarify expectations for how certain data types will be analyzed, including any specific software that may be necessary. They are also an opportunity to ensure a consistent data analysis approach across a project or among multiple partners.
USAID’s monitoring data have a variety of potential uses and users, some within the Mission and others external to the Mission, or even external to USAID altogether.
Within a USAID Mission, monitoring data can be used to inform portfolio reviews and decisions about possible adaptations to development programming. Monitoring data can also be used to help decide if an evaluation is needed or to inform or support evaluation findings. Ideally, the potential uses of monitoring data are identified well in advance, so that the appropriate data can be collected and analyzed in time for their ultimate use.
USAID regional and pillar bureaus may use monitoring data to understand trends across a region or sector, even though some of the nuances of individual Mission data points may be lost at this level.
Host country governments and other donors may have uses for monitoring data, if the data are available to them. Often this may be discussed during strategy or project design stages, though some uses may not become clear until later in the Program Cycle.
Sharing and Reporting
In order for monitoring data and information to be fully utilized, they should be shared with those who may use them. Some monitoring data must be reported to meet policy requirements, to fulfill Congressionally mandated reporting, or to support the tracking of Presidential Initiatives. For any data that are reported externally by the Mission, Data Quality Assessments (DQAs) must be conducted (see the Monitoring Toolkit page on data quality).
The annual Performance Plan and Report (PPR), which has its own processes and guidance, is the most typical means by which USAID Missions report to Washington.
Where implementing partners are reporting monitoring data to USAID, there may be reporting requirements stated in awards, such as the requirement for quarterly or annual reports. Partners may also be asked to report to USAID through formal management information systems. When reporting on monitoring data, partners should find a way to effectively communicate whether results are being achieved.
Data reported to USAID may be subject to USAID’s Open Data Policy detailed in ADS 579 and discussed further in the Monitoring Data section of this toolkit, under the topic of Data Storage and Security.