This resource covers the basics of data collection for performance monitoring, including: primary and secondary types of data sources, common data collection methods, and the process of identifying appropriate data collection tools. The primary audience is USAID Program Officers, Monitoring, Evaluation, and Learning...
This resource describes how to prepare and maintain an Activity Monitoring, Evaluation, and Learning Plan.
A Discussion Note introducing the concept of complexity and its relation to USAID programming. The paper outlines five complexity-aware monitoring methods.
This document provides a data quality assessment checklist that operating units can use when conducting data quality assessments.
The purpose of this document is to provide a foundational understanding of probability sampling to USAID staff, equipping them to be well-informed commissioners and consumers of surveys, evaluations, and other products (hereafter referred to as studies) that require probability sampling. We hope that it will serve as a resource for commissioners to make informed decisions about surveys and to use monitoring, evaluation, and learning (MEL) resources effectively. The main audience for this document includes monitoring, evaluation, and learning specialists, Contracting Officer’s Representatives (CORs), and Agreement Officer’s Representatives (AORs).
This guide provides information for Agency staff and implementing partners on remote monitoring techniques and when they can be employed.
A review of how systems thinking and political economy lenses fit together.
How to ground programming by making sense of data with stakeholders.
A composite indicator (or index) combines two or more data sources into a single measure. From the Self-Reliance Metrics to the CSO Sustainability Index, composite indicators are everywhere at USAID. They are in our Country Development Cooperation Strategies and in the Activity Monitoring, Evaluation, and Learning plans...
A context monitoring solicitation as a model.