
What Works, What Doesn't, and Why: Notes From a Discussion with Partners on Operational Issues with M&E

Apr 14, 2014

The March 18 “What Works” event, hosted by USAID’s Office of Learning, Evaluation, and Research and the Society for International Development/Washington's Monitoring & Evaluation Working Group, provided an opportunity for USAID staff, implementing partners, and evaluators to open a dialogue on the challenges and opportunities in our shared responsibility for monitoring and evaluation, and to learn from each other about how to do better. Because turnout was so strong, the conversation was narrowed to focus specifically on evaluation. During the event, Cindy Clapp-Wincek, Director of USAID’s Office of Learning, Evaluation, and Research, posed a question to the gathered attendees: “What are the most significant obstacles to doing evaluation well?” A summary of the discussion and some notes are below.

Next steps

PPL/LER will meet with the Society for International Development (SID) and InterAction to determine how we move this conversation forward. We will try to post that information on Learning Lab in the next week.

Summary of key points 

The attendees broke into seven groups to discuss three categories: Procurement, Technical, and Contracting Officer’s Representative (COR) Management. Here is a brief summary of the most prominently mentioned obstacles:

Procurement

  • USAID does not provide adequate resources to effectively procure evaluation. This includes inadequate budget, inadequate time to collect and analyze data for the evaluation, and inadequate lead time to prepare for the evaluation.
  • There is a limited availability of skilled consultants to conduct evaluations (both international and local). This obstacle is compounded when not enough lead time is provided for an evaluation, as it is hard to get good people on short notice for a rapid deployment.
  • Evaluation statements of work (SOW) are often unclear and not sufficiently flexible to allow for necessary adaptations during implementation of the evaluation.

Technical

  • Data available to evaluators from USAID and implementers is often poor. For instance, performance monitoring indicators may be weak or nonexistent, relevant baseline data may be missing, and counterfactual data may never have been collected.
  • Evaluation is not well integrated into the project. For instance, there may be insufficient coordination between the implementer and USAID on the evaluation; the choice of evaluation type may be at odds with the project design (randomized treatment vs. targeted); and the timing of the evaluation may not suit the implementer’s use of the findings.
  • Evaluations often lack a range of appropriate methods to address important issues of implementation, theory of change, causal links, project contribution to outcomes, etc.

COR Management

  • USAID staff often have an insufficient understanding of the strengths and weaknesses of evaluation methods and of the time evaluations require, and little experience participating in evaluations.
  • There is a bias from USAID toward positive results, accountability, compliance, and political sensitivity in managing evaluations, rather than learning and feeding results back into the project.
  • USAID changes the SOW during planning or implementation of the evaluation.

More detailed notes

Procurement

  • USAID needs better integration of evaluation with the project being evaluated, particularly on the timing of the evaluation.
  • Evaluators need greater flexibility/adaptability in the SOW during implementation of the evaluation.
  • USAID does not provide enough money, enough time for evaluation and data analysis, or enough lead time, making it hard to get good people on short notice for rapid deployment.
  • Timeframe (to get IRB approval) and budget are inadequate.
  • Lack of flexibility in the SOW
  • Limited availability of skilled consultants
  • Organizational conflicts of interest (OCI)
  • Unrealistic expectations and budget
  • Third party evaluators—who pays them? Are they really third party if they are paid by the implementer?
  • Timing of procurement processes and decisions
  • Budget
  • Clarification (unclear and changing SOWs)
  • Inexperienced staff

Technical

  • Respective contributions of M&E
  • Data management and use in proper place
  • Feeding knowledge back into learning—not just focused on outputs, but implementation, theory of change/pace of change, causal links and impact
  • Insufficient coordination/involvement between implementer and USAID in development of indicators and questions
  • Data availability and quality—no baselines or monitoring data or counterfactuals
  • Mission expectations/understanding re: scope and strengths and weaknesses of evaluations
  • No good performance monitoring indicators
  • Lack of a range of methodologies (to deal with contribution vs. attribution and the gap between performance monitoring and evaluation)
  • Lack of common understanding of types of evaluations and level of rigor
  • Choice of evaluation and its relationship to the type of project (randomized treatment vs. targeted)

COR Management

  • Accountability/compliance motives
  • USAID staff not understanding research methods, needs, etc.
  • Insufficient capacity and resources—in both USAID field staff in M&E and local partners
  • Possible solution: bring USAID staff (M&E or technical staff) in on evaluations so they better understand the time required and the issues involved
  • Bias toward positive results
  • No flexibility in SOW
  • Focus on end results
  • Multiple stakeholders
  • Politically sensitive results
  • Changing SOW in the planning stage
  • COR invested in a particular, desirable outcome