
You’ve come a long way, evaluations!

Apr 25, 2016
Winston J. Allen

On March 30, 2016, my colleagues and I participated in an event to mark five years since the release of the USAID Evaluation Policy. USAID released a report at the event taking stock of how evaluation practice has changed since that time. Co-hosted by the Brookings Institution and the Modernizing Foreign Assistance Network (MFAN), the event brought together a broad range of USAID partners and stakeholders to discuss the growing role of evaluation at USAID over the past five years.

Many acknowledged that the Evaluation Policy brought about changes within the Agency. Key among these changes were using evidence to better inform decisions, improving program effectiveness, being accountable and transparent to stakeholders, and supporting organizational learning. Yet there was also recognition of a continued need to balance the use of evaluations for learning with their use for accountability.

USAID Administrator Gayle Smith gave an unscripted keynote address reflecting on why the policy has been a success and on the work that still remains. Administrator Smith noted that a key feature of the Evaluation Policy is that “the policy is not static; rather, it is flexible with the capability of adapting to changes in the field of evaluation and development.”

This adaptive approach to evaluation practice and learning has allowed USAID to transform by building a culture in which staff are willing to use and learn from evaluations and to apply that knowledge to further improve development programming. I have seen the cultural shift Administrator Smith described in her remarks, one in which evaluations really have been a “game changer” for USAID and have inspired confidence among stakeholders in the Agency’s use of evidence and data to improve programs. USAID has shown that we can learn what does and does not work, rather than touting every project as a success.

In her remarks, Administrator Smith also identified the work that remains to be done. Turning to the future of evaluation at the Agency, the Administrator noted several remaining challenges:

  • Striking the right balance between learning and accountability.

  • Developing appropriate and effective methods and approaches for evaluation in complex environments.

  • Institutionalizing evaluation within the architecture of the Agency.


After the Administrator spoke, George Ingram, a Senior Fellow at the Brookings Institution, moderated a panel that included Ruth Levine, Director of the Global Development and Population Program at the Hewlett Foundation, who led the drafting of the Evaluation Policy as a former Deputy Assistant Administrator in USAID’s Bureau for Policy, Planning and Learning (PPL), and Wade Warren, the current Assistant to the Administrator for PPL. It was striking to see past and present leadership side by side, reflecting on the policy. Levine noted the participatory nature of its development: it was a collective effort by expert staff across the Agency.

Warren highlighted the value of senior leadership embracing the policy and championing evaluation efforts. Other examples Warren provided of how USAID has supported better evaluation practice included:

  • More than 1,600 staff have completed monitoring and evaluation training since 2011.

  • Guidance for program planning and management that includes evaluation planning from the very beginning.

  • Improved evaluation quality and practice through partnerships with leaders in the field, such as the International Initiative for Impact Evaluation (3ie).

Going forward, the Agency will continue working to strengthen staff capacity in evaluation, including improving understanding of how to co-design projects and impact evaluations, expanding tools, and building more partnerships for evaluation. Even more importantly, USAID has initiated several ongoing efforts to strengthen the Agency as a learning organization. Two examples:

  • All Washington Bureaus have developed annual evaluation action plans that assess evaluation quality and use, identify challenges, and set priorities for the year ahead; and

  • Several bureaus have synthesized all the evaluations relevant to a specific sector to summarize key findings and identify gaps in knowledge, which then inform sector learning agendas. For example, in March, the Bureau for Food Security published a synthesis report summarizing findings from 196 evaluations.

Despite the challenges on the road ahead, it is clear that the Agency has come a long way.