No One Can Know Everything: Collaborating for Better Evaluation Recommendations
In June 2016, USAID/Jordan recognized that, while its systems were producing strong evaluations, there must be a way to further enhance the utility of final recommendations for learning and subsequent adaptive management. The Program Office used the lens of CLA to review the Mission's then-current evaluation processes and identify opportunities for improvement. Through this review, the Mission identified a need for closer collaboration between evaluation team members and USAID activity managers, so that evaluation recommendations would be developed and worded in ways that increased the likelihood of their use in improving programs.

There were initial concerns that greater involvement of USAID staff in developing and revising recommendations could undermine the independence of the evaluation team. Nevertheless, a new workshop was added to the evaluation process: after the evaluators finalized their key findings and conclusions, evaluation stakeholders would collaboratively co-generate the final recommendations. These workshops bring together the USAID technical managers of the activity being evaluated, the evaluation team members, Program Office (PRO) staff, and staff from the Mission's Monitoring and Evaluation Support Project (MESP), implemented by MSI. Thanks to all stakeholders' openness to continuous learning and improvement, the workshops have produced perceived improvements in the utility of final recommendations without undermining the integrity of the evaluation process.

More useful recommendations are expected to facilitate more efficient and effective use of evaluation results for adaptive management by Mission and implementing partner staff, ultimately improving development outcomes for the people of Jordan.