Community Contribution

Let's Get Meta: Learning How to Learn from Evaluations

Dec 22, 2020
Carla Trippe

Insights from a Meta-Evaluation of USAID/Liberia's Portfolio from 2016-2020

As Social Impact’s (SI) five-year Liberia Strategic Analysis (LSA) contract facilitating data-driven decision making with USAID/Liberia comes to an end, we took a step back to synthesize 31 evaluations and assessments conducted between 2016 and 2020 into one meta-evaluation. The result? Two paths of reflection, explored below.


Why do a meta-evaluation?

The meta-evaluation captures rich lessons on design and implementation, spanning seven sectors and examining over $464 million in USAID investments. It identified systemic implementation challenges, emergent best practices, and how recommendations have been applied toward adaptive management. The reflections are not limited to implementation approaches; they also cover the kinds of award mechanisms that enable the desired change. What unfolds is a story about collective contribution toward strategic objectives and the conditions for self-reliance. This institutional knowledge is crucial for decision makers, especially USAID activity managers.

But we want to focus on the evolution of our learning approach.

We believe learning is not reading a report.

What does learning look like? We believe learning is not simply reading a report. That’s why, over the years, SI has put considerable thought into how evaluations can give USAID staff the information they need – packaged in the right way at the right time – for decision making.

The first critical lesson was learning how to emphasize utilization of the evidence. From design to data collection to analysis, each step in an evaluation is an opportunity to engage implementers, partners, and government counterparts in shaping how the information being produced will be used. The “How Do We Learn?” chapter provides tips on what engagement looks like at each step along the way. The bottom line is that ongoing engagement keeps the learning process inclusive and contextually relevant, and it fosters long-term self-reliance by helping local stakeholders think through how they can use the data.

The second critical lesson was learning how to facilitate ownership of the evidence and act collectively to realize development objectives. We built more interactive analysis sessions into the evaluation process earlier, involving USAID staff and stakeholders in making sense of the evidence. SI and USAID/Liberia also tested more collaborative workshop models that bring implementers, partners, and beneficiaries together to “evaluate” progress, including joint problem-solving and developing ways forward. We recognized that USAID and implementers do not work in a vacuum and that stakeholders must be engaged to realize development objectives. This is why learning events are so important, and why they have been successful tools for improving how USAID collaborates with implementers and host country governments toward adaptive management.

To share these learnings, the meta-evaluation includes “A Roadmap for Learning after Evaluations” – from dissemination to hosting a learning event to a post-activity review – and a note on “How To: Selecting a Learning Tool” for your development challenges, complete with a full toolkit on when to choose an evaluation, activity review, assessment, or other learning tool, given your time, cost, and other contextual parameters.

Ultimately, our evaluations should elevate local voices and lived experiences so that communities can promote the change they envision for themselves and for their country. The meta-evaluation revealed that the way we support learning should honor the agency of all actors in the complex environments in which we work. Evidence can inform adaptations to strategy and programming, but learning collaboratively can also strengthen partnerships, change behavior, and engender meaningful stewardship of development outcomes.