Fail Again. Fail Better

Nov 17, 2020 by Emily Janoch
COMMUNITY CONTRIBUTION

“Ever failed. No matter. Try again. Fail again. Fail better.” --Samuel Beckett

Here’s my favorite part of that quote: the ultimate goal is not a lack of failure; it’s better failures. That’s good news for CARE, because we just published round two of our Learning From Failure initiative, and…I know this will surprise everyone…we haven’t eliminated failure yet. We do have some hopeful signs that we’re failing better, or at least that we’re improving on some concrete weaknesses we identified in the first round.

It’s an interesting process to launch the second phase of learning from failure. In the first round, we didn’t know what we were going to find. We spent as much time talking about how it was the first-ever report of its kind as we did about the actual failures. Our case study admitted, “It's still very early to see specific development impacts.”

Round two isn’t quite the same. It’s not new anymore, so there’s less excitement at having invented something. We’re not discovering data and themes for the first time. In a lot of ways, the stakes are higher. Round two of learning from failure becomes an exercise in continuous performance improvement, rather than a journey of discovery. If we don’t see improvements, we don’t have the excuse that it’s too early to tell.

It also takes a sustained commitment. Launching an exploratory exercise at a small scale is easy, especially when no one quite knows what the answers will be. Pulling together a few pieces of content over a few months is pretty straightforward. It takes some staying power—and real support from leadership—to keep up the work over time, especially in the middle of a pandemic. That’s even more true once we’ve seen one round of results and had a chance to understand the work that it takes to improve.

So what does round two of learning from failure look like? Our podcast is still going strong, with a whole series over the summer on the importance of equity in local partnerships and several episodes on what we’ve learned during COVID, and what we would change if we could do it all over again. We also published the second meta-analysis, looking at updated results. There’s a podcast on what we learned from the meta-analysis process here.

What have we learned about failing better? Here are some of our top lessons:

  • Failure can lead to progress: Two of the big areas that came up in our first round of learning from failure were Monitoring, Evaluation, Accountability, and Learning (MEAL) and gender. Those lessons guided some key investments around improving our MEAL and gender work, and those are the areas where we saw the biggest progress this year. Those investments included building a series of mandatory internal training courses on MEAL, and building new guidelines for improving gender transformative programming and MEAL. So focused investments can make a difference.
  • We need to focus more on adaptation: A lot of our failures in this round were around understanding context—especially how contexts changed, and what realities were on the ground as input supplies, markets, and local capacity shifted. The data from this round implies that focusing just on design won’t be enough to address the context problem. We also need to build in more adaptability. Another core theme was that we need to get more proactive about adaptation—not just waiting for contexts to shift, but always looking for ways to make implementation more streamlined, efficient, and higher quality throughout the life of a project.
  • Change (and our ability to measure it) is still slow: While we are proud of being one of the few multi-mandate international NGOs able to report its global impacts against the Sustainable Development Goals, we haven’t cracked the nut of getting to rapid-cycle learning and near real-time evidence about what’s not working, especially at a global level. We’re still looking mostly at lagging indicators, even in the failures highlighted in evaluations. This year, a lot of our focus is on how to get faster at this analysis. That includes building out more work around adaptive management, faster data systems to look at key indicators, and more proactive testing of specific ideas using agile methods.
  • We need to develop action plans for specific actors: While we did present tailored 2019 failure analysis to many project- and region-specific teams, the recommendations and action plans remained at the global level. With the 2020 analysis, we need to not only present tailored findings to each relevant team, but also customize action plans and recommendations so that each team can take forward the actions most relevant to them.


We’re not alone: We’ve had some wonderful experiences sharing this journey with others, both inside and outside international development. We’re always learning from new places. The most recent publication from the Food Security Network on Learning from USAID Food Security Development Mid-Term Evaluations echoed many key trends we saw in our analysis. The importance of streamlining activities, focusing on high-quality implementation, and focusing more on sustainability plans all showed up in CARE’s analysis as well.

Want to join us on this journey? We’d love to hear more about what you are working on to learn more from failure. E-mail me ([email protected]) to share your experiences.
