What to Expect When You're Expecting CLA: Lessons from the CLAIM Learning Network

Jun 26, 2018 by Larissa Gross
COMMUNITY CONTRIBUTION

This blog was written by Larissa Gross, Senior Strategist at the Pollen Group, on behalf of the CLAIM Learning Network.

What difference does collaborating, learning, and adapting (CLA) make to development? This is a question USAID/PPL and its partner LEARN have been trying to answer. Over the past 18 months, they set out to expand the knowledge base on CLA with the help of five implementing organizations testing the impact of CLA interventions across activities in different parts of the world.

So what? What did we find?

  1. CLA practices and organizational and development outcomes must be clearly defined from the outset to facilitate measurement.
  2. When measuring CLA's contribution, pivot logs are useful for identifying adaptive actions, CLA capacity self-assessments can garner partner buy-in, and detailed theories of change can help mitigate confirmation bias.

What to Expect When Practicing CLA
Based on our research, we compiled insights into what CLA may look like for different organizations and what to expect when you invest in CLA. The key findings members shared are:

  • One size does not fit all, so get ready for some self-reflection. Depending on the timeframe, resources, team members, activity goals, and other unique factors specific to your effort, CLA and the organizational processes, operations systems, and culture that support it will (necessarily) vary.
  • Investing in CLA can uncover turbulence - and that’s not a bad thing! It is important to recognize that organizations beginning to practice CLA will need to test out which approaches work best for their needs. Along the way, organizations will need to navigate a process of learning and adapting based on imperfect information - this is part of the process of better understanding the context in which you are working.
  • People will see that they are already doing it. CLA may look like existing practices within activities. If it looks familiar, then we are on the right track! Lean into this familiarity when promoting CLA by connecting to what teams are already doing.

How to Measure CLA’s Contribution to Development Outcomes

When the learning network members gathered to discuss our key learnings, we were asked to provide a main takeaway on measuring CLA: the headline of our 18 months of research. The common sentiment emerging from this discussion was that measuring CLA’s contribution is difficult, but it is possible! And, it can be done better.

Building on this sentiment, we pulled together key insights on the main challenges of measuring CLA, what methods and tools helped us measure CLA better, and how we would tweak our methods for future research. In particular, the learning network members emphasized:

  • Defining what CLA practices are, how they are interrelated, and which practices will be measured is key to understanding the power of your findings and comparing them with others’ findings. Generating both a general theory of change and a detailed theory of change can help minimize the difficulty of comparing across research agendas.
  • Methods that analyzed decision-making processes were most successful in connecting CLA to development outcomes, but they come with limitations, including recall and confirmation bias as well as an inherent focus on actions taken rather than those not taken.
  • Flexibility of research design is necessary due to the complexity of the research agenda. To increase the likelihood of producing robust findings, network members recommended including multiple partners and using varied tools to increase the adaptability of the research agenda.

Background on the CLAIM Learning Network

The CLA Initiative for Measurement (CLAIM) learning network launched in the fall of 2016 to answer the following questions: Does an intentional, systematic and resourced approach to collaborating, learning and adapting contribute to development outcomes? If so, how? And under what conditions? How do we know?

The five implementing partners in the network are:

  1. Counterpart International—focused on the Participatory Responsive Governance—Principal Activity (PRG-PA) in Niger to measure the degree to which staff use CLA-generated knowledge and learning in planning activities and executing decisions in their daily work, and the degree of empowerment that participants feel they have in those activities.
  2. The Global Knowledge Initiative—pursued replicable approaches for monitoring and evaluating collaboration by testing and refining the Context-Collaboration-Program Effects (CCPE) Analysis on the Learning and Innovation Network for Knowledge and Solutions (LINKS) program in Uganda.
  3. MarketShare Associates—built and tested a set of CLA-focused tactics such as coaching modules on adaptive management and pivot logs through the DFID-funded Arab Women’s Enterprise Fund (AWEF) in Egypt.
  4. Mercy Corps—field tested promising techniques for promoting adaptive management through pilot projects as part of the Analysis Driven Agile Programming Techniques (ADAPT) initiative.
  5. Pollen Group—conducted two comparative, longitudinal case studies of projects that have made significant investments in CLA in Bangladesh and Zambia.

COMMENTS (2)

Matt Polsky wrote:

Here are a couple of things for you to think about doing a little differently, and a couple of things to try to develop, as you're on the right track in your own way.

But first, know I am a super-generalist dabbler, with an interest in many things, including measurement. I've looked at the latter through the lens and frameworks of many fields. More often than not, I find that you guys in the Development field are ahead of other fields that also espouse measurement.

As a dabbler, I'm no expert. Still, sometimes I see things that might be worth considering by people within the field.

A couple of thoughts about the logical-sounding phrases: "...development outcomes must be clearly defined from the outset to facilitate measurement;" and "If it looks familiar, then we are on the right track!"

Certainly you want to think about measurement early and not proceed with the implementation of a project with no clue about how you're going to evaluate it. So, in a sense, "facilitate" can be OK, but be careful with "facilitate." You don't want to put the cart before the horse. It's a subtle thing, but you don't want the outcomes (or even decisions made along the way) to be overly constrained by the need to measure them.

Here's an article I did from another field that talked about this. As mentioned within it, it took me years to ponder and then work out what it was that initially bothered me:

http://www.sustainablebrands.com/news_and_views/new_metrics/matt_polsky/putting_cart_horse_five_anecdotes_about_sustainable_business_ 

What to make of five anecdotes from the author’s attendance at over a dozen sustainable business metrics conferences, and leading an interagency indicators initiative of New Jersey State Government in 2000-2002 that don’t fit the numbers narrative.

Also, from what I understand about the Development Evaluation field, which isn't a whole lot, you should allow for changes in what you measure as you learn from the experience of doing the project, account for unintended consequences which may then require some changes to measurement, and, in general, allow for the emergence of new developments and understandings. Among other things, you don't want an overly rigid system, including measurement, to overlook these. That's where transformational changes can come from.

On the other hand, you mention two areas I've never seen before: confirmation bias, as well as the "inherent focus on actions taken rather than those not taken," or the counterfactuals. I don't know how you can build on these two, but at least you're aware of them. Perhaps you could brainstorm some possibilities.

Hope this has been useful.

posted 5 months ago
Kat Haugh wrote:

Hi Matt, 

Thank you so much for sharing your insights on the blog. Your reflections are really interesting and helpful! Your point about making sure not to overly constrain outcomes or decisions because of the need to measure them is right on the money. We talked about that at length as a network. We decided that we needed a clearer definition of CLA from the outset of the project because we realized during the grant period that partners were measuring slightly different aspects of CLA, which made it hard to compare across projects and identify what activities counted as "CLA" and what activities didn't. If we had had a clearer definition from the outset, we would have allowed enough flexibility for partners to change that definition over time as they increased their understanding of CLA (in line with what is proposed under Developmental Evaluation - which also would have been interesting learning!), but we needed a shared understanding from the beginning to facilitate more meaningful measurement.

To your second point, we spent a great deal of time talking through how to mitigate our confirmation bias as a network and how to establish counterfactuals. Both proved to be a challenge for all of us. We share some of our reflections and ideas for future research in the measurement briefer if you'd like to take a look: https://usaidlearninglab.org/sites/default/files/resource/files/062618_claim_briefer_measure_b.pdf

Thanks again, Matt! 

posted 4 months ago