Cholera to COVID-19: Applying CLA in Crisis

Apr 2, 2020 by Emily Janoch Comments (0)
COMMUNITY CONTRIBUTION

This blog is a response to our invitation to reflect on how this community is using CLA approaches to respond to COVID-19. We’d love to hear from you, too! To add a blog, click here and select “Add a Blog” in the blue bar. 

I’ll admit, I had about a minute on the second day of COVID lockdown where I thought, “maybe this time will be slower for me. It’s so hard to prioritize learning in emergencies.” After all, I’m the Director of Knowledge Management and Learning, and the vast majority of my time usually focuses on development programming. I won’t be travelling in the immediate future, so I was planning to get around to some of the long-term items that coast along my to-do list as immediate priorities got put on hold for COVID response.

I could not have been more wrong. Collaborating, Learning, and Adapting have been at the heart of CARE’s response to COVID. I’ve been more engaged with more people on CLA in the past three weeks than at any time before.

We’re having to flex all of our adaptation muscles (and quickly build new ones) to re-purpose existing programs to meet new needs. In less than a week, 79 percent of CARE offices had contingency plans in place for how their current programs would adapt to the new situation. In four days, the Gender in Emergencies team published advice about how COVID is going to put special burdens on women. We’ve written new guidelines for how savings groups can adapt to COVID, and how we should think differently about cash and voucher programming.

Perhaps most surprisingly, we’re also finding space for pause and reflect. Even amid the breakneck pace of trying to protect all of our staff, and get contingency plans in place for the inevitable lockdowns, we’re making space to learn. Even better, we’re applying our learning to what we’re doing now.

The team took a little time to go back over our After-Action Reviews from our past responses to the 2011 cholera outbreak in Haiti, the 2014-2015 Ebola response in West Africa, and more recent outbreak responses in Yemen, DRC, and Ecuador. That allowed us to crystallize some core program recommendations for how to get it right this time.

We’re even building on our learning from failure approach to capture examples of what we can do better this time—both from the HQ perspective and the Country Office teams.

I’ve been amazed at how hungry people are for these tools. Teams involved in the pause and reflect conversations—which are usually about 30 minutes—are finding that these sessions help sharpen their thinking and turn their reflections into action points to move forward. Instead of feeling like one more task crammed onto an already full plate, people are finding that having even a small space to breathe and think helps them figure out what’s next.

From a morale standpoint, this feels like it’s making a difference. Having these conversations lets people see their own expertise and gives us a space to recognize the incredible human power we have here. Teams see how much they are capable of, and how much they have to offer the wider community. We’re also making everything open source so that anyone who wants to use our tools can access them.

The process has also highlighted that our past investments in CLA—like hosting After Action Reviews, documenting the findings, and saving them somewhere accessible—are invaluable. We were able to access that learning and reflect it back to teams almost immediately. I have never been so grateful for meeting reports in my life as I was this week—even the detailed reporting of the small group work.

In a comment that will keep me going on my worst professional days, a much-valued coworker who sat in on one of these calls told me, “In such times of stress and uncertainties, when there are too many unknowns and fears, it is important to have such anchors that remind us of our commitment to serve women and girls at the best of our capacity.”

 

 


How Are You Using CLA Approaches to Respond to COVID-19?

Apr 1, 2020 by USAID Learning Lab Comments (1)
COMMUNITY CONTRIBUTION

We are experiencing something that none of us expected. The context has shifted under our feet. The world has changed significantly. And yet, our development work must go on and be used to support families, communities, countries, and a world in crisis.

We invite you to share the ways in which you’re using Collaborating, Learning & Adapting (CLA) approaches to switch gears and be responsive to this global pandemic. Submit a blog about your experience - personal or professional - or simply write a response to this post below. (To add a blog, click here and select “Add a Blog” in the blue bar. To add a comment, you will need to become a Learning Lab member, which you can do here.)

Here are some reflection questions to get you thinking:

  • Who are you collaborating with that you weren’t before? How have your forms of collaboration shifted?
  • How are you being intentional about asking the right questions, getting the answers you need, and sharing that information with others to adapt programming? What does that look like for you in a rapidly evolving situation?
  • How have your personal and professional priorities shifted in light of the pandemic? What are you doing differently as a result?
  • How have you adapted programming to respond to the pandemic? Have you been able to? Why or why not?
  • What CLA tools and approaches are you finding most useful? How are you using them? To what end?

For some inspiration, here are some responses from our LEARN team:

  • I am rapidly learning how to use Zoom and other online tools for virtual meetings and facilitated experiences that need more advanced settings, such as virtual breakout rooms. - Monalisa
  • I am adapting scopes of work to prioritize COVID-19 learning needs. - Laura
  • I am taking walks while I’m on calls when I can to get fresh air and stroll around my neighborhood (safely!). - Kat
  • I’m adapting to my energy levels. For me, my day is no longer 8 straight hours of working like I was in the office. I’m working throughout the days and evenings, and taking longer breaks in between for my personal life.  - Monica K.
  • We are helping the client and our partner to rapidly learn and adapt, as they convert a planned TDY to online field support. This includes reassessing purpose, learning new technical tools and facilitation approaches, reimagining relationships and collaboration, scenario planning, and crowd-sourcing for innovation. - Eva

We look forward to hearing from you. Most importantly, we hope that you and your families are healthy and safe at this time.

What the CLA Case Competition Winners Tell Us about the Journey to Self-Reliance

Apr 1, 2020 by Monica Matts Comments (0)

In September 2019, USAID’s Bureau for Policy, Planning and Learning and the LEARN contract announced the winners for the 2019 CLA case competition. This was the fifth year of the competition, and we continue to be impressed by the quality of the entries and the compelling stories that they tell of CLA in action. If you haven’t read about the winners yet, you can find a short summary of their cases here.

USAID also conducted a CLA Case Competition Self-Reliance Analysis of the cases to highlight examples of USAID staff and implementing partners using CLA approaches to improve organizational learning, contribute to better development outcomes, and foster country self-reliance.

In order to understand linkages between CLA and the Journey to Self-Reliance (J2SR), the CLA case competition entry form included a new question: “Did your CLA approach contribute to self-reliance? If so, how?” By including this question, we were hoping to learn more from the USAID Missions and implementing partners who submitted cases about how they view the relationship between CLA and the Journey to Self-Reliance, USAID’s approach to development and humanitarian assistance. Because the Journey to Self-Reliance is a relatively new concept, we did not score the submissions on their responses to that question. However, there were several interesting and insightful examples, and these were analyzed as part of the Self-Reliance Learning Agenda (SRLA), which supports USAID in operationalizing the Journey to Self-Reliance. To help explore the link between CLA and self-reliance, the SRLA team analyzed the entries from the 2019 competition and drafted a report. This blog describes some of the findings from that analysis and highlights some of the judges’ favorite examples of CLA supporting self-reliance.

Self-Reliance in the CLA case examples

As USAID’s Policy Framework describes, building self-reliance requires both commitment and capacity; USAID fosters self-reliance by championing local solutions that support capacity and commitment. CLA approaches help USAID Missions and implementing partners to work more effectively with local actors, country partners, and other stakeholders to generate and share learning that can be used to adapt development programs so they are more effective in supporting country self-reliance. In particular, the SRLA analysis highlighted a few common pathways to self-reliance (including capacity strengthening, engaging with the private sector, and financing self-reliance) and found that a variety of CLA approaches support these pathways.

One pathway is capacity strengthening, and several cases described how CLA approaches can support self-reliance in this way. This case from RTI and USAID/Kenya describes how the Kenya Youth Employment and Skills Program (K-YES) worked collaboratively with Technical and Vocational Education and Training (TVET) centers to understand their capacity development needs. As a result of the assessment process, K-YES was able to develop a tailored and systematic approach to institutional strengthening. An SRLA capacity strengthening paper[1] finds that participatory approaches to determining capacity, such as the one used in Kenya, actively engage local actors in identifying capacity strengths and weaknesses — including informal practices and social norms — that might not be obvious to outsiders and that help to identify what is already working well. As a result of the institutional strengthening, the TVET centers went on to access new funding from public and private sectors, improve their performance scores and, ultimately, assist more young Kenyan men and women in gaining skills and obtaining new or improved employment. As the authors note, CLA approaches have also been taken up by the TVET centers themselves and they are “now able to identify areas of learning and design collaborative processes that ensure inclusive decision making and collective learning, which are key ingredients for sustainability and self-reliance.”

This case from Banyan Global and USAID/Honduras also involved strengthening capacity for a youth employment program. The objective of the Empleando Futuros activity was to decrease risk factors and increase employment for youth. The activity recognized the importance of understanding the local context and working closely with stakeholders--especially the private sector and the youth targeted for the program--to ensure that programming was as effective and relevant as possible. To build capacity, the implementing partner, Banyan Global, worked with local stakeholders to develop a training program for youth. When the activity staff saw high dropout rates among participants, they used CLA approaches to revisit their design and assumptions, examine the evidence, and engage with partners and the private sector to get feedback and decide how to adapt their approach. The case notes that “through adaptive management, local partners and public and private sector actors have seen how their feedback can be used” to improve outcomes and strengthen capacity.

The case from Honduras also shows the importance of engaging with the private sector, another pathway to support self-reliance described in the paper on CLA cases and self-reliance. This case from India relates to that theme as well. The case describes a sustainable, private sector solution for transporting specimens to laboratories for timely tuberculosis testing. The case notes, “through collaboration with the government and private sector, KHPT demonstrated a way to bridge the gaps in the system through sustainable innovation.” Both the government’s health system and the private sector’s transportation system were strengthened in enduring ways, and the program was transitioned to the Joint Effort for the Elimination of Tuberculosis, funded by the Global Fund to Fight AIDS, Tuberculosis and Malaria.

Mobilizing domestic resources to support programs over the long term, one of the key approaches under financing self-reliance, is an important pathway, and one that is well illustrated by this case about the Sabal (Sustainable Action for Resilience and Food Security) program in Nepal. With funding from USAID’s Food for Peace program, Save the Children worked with a number of community groups, including farmers’ groups, village savings and loan associations, and village animal health workers groups to improve food security and resilience in targeted districts. Using CLA approaches, Sabal staff developed an assessment to understand these groups’ capacities, and based on that information, the program provided targeted coaching and assistance and worked to connect higher capacity groups with domestic resources. Last year, 41 municipalities allocated over US$9 million from their own budgets to the groups’ activities, and as the submission notes, CLA approaches have “actually facilitated stronger partnerships and buy-in overall.”

Finally, the analysis paper notes a fourth pathway to self-reliance, involving strategic engagement with key counterparts at one or more levels of the partner country government. This case from Uganda’s Regional Health Integration to Enhance Services in Eastern Uganda describes how USAID and the implementing team worked with local governments and communities to develop a shared goal of eliminating HIV in Eastern Uganda. Teams took a structured approach to learning by reflecting regularly on performance data, and they adapted by trying different approaches to increasing HIV testing and counseling and shared their findings. As a result, “communities were better equipped with information that enabled them to address social barriers that were hindering accessing and adapting healthy behaviors” and more clients were tested and treated for HIV.

These four cases are particularly good examples of CLA approaches that support the pathways to self-reliance, but the analysis paper notes that there are many other submissions to the 2019 case competition which feature approaches that support self-reliance. As the paper concludes, there is “good reason to be optimistic about the fact that many programs already seek to strengthen host country capacity and/or commitment.” We hope to continue hearing about them and learning from the experience of USAID Missions and implementing partners about effective ways to support self-reliance.

Please contact USAID at [email protected] to share your experiences or evidence. You can also learn more about the Journey to Self-Reliance at https://www.usaid.gov/selfreliance.

 



[1] The papers in this series summarize a landscape analysis conducted by USAID to better understand how existing evidence can contribute to addressing the SRLA learning questions. Initiated during the developmental stages of the SRLA, the aim of this landscape analysis was to conduct an extemporaneous and iterative examination of how concepts related to self-reliance are discussed in existing international development literature. The paper series finds that capacity and capacity strengthening are complex and contested terms with practical, real-life implications for how development practitioners approach capacity strengthening with local partners. The paper series is organized according to four key inquiries related to defining capacity, etc.

Towards a More Thoughtful Approach to Decision-Making

Apr 1, 2020 by Monalisa Salib Comments (0)
COMMUNITY CONTRIBUTION

Effective decision-making - how it’s done and what decisions come to be - is a critical component of effective management. And it’s essential for adaptive management - you can’t be intentional about adapting if you don’t take time to think about what you’ve learned, make decisions based on that, and then take action (see more on this here). In creating the guide to hiring adaptive employees, I came across a variety of sources that highlighted the importance of decision-making and the concept of decision quality. In the guide, decision quality became one of the key competencies of an adaptive employee: the ability to make quality decisions based on a mixture of analysis, evidence, and experience.

And if I’m being honest, I am in a management role and often feel a little unsure about my decisions. I realized I needed a personal checklist or approach to help me become more thoughtful in my decision-making, in the hopes that this would make me a more effective leader and adaptive manager. Here’s what I’ve been thinking about:

Before even making a decision, I realized I need to ask myself at least four key questions:

  • Is there urgency? I tend to move quickly, and that’s not always a good thing. First consider: is there real urgency? When does the decision need to be taken in order to avoid delays or the loss of momentum? If you have the luxury of some time, then take advantage of it, even if it is just a couple of days. At the same time, if there isn’t clear urgency, but something is still a priority, do not let decisions languish. You will lose momentum and motivation, and more time is not correlated with better decisions. Ultimately, considering this question informs what your decision deadline should be. Work back from that to know how much time you have to gather more information and think through alternatives.
  • How risky is this? Riskier decisions need to be considered more carefully. And risk is multi-faceted - consider not just financial risk but also risk to important relationships, to achieving our objectives, to our reputation, etc. (USAID’s risk appetite statement has a great list of various types of risk.) Part of CLA is figuring out ways to test new approaches while minimizing risk. But before making a decision, figure out what’s at stake. Is it really serious, or is it the case that regardless of the decision you take, the repercussions would be minimal? If the latter, move faster. (See this great framework from Adam Grant on decision-making and risk.)
  • Should I even be making this decision? Sometimes a staff person comes to you with a decision that needs to be made, but is it really yours to make? Sometimes it is, but we can easily forget that it’s important to give staff autonomy to make decisions about their work. Enabling decisions to be made as close to the work as possible can increase autonomy and overall employee satisfaction with their job. Don’t forget to push back if your staff should really be deciding. Instead of deciding for them, you can coach them through the decision by asking good questions: What are the advantages and disadvantages of going one way or another? Can you think of any alternatives to the options you’ve already established? Who has experience with this situation that you could learn from? (See below for some considerations for potential coaching questions.) On the other hand, if you determine that the decision does rest with you as the leader, then own that, don’t punt, and communicate how the decision will be made to manage expectations.
  • Who else should be involved in making this decision and how should we decide? Let’s say you should be involved in making this decision - are there others who need to be involved as well? Think about people deeply affected by the decision or who need to act on the decision. What kind of decision-making approach is appropriate for the level of urgency and risk? In development, we tend to default to a consensus-based model. I’m generally not a proponent of this approach. Sometimes it makes sense because you really do need everyone bought in. But it can be time-consuming, cause delays, frustrate those on different sides of an issue, and result in all kinds of compromises so the decision is diluted or meaningless. One of the first questions I ask USAID clients is “how are you going to decide this?” Is it a consensus model? Is it staff providing alternatives that leadership decides on? Is it a majority vote? All of these models could make sense depending on the situation, but you need to be intentional and transparent about what process you will use.

Now, you’ve determined that you are the person who needs to make the decision and that you at least have a couple of days to do so. Let’s say there’s medium risk. Here are some other considerations to help you get to a quality decision:

  • Is there an evidence base that can inform this decision? Evidence base refers to existing literature (grey included), data, analyses, assessments, evaluations, and even reflections on prior experience. Are you familiar with the evidence base? Do you know someone who is and can point you to useful resources? Sometimes you are charting new territory and there is very little evidence to help guide you or the evidence that exists is mixed and inconclusive. When you find yourself in this situation, make sure you plan to share out your experience so that others can benefit from the evidence you’ve generated the next time they face a similar decision. See the graphic below for options depending on the status of the evidence base.

  • What do I know based on my personal experience? Has a similar situation happened before? How did you handle it? Which factors were the same or different? What were the outcomes? Ultimately, what can you apply from your previous experience to this decision? Access your personal wisdom. 
  • What can I learn from others’ experiences? If you haven’t faced a similar situation or experience, what can you learn from others that have? A common tool that we use on LEARN is a before action review - this is a simple approach to ask others who have tried something similar about what happened, what worked, what didn’t, and what they would do differently next time to inform your decision. 
  • Are there alternative options that need to be considered? There are rarely only two options. Consider what else could be possible before making a decision. To get your creativity flowing, ask yourself questions like: if there were no restrictions, what would I do? Or if I had a magic wand, what would I do? If money and resources were not an obstacle, what would I do? Or, if it’s a year from now and things worked out, what decision(s) led to that point? Sometimes this opening can get you to new ideas and options, even if your authority or resources are limited. Consider this and other decision-making insights from the Heath Brothers.

You’ve considered these questions and you’ve made a decision. Now what?

  • Document your decisions and the rationales. There is a phenomenon called hindsight bias which refers to how we make sense of decisions after the fact. It’s important to remember why you decided something and, per the below bullet on reflection, refer back to those documented reasons so you don’t succumb to hindsight bias yourself. 
  • Share the decision and rationale with those affected. If they weren’t involved directly in making the decision, those affected by it deserve to know what was decided and why. For management reasons, perhaps not everything can be shared, but share enough so you are communicating your leadership values along with the decision.
  • Reflect on the decision-making process and your rationale. After there’s been time to see the results of the decision, evaluate your own decision quality. I would spend less time on whether it was a good or bad decision (again, because of hindsight bias, we can often rationalize prior decisions, so it’s not a great use of time). Instead, reflect on the process and the justifications you used for the decision. Was the process a good one? Were the justifications you used accurate and in line with your personal and your organization’s values?

I hope these considerations are helpful to you in your decision-making journeys. Leave us feedback on how you approach decision-making, what you like, or what I may have gotten wrong!

Listen Carefully, Tread Lightly, Adapt Quickly: CARE's Approach to Adaptive Management

Mar 10, 2020 by Emily Janoch Comments (0)
COMMUNITY CONTRIBUTION

My first job in development was as an “Institutional Memory Specialist” (a fancy title to avoid calling me “intern in charge of finding paperwork”) in Bamako, Mali. On my first day, my boss handed me two documents—the proposal and the log frame—and said, “Here’s everything you’ll need to know to get started.” Here’s the problem: by then, we were 4 years into a 5-year project. If nothing has changed in your project since you wrote the proposal, you probably haven’t done a great job of adapting along the way.

It’s easy to understand why adaptive management matters. No matter how much we plan, situations will always change as we’re trying to do our work. That’s especially true for organizations like CARE, which is fighting poverty with some of the most vulnerable people in the most challenging contexts in the world. And it’s only going to get more true, as 80% of the world’s poor people will be living in fragile, fast changing, and unpredictable contexts by 2030. Finding ways to adapt to those contexts and continue to support people in a changing world is the only way to truly achieve our mission. It’s the only way to have lasting impact for people in need.

The challenge is often that it’s easy to say we should do adaptive management, and it’s harder to actually do adaptive management. Too often, project teams—as well as leaders, managers, and donor representatives—set the proposal as a north star from which they cannot deviate. Change is hard. Rapid change of complex projects in a chaotic context is harder.

Image: Diagram showing CARE's tips for Adaptive Management

Change is hard, but it’s possible. In reality, my first boss in Mali was an amazing adaptive manager. He made little tweaks along the way. He knew how to read the context and figure out what happened next. He knew how to do it, but he struggled to explain how it worked. And he never wrote it down.

So how do we make it possible for people who do adaptive management to articulate it effectively so that others can follow along? How do we make it easier for those who are learning to build in practical steps that make adaptation possible? How do we ensure that adaptations are based on the best possible evidence given the need and the timing?

To meet this need, CARE has just released Listen Carefully. Tread Lightly. Adapt Quickly: CARE’s Approach to Adaptive Management. Drawing from learning and approaches in the CLA toolkit, from academic research on experiential learning, and from DFID and Overseas Development Institute (ODI) work on adaptive development, the document combines CARE’s Tips for Adaptive Management—a series of practical steps to build into a project cycle—with concrete case examples of how others have handled the work. For each tip, a few examples illustrate how projects have implemented that suggestion and the results they have had.

We’re complementing the paper with a series of more in-depth case studies. As we publish more, you can find them all here. We’re excited to share and see how people can use these tips to transform their programming. Let us know if they are helpful, or if there’s anything you would change.

Four Years Later: How Our Community Has Been Inspired by the CLA Maturity Tool

Mar 6, 2020 by Monica Matts & Monalisa Salib Comments (0)
COMMUNITY CONTRIBUTION

About four years ago, we shared the Collaborating, Learning & Adapting (CLA) Framework with our USAID Learning Lab community and started using the associated CLA Maturity Tool with USAID Missions and implementing partners. For those who aren’t familiar with the tool, teams and operating units use it to self-assess their CLA practice and plan ways to improve that practice, in service of greater organizational effectiveness and better development outcomes. It is a tool that models CLA in how it is used:

  • Teams come together to discuss their practice using a set of cards (see image below) that describe a spectrum of maturity for each sub-component outlined in the CLA Framework.
  • Team members learn from each other’s perspectives and practices and gain a deeper understanding of their team.
  • They use that information to adapt how they work. They decide on key priorities, those that would be most beneficial to their work, and determine an action plan for making those ideas a reality.

Image: Example of the CLA Maturity Tool Spectrum for Continuous Learning & Improvement (Not Yet Present to Institutionalized)

Since the tool’s creation, we’ve witnessed how the development community has been inspired by and innovated on many of the concepts and approaches from the CLA Maturity Tool. Several teams have created new tools that speak to specific organizational needs. Here are a few of the inspired tools that we know of - and let us know if we’re missing some!

  • Chris Collison, a consultant who has worked with USAID’s Bureau for Policy, Planning & Learning and USAID LEARN, was inspired to create a more gamified version of the tool in his work with the UK’s National Health Service (NHS). He writes, “... the CLA cards had inspired... my project to build an assessment model about ‘quality improvement’, and then gamify it...People lay out their position, discuss differences, vote on priorities...and then find the card which describes their target, and flip it to access a QR code, which links to a shared folder of knowledge resources and connections to help them reach out and progress. As more and more hospitals in the UK use the game, we will create a ‘River Diagram’ to help them find people they can learn from. I’m convinced that being able to hold in your hand a description of ‘what good looks like’, makes all the difference - and I’m grateful to the CLA team for being such a good practice to learn from.”

 

Image: Prototype of the quality improvement self-assessment and action planning process designed by consultant Chris Collison for the UK NHS 

  • Mercy Corps created Adapt Scan (soon to be released) after learning about the CLA Maturity Tool from USAID LEARN. We carried out a Before Action Review with the Mercy Corps team planning to develop and pilot Adapt Scan and they used our experience to directly inform the creation of their tool. Adapt Scan is based on Mercy Corps and IRC’s adaptive factors framework and, similar to the CLA Maturity Tool, is used to self-assess and action plan on the adaptive factors that would enable more effective programming. 
  • USAID’s Office of Forestry and Biodiversity is developing their own “Biodiversity Programming Maturity Matrix.” The tool is intended to communicate standards for implementing high-impact biodiversity and integrated development programs and to provide a framework for identifying good practice that aligns with the Program Cycle, USAID’s Biodiversity Policy and the Biodiversity Code. Like the CLA tool, the Biodiversity Maturity Matrix describes a spectrum of practice along certain components of the CLA framework like Collaboration and Theory of Change as they apply to biodiversity programming. The tool is meant to be used by USAID staff for self-assessment and planning purposes. The Office of Forestry and Biodiversity expects to begin testing the tool soon.
  • USAID’s Center for Excellence in Democracy, Rights & Governance (DRG Center) has also adapted the CLA maturity tool to create a draft DRG Integration Maturity Matrix, which it has used during its past two annual DRG integration trainings for USAID staff. Similar to the way we use the CLA Maturity Tool, the DRG Center’s Cross Sectoral Programs (CSP) Team has used the DRG Maturity Matrix to help training participants identify the degree to which their missions or bureaus integrate DRG concerns and approaches into their work. The Matrix also has encouraged staff to focus on opportunities and challenges to DRG integration by asking questions that help identify both programmatic and work process entry points for cross-sectoral work. CSP is in the process of refining the tool for the next DRG Integration training scheduled for June 2020.

What we find most encouraging about these tools is that they:

  • Model CLA in how they are used. Similar to the CLA Maturity Tool, these tools “walk the talk” on CLA. Team members collaborate by coming together to reflect on their existing work practices and determine how to improve and adapt in service of better outcomes. The process for using the tool is just as important, if not more important, than the tool itself.
  • Rely on self-assessment to create a sense of ownership for the results. Many organizational assessments are completed by management consultants who are external to the team. They may interview people and write a report with recommendations. In contrast, a self-assessment approach allows a team to decide on focus areas and agree on priorities and areas for improvement.
  • Encourage incremental and realistic change. Rather than suggesting large-scale reform, these tools encourage choices based on the teams’ needs, context and available resources. This approach allows teams to start with smaller-scale, incremental changes that are easier to embrace and take action on, in order to build momentum for larger reform.
  • Take an inclusive approach to participation. These tools give staff of all types and in varying positions an opportunity to contribute to the self-assessment. All voices and perspectives are valued with these tools.

In our work with USAID Missions, the CLA Maturity Tool has proven to be one of our team’s most helpful tools, and the self-assessment and action planning process that it supports is one of the most appreciated. Seeing how other teams and organizations have iterated on the tool, or developed something similar to suit their needs, is a gratifying development and shows how applicable this process is across sectors and disciplines.

ResilienceLinks: A New Knowledge Platform for Global Resilience

Mar 3, 2020 by Center for Resilience Comments (0)
COMMUNITY CONTRIBUTION

Resilience – the ability to manage through crisis without compromising future wellbeing – is increasingly recognized as an essential part of global development. Droughts, flooding, severe price fluctuations, and other shocks and crises threaten hard-won development gains and cost millions of dollars in humanitarian assistance each year. As a changing climate makes extreme weather events more intense and unpredictable, it is more urgent than ever that we understand how to effectively build individual, household, and community resilience.

That’s why we’re excited to announce the launch of ResilienceLinks, a new global knowledge platform for humanitarian and development practitioners. With resources and event announcements from USAID as well as other donors and implementers, ResilienceLinks is a premier source of information that coordinates with and complements other USAID-sponsored knowledge management platforms. The platform features:

  • Evidence and analysis: Browse reports, studies, analyses, and other resources from resilience practitioners around the world. Search by topic or country to learn more about resilience building in places and on issues important to you, and contribute your own evidence.
  • World map: Navigate the map and explore resources for resilience focus countries and all countries working on building resilience and resilience capacities.
  • Resilience topics overview: Preview how sectors and topics like water, health, social inclusion, livelihood diversification, gender, and many more intersect with and impact resilience.
  • Events and training: Stay up to date on upcoming resilience-focused events and training and catch up on events you missed.

To build global resilience, we need humanitarian and development practitioners around the world to share learning and knowledge. We need evidence and data on what works and why it works so that we can efficiently, effectively, and quickly meet the pressing need to build resilience. We need to know what works best in a given context and how to adapt that approach to other contexts. We already have a knowledge base that includes many important findings, such as:

  • Evidence from Bangladesh shows that removing capital constraints to migration can have positive impacts on seasonal hunger and well-being. An experimental study found that cash or credit travel subsidies induced more households to migrate, and migrants earned US$110 on average at the destination and saved and carried back about half of the income. The families of these migrants consumed 600 calories more per person per day, raised their per-capita expenditures by 30 percent, increased protein consumption by 35 percent, and spent more on child education. In doing so, they effectively eliminated the lean season. In terms of value for money, the same amount of food in the form of food aid would cost five times as much.
  • Mercy Corps examined the role of gender and social inclusion in understanding vulnerability and resilience and identified several best practices, including partnering with local organizations that focus on often-excluded groups; incorporating the perspectives of often-excluded persons in program design, governance, and decision-making; and providing gender equity and social inclusion training for program and partner staff.
  • Bonding social capital and community groups are important drivers of resilience. USAID/Zimbabwe undertook a resilience measurement initiative to determine what makes a group effective and sustainable and found that psychological safety - a shared belief that the team is safe for interpersonal risk taking - is one of the most important and fundamental aspects of a well-functioning group, even more than individual qualifications.

Every day, we are learning more about what works best in building global resilience, and your contributions are essential to continuing to build this evidence base. Head over to ResilienceLinks and explore the resources, country profiles, and topics, and contribute your own resources and events!

Learning Agenda Playbook shares ‘how’ of utilization-focused approach

Feb 19, 2020 by LAB/EIA and USAID LEARN Comments (0)
COMMUNITY CONTRIBUTION

  • Are you thinking about, or actively involved in, planning or implementing a learning agenda effort?
  • Do you support learning and adaptive management within your organization?
  • Are you interested in options for intentional and systematic learning methods?

Check out this new learning agenda resource from the U.S. Global Development Lab’s Office of Evaluation and Impact Assessment (Lab/EIA) and the USAID LEARN contract!

Image: Learning in the Lab cover image

 

What is the Playbook?

Learning (in the) Lab: A Utilization-Focused Learning Playbook is designed to share with our colleagues the tools and resources we’ve used to design, develop, implement, and iterate upon a bureau-wide, utilization-focused learning agenda called the Lab Evaluation, Research, and Learning (ERL) Plan. A process, as well as a series of products, the ERL Plan was designed to strengthen the Lab’s ability to learn, and continuously improve its programs, operations, and strategy. Initially facilitated by EIA, the Plan is holistically informed by—and serves to further institutionalize—strategic learning and adaptive management through collaborative efforts across the Lab. This work also contributes to the evidence base for the Agency-wide Self-Reliance Learning Agenda - an effort to support USAID as it reorients its strategies, partnership models, and program practices to achieve greater development outcomes and foster self-reliance with host country governments and our partners.

Through this Playbook, we hope to inspire and equip others to bring learning and evidence utilization to the fore in their own work by providing practical examples of how we applied innovative thinking from the disciplines of human- and user-centered design, organizational development, and adult learning to operationalize the Lab ERL Plan. This document also serves as a compilation of relevant tools, resources, and insights developed by others at USAID whose knowledge and experience with learning agendas directly benefited the Lab’s learning work. 

What is a utilization-focused approach to learning? Isn’t this just Collaborating, Learning, and Adapting (CLA)?

Yes…

Collaborating, Learning, and Adapting (CLA) is a set of practices that, when systematically planned and adequately resourced, help us improve the effectiveness of our work. Learning Agendas—a set of broad questions directly related to the work that USAID conducts which, when answered, enable us to work more effectively and efficiently—are one of the ways that operating units across the Agency practice strategic collaboration, continuous learning, and adaptive management. This Playbook capitalizes on existing tips, guides, tools, resources, and examples covering the general ‘what’ and ‘why’ of learning agendas (including the CLA Toolkit), while expanding on the ‘how’ of designing and implementing learning agenda processes and products to better enable the use of learning and evidence in decision making.

...and

The Playbook and the utilization-focused learning approach were developed in response to a common challenge identified during a cross-bureau CLA learning agenda effort. Learning agendas are meant to facilitate the knowledge cycle, in which knowledge is 1) generated, 2) synthesized and translated, 3) disseminated, and 4) used to adapt our strategies, programs, and practices. Too often, however, learning teams are asked to focus their resources and efforts at the “beginning” of the cycle, investing in evidence generation and synthesis, and then run out of steam when it comes to translation, dissemination, and eventual use. We learned that sometimes, by the time the evidence from our respective efforts had been generated and synthesized (and even disseminated), the opportunity to use it had been missed.

To attempt to bridge this gap, in developing the Lab ERL Plan, we asked ourselves: “How might we design and implement a learning agenda that completes the full knowledge cycle, improving the Lab’s ability to utilize evidence in decision-making?” The initial answers to this question—based on successes, failures, and many iterations on the Lab ERL Plan process and products—are what we have documented here as a utilization-focused approach. While we are continuously learning, and refining this approach, we hope that by offering recommended steps in a ‘play-by-play’ guide format, we will help others know how to get started or otherwise adapt their learning efforts to enable the full knowledge cycle.

The Playbook breaks down approaches to integrating a utilization focus throughout a learning agenda into four phases (Define, Discover, Design, and Deliver), each comprised of action steps. While incorporating a utilization focus beginning at the earliest phases of learning agenda development is ideal, readers can engage with the phases or action steps most relevant to their work in the order most useful to them.

 

Image: Graphic depicting the process for utilization-focused learning, including the Define, Discover, Design, and Deliver phases

 

Check it out, tell us what you think!

At this stage, we are very much interested in feedback that can improve the content and design of the Playbook. For example - put yourself in the shoes of someone tasked with designing or implementing a learning agenda:

  • How might the Playbook help you achieve this task? What else is needed?
  • Is the Playbook written and designed in a way that makes sense? What content might make it more useful, or how might a different design/format be more usable?
  • The Playbook contains a lot of information. How might we simplify it?
  • Does the Playbook inspire you to pursue a utilization focus? What feels missing?

We would love to hear your insights on these questions, along with any other comments or questions you might have on this early prototype of the Playbook, in the comments below!

 

How To Foster Local Ownership for Improved Development Results: Collaborate, Learn, Adapt

Jan 24, 2020 by Kristin Lindell Comments (0)
COMMUNITY CONTRIBUTION

Imagine you’re tasked with designing a program that improves public service delivery to local communities in Uganda. You know that you need to engage local stakeholders for this program to be effective—without government buy-in and engagement, for example, service delivery changes will not be sustained. How would you go about doing it? 

When faced with this challenge, one implementing partner designed a platform, UBridge, to facilitate increased dialogue between local communities and their local leaders. By reporting and responding to incidents of compromised service delivery, community members and the government were empowered to solve their own problems. As a result, the government constructed boreholes for better water access, improved roads and mended bridges for access to markets, and added more classrooms to decrease crowding at schools. 

This offers an example of how local ownership can facilitate and sustain development results, but actually achieving local ownership over development efforts remains challenging, and raises an important question for international development donors and practitioners: how can we use our roles to intentionally and systematically foster local engagement and ownership? A recent review of the evidence on collaborating, learning, and adapting (CLA) and local ownership offers some answers: 

  • Local engagement leads to local ownership and, ultimately, improved development outcomes. Evidence from an in-depth case study on the Ebola crisis in Liberia, an in-depth case study on Community-led Total Sanitation in Zambia, and several CLA case stories demonstrates that when local stakeholders are engaged in defining development challenges and solutions via program activities, the results are more relevant to local needs and opportunities and are more effective than traditional donor-led approaches. This greater relevance and effectiveness in turn increases local stakeholders’ commitment to and engagement in identifying sustainable solutions to community challenges.
  • Bottom-up approaches contribute to better development results. A recent study analyzing about 10,000 development projects found that aid agencies achieve better results when using bottom-up approaches that empower frontline workers and organizations to make decisions based on their local knowledge and relationships. 

We have also wondered what enables local ownership. Our synthesis of the evidence revealed these enablers:

  • Donor flexibility:  An analysis of seven case studies of development initiatives conducted by the Overseas Development Institute (ODI) found that features of the donor agency environment, such as flexibility and transparency, were significant in facilitating the success of politically smart, locally-led development initiatives.
  • Leadership support: In both of the cases explored in our in-depth case studies, active participation from a diverse range of leaders contributed to the overall success of the interventions.
  • Openness: None of the examples mentioned in the briefer would have been successful without donors and implementers actively convening local stakeholders and co-creating solutions with them.

Conversely, what are some of the barriers to local ownership we observed in the evidence?

  • Limited time for staff to pause and reflect on how to make improvements in the intervention.
  • Coordination challenges as the scope, scale, and speed of the Ebola crisis created a chaotic environment and made coordinating response efforts challenging.
  • Distrust and resistance to government and outsiders due to one country’s recent history of civil war, which left many communities distrustful of government authorities and suspicious of messages related to the program.
  • Sustainability challenges as one national government has been unable or unwilling to continue devoting resources to a Community-led Total Sanitation program after donor funding ended.

You might be wondering, “What CLA approaches can I use to increase local ownership and improve development results?” Try some of these: 

  • Identify which actors, including from among local stakeholders and the private sector, are most critical to achieving shared objectives. 
  • Facilitate conversations among those critical stakeholders to identify shared interests, co-create programming, and develop stronger relationships.
  • Generate and use evidence from, and in collaboration with, stakeholders that would be most useful to decision-making while also working with partners to strengthen their capacity to generate, share, and use evidence.

And lastly, don’t forget to tell us about your experience using CLA approaches to foster local ownership by leaving your comments below.

What is the Work?

Jan 21, 2020 by David Jacobstein Comments (0)
COMMUNITY CONTRIBUTION

In promoting international development, what is the actual work we do?  

How is that work understood?  

Do our words truly reflect the work we want to see? 

Could a shift in this language help us to achieve meaningful changes in our work?   

Recently, at the Global Partnership for Social Accountability’s annual forum, I was part of a unique discussion around a gap practitioners see between the work they do and the way their work is understood. Specifically, the organizations advancing this work in developing countries tended to see their role as fostering trust and social capital, and using this capital to build collaborative approaches to improving the public delivery of goods and services. To be sure, they often pressed for greater follow-through on public commitments, but this idea of “helping government deliver by building connections with constituents” is very different from the logic embedded in the original idea of social accountability, which by its nature is seen as a way for us donors to help people hold government to account. They also viewed their work in communities as part of a longer-term effort connected with changes at the national level, rather than purely community engagement. A more accurate label might be “collaborative engagement for delivery” (it resonates with this presentation by Rick O’Sullivan).

However, this is not the rationale against which many donors have funded social accountability through the years. Donors in large measure have funded social accountability organizations to “hold government to account” for service delivery while also working on service availability and quality. They see these organizations as ensuring that service providers behave in ways that donor plans and investments expect them to, and punishing them if they don’t. That gap is now creating a challenge for social accountability organizations, as some robust empirical research finds that social accountability programming can make a critical difference, while other studies find null results. As discussed elsewhere, what the research finds seems to depend on whether it builds from the strategies taken by change agents or assumes the “catch and sanction malfeasance” logic that often sits behind donor funding.

This led to a soul-searching discussion over how much to confront donors with the reality of what the work actually is, versus continuing to accept the terms on which organizations receive funding. This includes whether donors are ready to move on to investing in research that builds theory around the strategies of local change agents, rather than testing theory derived in journals in the global north.

This led me to think how important the language we use to describe the work of development can be to how we design and manage development programming. In particular, for complex social change efforts, it seems to me that better ways of describing work can link with better efforts at measurement and improved performance. 

Why is this? Well, it turns out that the language we use to describe work is internalized into the logic and measurement of a program, even if we all understand that other considerations exist (an old idea, it turns out). 

For example, for many years, donors working to address recurrent crises acknowledged the inevitable return of certain disasters and the need to strengthen the ability of communities to cope outside of the confines of a disaster-response paradigm. However, I would argue that it is really only after the donor community started to define “resilience” as an outcome of interest that we could change our collective behavior. Suddenly, we weren’t just acknowledging the need to respond to repeated shocks, but could put programming against an idea (and slowly figure out how to measure the right results) to achieve that outcome. The shift from a consideration in disaster programming to an area in its own right came first in language and spread as this allowed the concept to anchor programming, including reporting and monitoring. A similar trend happened in the evolution from agricultural extension to value chains to market systems, where the new frameworks or ideas empowered different types of programming to be undertaken and measured. And a similar challenge confronts social accountability now.

I was re-reading Dan Honig’s seminal book Navigation by Judgment on a long flight recently, and it flags a second way in which the language defining the work we are doing matters. For those not familiar with it, Dr. Honig’s research shows that for certain types of development challenges (generally, those that involve interaction with complex social change - so most of them) it is more effective to navigate by judgment rather than by top-down accountability to predetermined metrics. In other words, delegate decisions to frontline agents close to the action who have the best information and tacit knowledge to make course corrections. And again, I think the language we use to define our work (for example, in accountability) and the intermediate outcomes we aim to reach can play a huge role in structuring whether judgment is implicit to achieving those results.

For example, if you are seeking to improve health service quality, it is more necessary to the work to have someone close to the community in question making adjustments. This is because an idea like health service quality depends on the perception of citizens, and cannot be assured by delivering objectively countable items such as particular drugs or kits. It focuses efforts at learning and adapting programming down to the local level, rather than learning above the project and simply directing it to do what is required. A similar evolution holds within the education sector, where an emphasis on the countable (teacher and student presence) has given way to an emphasis on outcomes requiring more judgment to reach (learning performance), as described by Lant Pritchett in Schooling Ain’t Learning.

If you accept my argument that the language we use to describe our work matters because it bakes in both our reliance on judgment and tacit knowledge, and our ability to program against “correct” intermediate results, what are the implications? There are at least three that I think are important for development work across a number of sectors:

  • In terms of theories of change and how we learn from evaluations and research, we need to shift the mindset from a theory testing to a theory building approach. This implies a very high value to rigorous empirical work, including experiments, but only those oriented toward discovering mechanisms and variables that matter on the path from inputs to impact - in particular, finding those intermediate outcomes that can become the next “resilience” or “health quality” and what might go into them. It shifts the key question in an impact or performance evaluation from “did an intervention work?” to “why did the intervention sometimes work better?” Part of the process is also to get beyond use of adaptability or flexibility and toward articulation of the right outcomes, so that our adaptive implementation is well grounded. These can be realist evaluations or part of RCTs; the distinguishing factor is the framing behind the learning rather than the technique. Some donors are taking up the torch with an emphasis on middle-range theory building. However, more can be done.
  • Our ability to define and assess meaningful intermediate outcomes is essential. In the education example above, the availability of good cross-country data on learning outcomes has made it possible to anchor work more directly to learning, rather than substituting education access as the outcome of interest with a variety of caveats. In my own democracy, rights, and governance sector, I see huge potential around ways to measure different forms of social capital or the strength of certain norms related to bedrock impacts like state legitimacy or political competition. Theory building (rather than testing) requires sharing and discussion of anchoring intermediate outcomes and data sources that allow programming to improve, and for us to seek to understand our work differently. Investment into measurement of intermediate outcomes seems to be a valuable public good that donors can help to generate. This will then open space for new language and new approaches toward our intended impacts.

If there is one idea that I think offers the greatest opportunity in terms of improving our language, it is to better incorporate the idea and language of probability into our description of “what is the work.” 

For a lot of our programming, we are trying to position important reforms to have a greater chance of succeeding - not only on paper, but in reality. This can range from changes in public financial management rules, to improved protection of certain key rights, to ending fertilizer subsidies, or to task shifting for health care. Yet all too often, our desire to present certainty pushes us to define these areas of work in simple steps, focusing on the visible progress of new laws or policies, or specific numbers of people trained or engaged, with theories of change to fit, leading to a focus on form not function.  

When we're trying to do big things, we have to accept that even the BEST intervention might not work, or at least might not achieve major impacts in a short time horizon. For example, failing to quickly make a justice system more inclusive does NOT mean it wasn't money well invested, particularly if we have positioned actors or seeded ideas in ways that make inclusive justice more likely over the five years after our project ends. The idea that we should gauge progress similarly in building roads and building justice systems makes no sense, but without a language to capture the difference, distinguishing them remains impossible.

I believe we would open huge space for our staff and partners if we started to change the language of these programs from a stepwise progression of linear change (pass law/pass implementing legislation/adopt policy/train workers/implement) to results of “make it 20 percent more likely that a budget will be shared publicly” or “have 25 percent more initial TB screening delivered by community health workers rather than hospitals.”  This change would allow us to focus on discovering how to make this happen, rather than always pursuing the easy first steps on paper. 

Perhaps a program to improve access to justice would start to operate in the realm of promoting certain norms, rather than direct training or outreach. Perhaps a program to address natural resource management would focus on intermediate markets and the incentives they create more than direct community engagement. 

Our programming space would be less limited by sub-sector and more defined by local knowledge of the place we’re working in. It would also transform our accountability for our portfolios from numbers of countable outputs, to defensible claims of progress. This would include more emphasis on our attention to the context and the integrity of our engagement and learning, consistent with our enterprise risk statement and with systems thinking. Assessing the probability of transformational change, rather than tracking specific steps of incremental change, offers a different language more appropriate to the realities of ambitious programming.

Next time you sit down to write a project or activity description, take a moment to think about how you are describing the work to be done. Keep your topline objectives the same, but see if you can change the language you use to describe the work to get there - build in a different intermediate outcome, or use a change in probability of the larger impact happening. The more that designers of projects and scopes can be both honest and clear about “what is the work,” the better we collectively can do that work.

 
