Lab Reflection: The Year of Managing Adaptively

Apr 9, 2021 by Learning Lab Team
COMMUNITY CONTRIBUTION

Over the past year, many of us have relied on a plethora of adaptive management guidance and tools – many more than we might have reasonably anticipated – to meet the unique challenges of development programming during a global pandemic. In this blog, we’ve pulled together highlights of the insights the Learning Lab community has shared over the past year about how it has been managing adaptively. Taking stock, we appreciate the breadth of information and knowledge that has been collectively circulated and expanded upon.

Managing adaptively, defined as the ability to “[respond] to changes and new information” in an unstable context and environment, is an expansive topic that begs to be made concrete and intentionally applied. That’s why it’s so important to develop and share specific guides, tools, and resources that put the phrase “managing adaptively” into practice. In doing so, we push the boundaries of what we learn.

Some teams use these tools off the shelf, in a plug-and-play manner. Others rely on the same tools as a jumping-off point for thinking through their precise needs and designing a more customized approach around, say, pulling off a collaborative stakeholder workshop remotely or exploring digital tools for monitoring.

This blog provides a quick timeline of, and reference to, resources for managing adaptively collected from across the Learning Lab community. Alongside the Adaptive Management cluster in the CLA Toolkit, which we encourage readers to (re)visit, this assembly of shared learning and experience will be an aid as we continue to confront the challenges of disruptive and unexpected change, now and in the future.

Early Days: USAID and Partners Answer the Call for Information Sharing

In mid-March last year, in response to USAID and many other organizations and teams transitioning to working from home, Learning Matters shared The Ultimate Tip Sheet for Working Remotely. It became 2020’s top featured resource from the newsletter.

The next month, we posted an open call for blogs on CLA and COVID-19, and our partners shared inspiring and unique experiences in managing their programs and activities. As we started to collect some of these shared stories and insights, we also published "Resources for Monitoring, Evaluation and Learning during COVID-19" (the second top feature of 2020), a collection of USAID materials for adapting data collection, adaptive management, and COVID-19 guidance for implementing partners.

The "Implementing Community Contributions to Monitoring, Evaluation and Learning during the COVID-19 Pandemic" page enabled our partners to directly contribute resources as well. This collection has grown to include nearly 25 resources spanning Adapting Data Collection for MEL, Third Party Monitoring, and Evaluations and Research. It includes remote survey toolkits, consideration of recent literature on remote monitoring, digitizing MERL practices, impact evaluations during the time of COVID-19, and many other subtopics.

Summer Stage: Hosting Convenings and Joint Learning

Alongside these written references and tools, it was also crucial to convene USAID staff and partners in a live format. USAID’s Bureau for Policy, Planning and Learning and the US Global Development Lab invited partners to join the COVID-19 Monitoring Resources Webinar, hosted in June. (The webinar transcript, recording, and presentation materials are now available.) The webinar reviewed USAID’s Guide for Adopting Remote Monitoring Approaches during COVID-19, while exploring examples of how Missions are using phone surveys to measure COVID-19 impacts in Ethiopia, building response plans in Nepal, and learning from successes and limitations of remote monitoring in Afghanistan.

Year’s End: Maintaining Momentum for Sharing and Learning

The "Global Learning for Adaptive Management (GLAM) Initiative" page offers a timely package of adaptive management tools and resources published in the fall, with additional resources added at the beginning of 2021. The initiative – a partnership between USAID and the Department for International Development (DFID) – “provides tailored guidance and practical support on adaptive management to practitioners and policymakers, generates high-quality evidence and learning about effective monitoring, evaluation and learning for adaptive management, and acts as a catalyst, champion and convenor to change thinking and practice.”

Throughout the year, many other blogs, guides, and resources were produced and shared by USAID, partners, and international development community members, some of which we highlight below and many others which can be found on the Learning Lab blog.

Featured highlights:

We invite you to let the Learning Lab team know what topics you’ve enjoyed reading about and what you’d like to learn more about, and to submit your blog ideas or drafts on the site or directly to [email protected]

What Kind of Network?

Mar 31, 2021 by Kimberly Ratcliff, Monitoring, Evaluation, and Learning Specialist, International Republican Institute
COMMUNITY CONTRIBUTION

An IRI-supported coordination network works with the state government to improve an anti-corruption law in Nuevo Leon, Mexico. (Photo credit: IRI)

Building and supporting networks – whether for advocacy or sharing knowledge – is central to many democracy, human rights, and governance projects. However, while “network” is a term practitioners often use, it is rarely defined clearly or consistently. At minimum, this undermines results measurement; it can also hinder effective program design and implementation.

To address this gap, the Evidence and Learning Practice within the International Republican Institute (IRI) designed an ex-post evaluation series to better understand networks: how they function and under what conditions they succeed in achieving their goals.

Developing consistent guidance on networks required, first, an Institute-wide definition. My team defined networks as “a group of individuals or organizations that pursue a shared objective and interact with each other on an ongoing basis.” This helped us distinguish what was most relevant to IRI from other things that might also be called networks. For example, some teams are interested in mapping “social networks” to determine ties and relationships in a system or community. We did not include this in our definition because the type of support we provide is not applicable to such a group. Examples of networks that do meet IRI’s definition include groups of journalists who share skills and experiences or a coalition of organizations that work together to conduct a get-out-the-vote campaign.

Even with an Institute-wide definition of “network,” there is a great deal of variation in how networks manifest. To account for this range, we developed a continuum that defines networks based on their objectives. At one end of the continuum are networks whose primary objective is to influence their own members, which we call “support networks.” An example might be a group of women political party members who come together to share knowledge and learn from others’ experiences. Their goal is to share information within the network, rather than influence some actor outside the network. At the other end are networks whose primary objective is exercising influence outside the network. We call these “coordination networks.” If a group of women political party members works together to lobby their leadership for more female representation in party lists, we would consider them a coordination network rather than a support network.

We decided to employ a continuum because networks may be more focused on one objective or another but still have characteristics of both. We also found that networks can shift along the continuum over time. For example, a group of journalists might form a support network to learn from each other, but then some event, such as a media crackdown, might galvanize them to lobby for free speech protections.

IRI's Network Continuum

By identifying audiences, goals and structures, this continuum has helped IRI and partners demystify what we mean by “networks.”  To help staff identify a network’s desired impact, we created a basic results chain, pictured here, with the types of outcomes you would expect to see with different kinds of networks. This results chain can lay the foundation for a measurement strategy that aligns with the network’s goal.

IRI Network Results Chain

This results chain helps visualize the different types of actions we might expect from networks on opposite ends of the spectrum, helping organizations design network-specific measurement strategies. In the past, teams working with a support network might have focused on measuring collective action and concluded the network was not successful because collective action did not take place. By clarifying the goal and expected results, we can better understand that a support network does not have to act collectively to be successful, because its goal is to influence members inside the network. In the same way, teams working with a coordination network might have previously focused on measuring learning among network members and assumed the network was not successful because members reported not learning from each other. As shown in the network continuum, a coordination network does not require members to learn from each other; they may simply each apply their own expertise in a coordinated way to influence an external actor.

IRI teams have found these tools extremely helpful in designing programs that build or support networks to achieve specific goals. Using the continuum in program design sessions has helped teams better articulate the purpose of their network, as they now have a model and vocabulary to conceptualize network-centered interventions. Ultimately, this clarity at the beginning of a project contributes to better planning, design, decision-making, measurement, and learning.

Twice the Participation in Half the Time: After Action Reviews in COVID-19

Mar 26, 2021 by Sani Dan Aoude and Emily Janoch
COMMUNITY CONTRIBUTION

A woman from CARE's Honduras pilot voucher program holds a voucher for a supermarket.

After Action Reviews (AARs) have lots of benefits. They give teams a chance to learn from each other about what worked and what didn’t. They can provide immediate feedback on what needs to change. Done right, they give a chance for lots of stakeholders—participants, partners, governments, donors, implementers—to come together and understand what they need to change to get to better results.

They can also provide documentation to inform future activities. CARE’s policy requires that we conduct AARs after large-scale humanitarian crises—and we’ve been doing that for more than a decade. At the beginning of the COVID-19 pandemic, we were able to pull all of our AARs on response to epidemics to inform our response to the pandemic.

Most people agree that AARs are a great tool, but conducting them can be a challenge. Finding the time and space to focus on them is difficult in an intense and fast-paced crisis. You need great facilitators who can make sure that everyone gets a chance to express their opinions and that the conversation stays constructive. Before the coronavirus, CARE’s recommended tool on AARs for large humanitarian responses called for a 2-3 day in-person workshop with lots of partners.

So, what to do in a crisis like COVID-19, where a two-day workshop is simply not possible? CARE’s Cash and Markets team found a way to do virtual AARs, taking two hours on Zoom to highlight the key components of an AAR. They worked with country teams, partners, and stakeholders in Morocco, Haiti, Honduras, Ecuador, Lebanon, Guatemala and Vietnam to understand what we can learn about cash and voucher assistance in the pandemic.

What did we learn about learning?

  • Making space matters. All of the stakeholders—staff, partners, participants—liked having dedicated space to reflect on what happened, what worked, and how to improve. In the COVID universe, having any group of people consistently excited about a Zoom call is a win.
  • Keep it light and focused. Because it was on Zoom, the team focused explicitly on one area—cash and vouchers—rather than tackling the entire emergency response. Having short sessions on specific target areas can add up to a larger picture of the whole response.
  • Ask a few big picture questions. The team asked three questions: What worked? What could go better? What can we improve for next time? That’s it. Three questions set the tone for a conversation that could be honest about success and failure and generate concrete action steps and recommendations.
  • Connect learning across contexts. By having some consistency in facilitators across all seven AARs, the team could identify system-wide bottlenecks that no one team might have seen. The coordinated facilitation also helped share lessons and approaches between teams—some country teams solved problems others were still working through, so sharing those experiences improved everyone’s work.

What did we learn about cash and voucher assistance?

  • It’s all about the preparation: It’s critical to get partnerships, agreements with service providers, technology, and communications strategies set up early. Ideally, this would happen as part of an emergency preparedness plan (EPP). If it’s not in the EPP, doing it immediately will help determine the success or failure of an approach.
  • Get lots of folks involved: This means working in partnership—not just working with partners. Listening to what everyone has to say and making joint decisions has much better results than treating some organizations as implementers or sub-contractors. Two actors who got specific attention are:
    1. Communities need to be involved in targeting, setting expectations, communicating what's happening, and deciding on what tools and technology will work.
    2. Administration and procurement teams need to be involved early to understand what’s changing from a non-emergency context and how everyone can work together to make sure the project is meeting people’s needs.
  • Communication is key: Teams that had strong communication plans and tools covering a range of actors, including financial service providers, found better results than teams that added communications to their planning later. This was especially true for communicating with communities and participants.
  • Focus on feedback: To be successful, projects need robust feedback mechanisms. They also need to make sure that community members understand how to use those tools, and that staff respond to the issues that come up. All of that makes for stronger, more accountable programs.
  • Understand the context: Building on solid gender analysis and needs assessments was critical to success. That includes understanding what digital tools make sense in the context, and what level of digital skills exists among staff, partners, and communities, so the project can use the right tools and training to get the job done.

What did we change?

AARs are only as good as the actions you take to get better. So, what did we do?

  • Lebanon used their COVID-19 cash AAR to shape their cash transfers in response to the Beirut blast.
  • Ecuador updated the design of projects with multiple cash transfers, so people had more consistent support.
  • The global team changed its technical support, using these lessons to think about what tools and support it could provide to country teams to ensure that responses with cash or vouchers had the best impacts possible. That includes focusing on helping people apply and adapt Standard Operating Procedures. It also helped the team understand operational constraints within CARE systems, where advisors can advocate for change.

Using the CLA Maturity Tool, Virtually

Mar 23, 2021 by Monica Matts
COMMUNITY CONTRIBUTION

Example of the adaptive management subcomponent cards from the CLA Maturity Tool.

Nearly five years ago, USAID and the LEARN contract developed the Collaborating, Learning and Adapting (CLA) Maturity Tool to help USAID missions and offices think more deliberately about how to plan for and implement CLA approaches. In the years since, many USAID and implementing partner teams have used the tool and found it helpful in a number of ways. By engaging in a facilitated conversation using the maturity tool, teams have been able to build a common understanding of CLA and the enabling conditions that support it, generate enthusiasm, and bring collective energy to planning for CLA approaches and practices. We also understand that participants in the CLA self-assessment and action planning experience have found the process fun and engaging, in part because using the tool is a tactile experience that can feel like a game.

The need for and interest in CLA hasn’t diminished during the past year, even though our ability to engage with a physical version of the tool has. As with many other things this year, USAID staff realized the need to pivot from the physical CLA Maturity Tool to an approach that would work for remote staff.

The CLA champions in USAID have experimented with using the CLA Maturity Tool virtually, and we’re happy to report that it works! As with other convenings, using the maturity tool in a virtual setting requires some different approaches than you would use in person. Here are some tips and challenges to watch out for, according to our experienced facilitators:

• The cards translate easily into a slide format. Here is a version that we’ve found helpful. If you have your participants working in Google Slides, they can all be in the deck at the same time, reading through the content of the cards and moving dots or markers to ‘vote’ on the maturity stage.
• While easy to use, having slides function as a working space could lead to groupthink. Participants may, for example, be tempted to watch how others use their dot votes before making their own assessment. In person, we’re able to mitigate this tendency by asking everyone to mark their ‘vote’ at the same time; that’s more challenging in the virtual setting.
• As with most virtual meetings, the loss of visual cues can make facilitation difficult. It can be harder to sense the energy in the ‘room’, to know when participants are ready to contribute, or to draw out those less inclined to speak. Ensuring meaningful and robust participation in the process can be a bit more challenging in a virtual setting.

In some ways, the virtual setting has advantages:

• The chat function can be helpful in eliciting feedback, particularly from quieter participants.
• Using virtual tools can save time and create a more permanent record of the process. For example, after an in-person self-assessment session, someone would need to convert inputs from sticky notes or flipcharts into a written action plan. In a virtual meeting, participants can put their inputs into the plan directly.

For all of the differences between virtual and in-person facilitation of the maturity tool, there are also many similarities. Many of the principles for using the tool apply whether it’s done in person or online -- including the importance of having an experienced facilitator to plan and manage the discussion; engaging all of the participants in the ‘room’ and valuing their contributions; ensuring that you have set aside adequate time for these important conversations; and understanding that the conversation inspired by the tool is more important than reaching consensus on any element of the self-assessment.

Similarly, many of the tools in our toolbox work equally well in the virtual space, with perhaps just a few tweaks needed. Resources like the facilitator talking points and report templates can be useful whatever the setting.

We are excited to see how users continue to adapt the maturity tool to meet their needs, and we hope others consider using it. If your team might benefit from learning more about CLA and engaging in planning around it, this might be the tool for you -- whether you’re working remotely or in the office.

We Can Do Better: Comments at KM4Dev Knowledge Cafe #10 on "Uncomfortable Truths in Development"

Mar 11, 2021 by Stacey Young

Here are my remarks from the November 19, 2020 KM4Dev Knowledge Cafe #10 on “Uncomfortable Truths in Development,” with thanks to co-panelists Sarah Cummings, Ann Hendrix-Jenkins and Kishor Pradhan, moderated by Gladys Kemboi.

I want to say a bit about what we can do differently, and better, specifically as knowledge workers, to address the uncomfortable truths and supremacy models the other panelists have so eloquently critiqued. And in particular, I want to give a couple of examples to illustrate my point that there are alternatives available to us now, today -- we don’t have to wait to do better.

As knowledge workers, whether or not we bring awareness and intentionality to this fact, we grapple with power dimensions inherent in norms and hierarchies around:

1. Types of knowledge and the status of evidence (what “counts” as evidence, what kinds of evidence are valued)

2. Sources of knowledge (whose knowledge is seen as important, how credibility is defined)

3. Engaging knowledge-holders inclusively (whose knowledge is valued in the sense that they get to participate in decisions)

And just as supremacy can embed itself in any or all of these, supremacy can be countered in all of these:

Types of knowledge that are valued and the status of evidence. Instead of embracing a linear continuum in which evidence that is proven using scientific methods is seen as the strongest and best evidence, and experience is seen as weakest or dismissed altogether, we can consider all the types of evidence available, what questions each type is useful for answering, and the particular role for each type -- and draw on them accordingly. As we do so, we can also notice -- and intentionally mitigate -- how the ways that types of knowledge are valued unevenly tend to align with systems of power and privilege. We can ask ourselves: what’s considered “best,” how does that valuation reinforce the dominance of developed-country paradigms, and how does it mute perspectives that come from developing communities?

Sources of knowledge. This is linked to types of knowledge, and gets at whose knowledge is valued as legitimate, as well as the critically important question: Why don’t we routinely begin with the knowledge, ideas, and priorities of developing-country communities?

Engaging knowledge-holders inclusively. We have available to us, and should be drawing upon, a synthesized assessment of our efforts from a large number of people on the receiving end of them, in the form of Time to Listen: Hearing People on the Receiving End of International Aid (free download at the link). This is the product of the CDA Collaborative’s listening project, which engaged 6,000 people in 125 organizations in 20 countries. The book argues strongly for cumulative experience as a type of high-quality -- in fact, essential -- knowledge. And it demonstrates an intentional listening methodology that explicitly mitigates the interviewers’ biases to ensure that respondents’ views are clearly understood and respectfully considered. What happens when you use that approach? Lo and behold, a resounding consensus emerges on what’s needed: to move from an “externally driven aid system” to a “collaborative aid system.” See the table in Chapter 12 for a concise comparison of these two systems, and you’ll see the resonance of so many important debates that have taken place in the aid sector over several decades.

You’ll also see clear, concrete recommendations for how to move from the one to the other. These include:

• Collaborating with “Local” colleagues as drivers of their own development
• Focusing on reinforcing local capacities and existing strengths
• Making decisions collaboratively
• Fitting money and timing to strategy, and not the other way around.

A second example: The End of the Cognitive Empire: The Coming of Age of Epistemologies of the South, by Boaventura de Sousa Santos, goes beyond welcoming in the knowledge of people in developing communities, to combining it with “Western” frameworks to advance substantive change. In just one example, the author describes how in Ecuador, activists combined Western cultural elements of “constitutional protections” and non-Western cultural elements of “nature as the source of all rights” to enshrine the rights of nature in the Constitution. This enabled activists to secure environmental protections on grounds that were already accepted and embraced locally -- a clear instance of strategically leveraging local frameworks for local benefit.

Third and finally, Linda Tuhiwai Smith (author of Decolonizing Methodologies: Research and Indigenous Peoples) and Fiona Cram have articulated a set of Kaupapa Maori principles for research and evaluation. These can be summarized as:

• Build reciprocal, culturally respectful relationships
• Be generous with knowledge and ensure it flows both ways
• Engage with people on their own terms
• Show humility when sharing knowledge
• Respect people’s authoritative knowledge about their own lives
• Look, listen, and then speak -- understand before judging
• Be cautious so as not to abuse or ignore insider and outsider status
• Be familiar -- get to know communities in which you work

There are many more practical approaches, and more specific articulations of them, available to us as we commit to decolonizing aid and countering supremacy in development, working with and through local communities in support of their priorities and in ways that value and foreground their frameworks and knowledge.

Introducing the Newest Member of the Links Family: BiodiversityLinks

Mar 11, 2021 by Learning Lab

Learning Lab is part of the constellation of USAID sites through which Agency staff and partners engage in learning and knowledge exchange around topics important to development work. Another of our "sister sites," BiodiversityLinks, has recently been refreshed and relaunched. Below are the BiodiversityLinks announcement and a summary of its new and exciting features.

The family of USAID links sites is accessible in the footer of the Learning Lab homepage.

_____

     

BiodiversityLinks, USAID’s newly refreshed and relaunched knowledge portal for biodiversity conservation, features key USAID tools and resources, as well as new evidence and learning.

Many long-time users likely remember BiodiversityLinks’ predecessors, the Natural Resources Management and Development (RM) Portal and the Biodiversity Conservation Gateway. BiodiversityLinks takes the best of these sites forward into a platform that fuels learning to improve biodiversity programming.

The site features a redesigned homepage and updated navigation throughout. Learning Lab users can now explore biodiversity’s interactions and cross-sector benefits to help further collaborating, learning, and adapting for better development results. You can discover learning and evidence-based resources, webinars, and collaborative learning groups focused on topics such as conservation enterprises, private sector engagement in Latin America and the Caribbean (LAC), and combating wildlife trafficking, while also taking advantage of the new Library functionality to more easily share curated resources with colleagues and partners.

Enjoy exploring the new site, and please contribute your resources, learning, and stories related to biodiversity! We will continue adding resources and adapting the site based on your feedback, so please reach out to the site managers with any submissions, thoughts, or questions so we can address them and better meet your needs.

Win-Win: Why we need to invest in gender equality in agriculture

Mar 11, 2021 by Emily Janoch and Josee Ntabahungu
COMMUNITY CONTRIBUTION

At CARE, we’ve been convinced for a long time that supporting gender equality is critical to people changing their lives and leaving poverty. From a human rights perspective, we know it’s “worth it” – even though approaches that focus on supporting women and men to change their own lives are more complicated and sometimes take longer to pay off.

That’s great when you’re already convinced, but what about the skeptics out there? People who think, “Gender equality is all well and good, but that’s a problem you only worry about when you’ve already got enough to eat.” CARE’s got exciting new research proving that working on more progressive approaches to gender equality, which incorporate investment in collaborating, learning and adapting (CLA), doesn’t just improve equality and outcomes in women’s rights. It also improves incomes, food security, and agricultural production.

In Burundi, CARE partnered with the Africa Center for Gender, Social Research and Impact Assessment, Great Lakes Inkingi Development (GLID), RBU 2000 Plus, and the University of Burundi, in partnership with the International Rice Research Institute (IRRI), on a project to test what works to improve gender equality and food security at the same time. The project was funded with $2.6 million from the Bill & Melinda Gates Foundation from 2016-2020 and reached 9,911 people directly and more than 37,000 people indirectly.

Investing in collaborating, learning, and adapting as part of the project was a key component of the success. Without those investments, the project would not have been able to achieve impact, and communities would not have made the changes they wanted to see. The project deliberately set out to provide rigorous research to test what results gender transformative work has relative to approaches that don’t focus on changing the underlying causes of gender inequality. The team also deliberately brought in a lot of partners—from the government to communities, to Burundian universities, to regional research groups—to help generate learning that will inform decisions for lots of different actors. Participating in the project and in creating the learning makes it easier to act on the results.

A final evaluation was conducted to assess the impact of the project, showing that a gender-transformative approach in the agriculture sector could be adapted and applied to other contexts, using lessons learned. Here’s what we found:

What changed?

• Working on gender equality has higher returns on investment: Techniques that focus on helping women access the support they need for gender equality and on changing discriminatory social and gender norms showed a return of $5 for every $1 invested, compared to techniques that only shared messages about equality, which returned $3 for every $1 spent.
• Gender equality grows more food: Women who got more opportunities and support to address gender inequality increased their rice production 2.7 times, compared to 2 times for those who only got agriculture training and information on gender equality. They were also 26% more likely to have enough food to eat.
• Empowered women earn more money: Women who participated in activities with more focus on equality were 94% more likely to reach equality and 3 times more likely to move to a higher income bracket.
• The Gender Parity Index (GPI) improved by 51% in gender-transformative groups and by less than 10% in the gender-light and control groups.
• Everyone eats better when women have a fair chance: Families in the groups that focused on equality were 26% more likely to have enough food and diverse diets, while women who participated in groups that didn’t focus on gender equality had less diverse diets at the end of the program than at the beginning. Families in the activities that focused on equality were the most likely to be eating enough food.
• Women feel safer: Women in the more progressive groups were 89% more likely to feel safe disagreeing with their partner at the end of the project than they were at the beginning. Both men and women were 35% less likely to support gender-based violence.
• Women are more confident they can change their lives: Women in the most progressive groups were the most likely to believe that they could act together to change their lives and create change. For example, they are the most confident that they can change the way women are treated at health centers.

How did it happen?

• Engage, don’t inform: The groups that worked with men and community leaders to address gender inequality and get them actively talking about gender norms and power imbalances were much more effective than ones that simply shared messages that gender equality matters.
• Adapt and apply proven tools in new contexts: Since 2016, CARE Burundi has implemented the EKATA approach – Empowerment through Knowledge And Transformative Action – which originated in Bangladesh. They also applied the Abatangamuco approach, which was developed in Burundi to work with men and boys toward gender equality.
• Combine skills training with group collaboration and negotiation: The EKATA approach works with women to build their skills in negotiation, leadership, conflict management, and working together for change. At the same time, it brings in men and leaders to talk with women and find ways to change the habits and norms that lead to inequality and violence.
• Test what works: The project tested whether focusing on achieving gender equality worked better than simply sharing messages about women’s rights and gender equality on top of agricultural training.
• Generate good evidence: The project used a rigorous research design to test what worked best and how much extra it cost to get to successful results. This is evidence we can use for years to understand what components lead to better returns on investment, not just in gender equality but also in incomes, food production, and nutrition.
• Be practical and think of cost: The project didn’t just test for impacts; it also looked at what it cost to achieve the extra impact. In fact, it only cost 10% more to implement the gender equality activities. Ultimately, that little extra investment pays off, as the sketch below illustrates.
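
To make the benefit-cost arithmetic concrete, here is a minimal Python sketch using the figures reported in the bullets above. The $100,000 budget is a hypothetical placeholder, and this is an illustration of the reported ratios, not CARE’s actual cost-benefit analysis.

```python
# Minimal sketch of the reported benefit-cost arithmetic (illustrative only).
# The $5 and $3 returns per $1 and the ~10% cost premium come from the post;
# the base budget is a hypothetical placeholder.

BASE_COST = 100_000          # hypothetical budget for the message-only arm
EXTRA_COST_SHARE = 0.10      # gender equality activities cost ~10% more

message_only_cost = BASE_COST
transformative_cost = BASE_COST * (1 + EXTRA_COST_SHARE)

message_only_benefit = message_only_cost * 3       # $3 back per $1 spent
transformative_benefit = transformative_cost * 5   # $5 back per $1 spent

# Incremental view: what does the extra 10% of spending buy?
extra_cost = transformative_cost - message_only_cost
extra_benefit = transformative_benefit - message_only_benefit

print(f"Message-only benefit:      ${message_only_benefit:,.0f}")
print(f"Transformative benefit:    ${transformative_benefit:,.0f}")
print(f"Incremental cost:          ${extra_cost:,.0f}")
print(f"Incremental benefit:       ${extra_benefit:,.0f}")
print(f"Return on the extra spend: {extra_benefit / extra_cost:.0f}x")
```

Under these assumptions, the extra 10% of spending ($10,000 here) yields $250,000 in additional benefit, which is why that little extra investment pays off.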

Want to learn more?

Check out the final evaluation, the Policy Brief, Cost Benefit Analysis, and the Impact Report.

Climatelinks Announces New Design and Features

Mar 2, 2021 by Learning Lab
COMMUNITY CONTRIBUTION

Learning Lab is one of a constellation of USAID sites through which we hope to share and exchange knowledge and learning with the Agency’s staff and partners. Another of our “sister sites," Climatelinks, has recently undergone a redesign. Below are the Climatelinks announcement and a breakdown of the new features.

The family of USAID “links” sites is accessible in the footer of the Learning Lab homepage.

----

Climatelinks has a new design and updated content – just in time for the United States to rejoin the Paris Climate Accord. The site has been revamped to better meet the needs of USAID, its climate and development partners, and the climate community around the world.

Originally launched in October 2015, the site also serves as a venue for information sharing and exchange within its community of users. Some key features include:

Updated Sector Pages

To help mainstream climate into other sectors, the site includes our updated Climate Risk Management Portal and dedicated Monitoring & Evaluation pages. Each page includes recent resources, blogs, and events.

New Country Pages

We have updated all USAID country pages with a more user-friendly data dashboard, including climate change indicators, such as annual change in greenhouse gas emissions, percent forest area, deforestation rates, and GAIN vulnerability index ratings, as well as USAID funding and policy indicators, such as country plans and commitments.

Resource Library

Climatelinks curates and archives technical guidance and knowledge related to USAID’s climate programming. The site houses about 3,000 resources, including more than 600 blogs. The library includes new policy documents, webcasts, training courses, and more.

Climate Risk Management Portal

This page includes our most popular resources – the Climate Risk Profiles and Greenhouse Gas Emissions Fact Sheets. There are also Climate Risk Management training resources that anyone can take from home.

Photo Gallery

The redesigned photo gallery now hosts more than 300 photos from climate and development programs around the world. Stunning photos come from two recent Climatelinks photo contests, as well as blog contributors. These images tell the story of nature-based solutions to climate change by geography and sector.

We invite you to explore the site to discover its new features. Please contact the Climatelinks team with your feedback.

Are Agency-Level Global Indicators Enlightening or Constricting?

Feb 24, 2021 by Alysson Akiko Oakley
COMMUNITY CONTRIBUTION

There is an endless tussle between the divergent needs of data users in the international development sector. Program stakeholders need tailored measurement systems that provide concrete evidence for making adaptive management decisions and assessing progress against the program’s criteria for success. These criteria are often highly contextualized to ensure the program is implemented in a manner that contributes to sustainability and ongoing relevance for communities.

On the other hand, the larger agency within which the program sits – be it a funder, an implementing organization, or a larger consortium of such organizations – often needs data that are aggregable across multiple geographies, demographics, program goals, and even technical sectors. Such data respond to a very different set of criteria, one that combines common denominators with top-down goals to produce a sense of an agency’s reach, impact, and effectiveness.

Snapshot of Pact’s publicly accessible Global Indicator dashboard

This past January marked the tenth year that Pact has reported such a suite of global indicators, measuring more than 10 million data points, each of which in turn required its own set of criteria to qualify. For example, a single datum from our CSO performance improvement indicator requires a scorecard measurement across four domains, eight subdomains, and multiple indicators per subdomain, measured at two moments in time. These data are aggregated across all of Pact’s projects in seven sectors – health, governance, mining, natural resources, and more – and are publicly available on our data dashboard. Like other organizations, Pact uses customized indicators to track and assess our global reach and the results of our work. These indicators also convey in a transparent manner what we care about and how we hold ourselves accountable. Standard foreign assistance indicators (“F-indicators”) used by various U.S. federal agencies serve a similar function.
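
To make that concrete, here is a hypothetical Python sketch of the kind of record and aggregation such an indicator implies. The field names and the qualification rule are illustrative assumptions, not Pact’s actual schema.

```python
# Hypothetical sketch of a single global-indicator datum and its aggregation.
# Field names and the qualification rule are illustrative, not Pact's schema.
from dataclasses import dataclass

@dataclass
class ScorecardMeasurement:
    project: str        # any project, regardless of sector or geography
    cso: str            # the civil society organization being scored
    domain: str         # one of four domains
    subdomain: str      # one of eight subdomains
    indicator: str      # multiple indicators per subdomain
    baseline: float     # score at the first moment in time
    followup: float     # score at the second moment in time

def qualifies(m: ScorecardMeasurement) -> bool:
    """Illustrative rule: a datum counts only if the score improved."""
    return m.followup > m.baseline

def csos_with_improvement(measurements: list[ScorecardMeasurement]) -> int:
    """Agency-level aggregation: distinct CSOs with any qualifying datum."""
    return len({m.cso for m in measurements if qualifies(m)})

# Aggregating across projects is what makes the number reportable
# agency-wide -- and also what strips away the context that made each
# datum meaningful to its own program team.
data = [
    ScorecardMeasurement("health-project", "CSO-A", "governance",
                         "planning", "has_strategic_plan", 2.0, 3.5),
    ScorecardMeasurement("mining-project", "CSO-B", "operations",
                         "finance", "budget_tracking", 3.0, 3.0),
]
print(csos_with_improvement(data))  # -> 1
```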

There are opportunity costs to these data. In a previous post, I highlighted the practical journey that a datum (a single piece of data) takes from conceptualization to collection, verification, submission, and finally utilization, to emphasize both its value and its cost. There are also limitations to these data. By design, such data lose context and are therefore less useful to the specific program stakeholders who are charged with collecting them in the first place. Looking at historical trends is also less helpful, as the data tend to mirror the project lifecycle or the priorities of external funders. Similarly, such data are at times difficult to use for strategic decisions if an organization is accountable to multiple funders that in turn have their own priorities and measurement systems. And, of course, as meaningful outcomes are system-specific, it can be limiting to use universal criteria that may not be relevant to the context at hand.

Snapshot of this year’s global indicator report

For these reasons, most organizations do not have global indicators. Those that do have tackled the challenge in diverse ways. Last year, Pact led a panel discussion at the American Evaluation Association conference, together with FHI 360, IREX, and Save the Children, to discuss the various ways we have developed and used global indicator systems. An important lesson was how each organization built a system that met its own unique strategic needs; each system differed significantly from the others.

Pact’s system was built to measure our global results and to articulate what we care most about. Just as any project indicator operationalizes a project’s goal in concrete terms, Pact’s agency-level indicators articulate what success is to the organization. What does it mean to advance global health outcomes if you cannot articulate what a successful global outcome would be? Knowing this tells you not only what an organization focuses on, but where it has grounded its technical expertise. For example, Pact’s approach includes outcome-level indicators that measure changes in systems, not just basic reach.

In addition, while strategic decision-making can be limited if the indicators are subject to funding trends and the project lifecycle, by determining what indicators to use and not use, an organization is publicizing what it stands for and works toward. Lastly, by sharing the data publicly, as Pact has done with its data dashboard, an organization can contribute to data transparency.

As an organization’s goals evolve over time, so too should the system that measures progress towards those goals. With 10 years of experience, Pact is now embarking on an indicator revision process to expand and refine our suite, focusing more on the outcomes we want to achieve over the next decade, in addition to measuring our adherence to the principles that guide us as an organization. We are working to do so in a highly participatory way, to ensure individual program stakeholders can use the data, and to align the indicators with an agency-level strategic review so that the data contribute to strategic decision-making.

If there is one lesson Pact has learned, it is that making data actionable requires strategic intention to avoid the limitations and opportunity costs that can otherwise constrain data utility. For Pact, agency-level global indicators enable us to be intentional about what we hope to achieve and the manner in which we achieve it.

What has been your experience in designing and implementing an agency-level measurement system? How have you made it actionable for various levels of organizational needs?

     


Remote, Innovative, and Meaningful: Facilitating Stakeholder Engagement During COVID-19

Jan 25, 2021 by Kamweti Mutu and Katherine Connolly
COMMUNITY CONTRIBUTION

Ms. Annette Kenganzi and Ms. Justine Namara at an Inception Workshop in Uganda. Photo by Nathan Chesterman for Environmental Incentives.

Read the full, original blog post here.

Bringing together diverse stakeholders is never simple, and doing so during a global public health crisis presents additional challenges. However, the USAID-funded Economics of Natural Capital in East Africa project (Natural Capital) has been identifying creative and adaptive approaches to solve such complex problems.

The project is tasked with documenting and elevating the current and perceived value of natural capital to strengthen management of four conservation landscapes in East Africa. Natural capital includes the resources and services that nature provides, from forests and fish to carbon storage. Project staff are working with partners to assess the value of these landscapes, which span international boundaries across six countries, and to help stakeholders at the community, national, and regional level use the project’s findings to inform management and policy decisions.

STAKEHOLDER ENGAGEMENT—AT A DISTANCE

Since late August 2020, the Natural Capital team has used innovative approaches to host and facilitate five virtual stakeholder engagement sessions, reaching more than 200 participants from Burundi, Kenya, Rwanda, South Sudan, Tanzania, and Uganda. With the goal of informing the project’s communications plan, the sessions were designed to better understand:

1. Stakeholder perceptions of ecosystem services in each landscape and threats to those services;

2. Important policymakers and implementers to engage in landscape protection;

3. Best practices for communicating with various stakeholders.

Project staff and the East African Community (EAC) secretariat initially developed a meeting strategy to ensure participants could engage actively in these virtual environments, relying as well on commitment from project coordinators across all six countries. The strategy included: procuring internet bundles for participants in remote areas; employing World Café-style breakouts; providing translation services; and encouraging participants to treat the process as their own and share contextual knowledge.

USING FEEDBACK TO INFORM COMMUNICATION

Altogether, key takeaways from the sessions, summarized in workshop reports, included:

1. Stakeholders are looking for inclusive management coordination with authorities;

2. Messaging must be targeted and include translations that resonate with the various actors;

3. Local communities’ livelihoods must be considered, including providing support for income-generating activities and human well-being goals while facilitating stewardship of natural capital.

Furthermore, the five workshop reports inform a communications plan that will guide the project’s efforts to effectively connect the value of natural capital and the findings of conservation research with key audiences. The plan includes local and regional dissemination tactics and products for non-governmental organizations, regional governments, community-based organizations, and private sector actors.

LOOKING FORWARD

As communities continue to cope with the limitations brought on by the COVID-19 pandemic, it is clear that virtual facilitation is not a stop-gap solution but, rather, an effective tool to manage long-term engagement. These stakeholder engagement sessions highlight the value of nimble planning, finding commonalities across participants, and making the most of tools to leverage remote engagement.
