Women Workers Respond

Jan 12, 2021 by Diana Wu
COMMUNITY CONTRIBUTION

This story is part of CARE's Women Respond series, an initiative to learn from and amplify the direct perspectives, experiences, and priorities shared by poor and working-class women facing COVID-19 across the geographies where CARE works.

Migrant workers – those leaving their homes to find income through a range of work such as making clothes for the global garment industry and tending others’ homes and families – have faced particular threats through the COVID-19 pandemic. These threats have impacts that ripple from their own lives to their home communities. To inform response and advocacy work, CARE’s Women Respond initiative examined how the COVID-19 pandemic impacted women workers’ lives.

Since April 2020, CARE has surveyed over 2,400 workers across Bangladesh, Cambodia, Ecuador, Laos, Myanmar, and Vietnam to learn about the pandemic's impact on them. Many of these workers typically leave rural areas to take jobs in cities as garment factory workers, domestic workers, or informal laborers. Some had crossed national borders to find jobs and secure income to send back home, putting them at greater risk as migrants: their jobs fall disproportionately in the informal economy, and they may face deeper isolation, fewer social support networks, limited access to government resources and services – and at times criminalization – due to their legal status.

CARE's surveys found that the pandemic's largest impacts on migrant workers' lives have been on cash, food security, mental health, and physical health and care.

Digging deeper, stories from women highlight how the responsibility to meet household needs around education, food, and health falls disproportionately on their time and labor. In Ecuador and Vietnam, CARE staff and partners have also found an escalated risk of violence – often at the hands of intimate partners and within the workplace. In Vietnam, reported rates of domestic violence have nearly doubled since COVID-19. While gendered violence does not rank highly in survey responses, it is broadly recognized that experiences of intimate partner violence are severely underreported.

Looking closer at the responses from each country, it is also important to note that the toll of COVID-19 varied vastly across the countries polled in terms of cases and deaths. As of December 8, 2020, Cambodia and Laos reported zero deaths, while Vietnam, with a population of nearly 100 million, reported 35. In contrast, the toll fell far more heavily on Ecuador, Myanmar, and Bangladesh, where thousands – and in the case of Ecuador tens of thousands – of lives had been lost to COVID-19.

With stark differences in these realities, there were some variations in responses country by country. In Myanmar, 53% of factory workers surveyed named materials for protection from COVID-19 as their greatest priority. At the same time, however, economic security remained a critical stressor across all countries. For those hit hardest by COVID-19, the threats of – and lockdowns in response to – COVID-19 pushed workers into further precarity, with workplace shutdowns, reduced hours, and layoffs. As one garment factory worker from Myanmar described, “During COVID, our family income drastically reduced as my brothers lost their jobs. I became the main earner to support my family and I needed to send money back to my family every month. I cannot go back home. My salary is less than before, and I also have family back home depending on my salary. … Because of COVID-19 cases increasing in Myanmar, the government announced the closure of factories to control people gathering. When the factories reopened, there is no overtime requirement and we get smaller wages.”

For domestic workers, lockdowns have meant further confinement in the home, according to CARE staff and unions supporting domestic workers in Ecuador, deepening social isolation and the risk of abuse as workers lock down in their employers' homes. Some have reported being forced to care for people with COVID-19 without any protection, jeopardizing their lives and those of their families. In Ecuador, 89% of domestic workers do not have social security, and 19% work more than 40 hours a week for just $186.89.

Unstable work and access to cash have had clear impacts on education, health, food access, stress and anxiety, and conflicts within the home, according to women workers across Bangladesh, Cambodia, Ecuador, Laos, Vietnam, and Myanmar. Through these times, women across all sites reported that the remittances they send to loved ones had declined, and 59% of women shared plans to migrate back to their hometowns. Since COVID-19 restrictions were introduced across the Mekong region, 100,000 Laotian migrant workers have already returned to Laos. Some migrant workers across Laos and Myanmar, however, noted that with less access to income, they were unable to return home even if they wanted to. In many countries in the Mekong region, people who returned to rural areas reported facing stigmatization and suspicion, as communities worried that migrants brought COVID-19 with them.

To meet needs through these times, these women workers have taken a number of measures. According to the survey, 66% mentioned social distancing and adopting strict hygiene practices to stay safe, with reports from Bangladesh noting that uncertain jobs have meant uncertain access to health insurance or care. To meet financial needs, 87% reported diversifying income through small businesses and 59% planned to migrate elsewhere. Workers across countries also reported taking on debt and selling assets and belongings to meet basic needs. Workers surveyed across Bangladesh (97%), Vietnam (90%), and Cambodia (84%) reported changing their food consumption patterns and eating less, and workers in Laos and Myanmar named food as a priority need. In Ecuador, colleagues reported that some poor migrants have entered sex work, often in ways that expose them to greater risks of exploitation, sexually transmitted infections, and unwanted pregnancies, in addition to COVID-19 risk.

These stories highlight the interconnectedness of lives and the domino effect of struggles throughout the pandemic. Beyond seeing COVID-19’s impact on work in both formal and informal workplaces, women’s stories highlight how losses in work impact their homes and those they support. At the same time, cuts in access to income and healthcare can spiral into crises in food, housing and physical as well as mental health. These stories also show how the impacts of COVID-19 fall most heavily on women workers as they face not only uncertain incomes to meet their own needs, but increased work at home, deeper crisis for loved ones who have relied on their income and increased gender-based violence. These times call for robust responses that support poor and working-class people to meet needs in ways that don’t drive them into further precarity but position them to move forward – individually and collectively.


Let's Get Meta: Learning how to Learn from Evaluations

Dec 22, 2020 by Carla Trippe
COMMUNITY CONTRIBUTION

Insights from a Meta-Evaluation of USAID/Liberia's Portfolio from 2016-2020

As Social Impact’s (SI) five-year Liberia Strategic Analysis (LSA) contract facilitating data-driven decision making with USAID/Liberia comes to an end, we took a step back to synthesize 31 evaluations and assessments conducted between 2016 and 2020 into one meta-evaluation. The result? Two paths of reflection: 

Why do a meta-evaluation?

The meta-evaluation captures rich lessons around design and implementation, spanning seven sectors and examining over $464 million in USAID investments. It identified systemic implementation challenges, emergent best practices, and how recommendations have been applied toward adaptive management. The reflections are not limited to implementation approaches; they also cover the kinds of award mechanisms that enable the desired change. What unfolds is a story about collective contribution toward strategic objectives and conditions for self-reliance. This institutional knowledge is crucial for decision makers, especially USAID activity managers.

But we want to focus on the evolution of our learning approach.

We believe learning is not reading a report.

What does learning look like? We believe learning is not simply reading a report. That’s why SI put a lot of thought over the years into how evaluations can give USAID staff the information they need – packaged in the right way at the right time – for decision making.

The first critical lesson was learning how to emphasize utilization of the evidence. From design to data collection to analysis, each step in an evaluation serves as an opportunity to engage your implementer, partners, and government counterparts in shaping the use of the information being produced. The “How Do We Learn?” chapter provides tips on what engagement looks like at each step along the way. The bottom line is that ongoing engagement ensures that the learning process is inclusive, contextually relevant, and fosters long-term self-reliance by helping local stakeholders think through how they can use the data.

The second critical lesson was how to facilitate ownership of the evidence and collectively act to realize development objectives. We created more interactive analysis sessions earlier in the evaluation process to involve USAID staff and stakeholders in the sense-making of the evidence. SI and USAID/Liberia also tested more collaborative workshop models to bring implementers, partners, and beneficiaries together to “evaluate” progress, including joint problem-solving and developing ways forward. We recognized that USAID and implementers do not work in a vacuum, and we need stakeholders to realize the development objectives. This is why learning events are so important and have been successful tools to improve how USAID collaborates with implementers and host country governments toward adaptive management.

To share these learnings, the meta-evaluation includes “A Roadmap for Learning after Evaluations” – from dissemination to hosting a learning event to a post-activity review – and a note on “How To: Selecting a Learning Tool” for your development challenges, complete with a full toolkit on when to choose an evaluation, activity review, assessment, or other learning tools considering your time, cost, and other contextual parameters. 

Ultimately, our evaluations should leverage local voices and lived experiences to promote the change they envision for their communities and for their country. The meta-evaluation revealed that the way we support learning should honor the agency of all actors in the complex environment in which we implement. Evidence can inform strategy and programming adaptation, but learning collaboratively can also strengthen partnerships, change behavior, and engender meaningful stewardship of development outcomes.

Fail Again. Fail Better

Nov 17, 2020 by Emily Janoch
COMMUNITY CONTRIBUTION

“Ever failed. No matter. Try again. Fail again. Fail better.” --Samuel Beckett

Here’s my favorite part of that quote: the ultimate goal is not a lack of failure; it’s better failures. That’s good news for CARE, because we just published round two of our Learning From Failure initiative, and…I know this will surprise everyone…we haven’t stopped failures yet. We do have some hopeful signs that we’re failing better; or at least, that we’re improving on some concrete weaknesses we identified in the first round.

It’s an interesting process to launch the second phase of learning from failure. In the first round, we didn’t know what we were going to find. We spent as much time talking about how it was the first-ever report of its kind as we did about the actual failures. Our case study admitted, “It's still very early to see specific development impacts.”

Round two isn’t quite the same. It’s not new anymore, so there’s less excitement at having invented something. We’re not discovering data and themes for the first time. In a lot of ways, the stakes are higher. Round two of learning from failure becomes an exercise in continuous performance improvement, rather than a journey of discovery. If we don’t see improvements, we don’t have the excuse that it’s too early to tell.

It also takes a sustained commitment. Launching an exploratory exercise at a small scale is easy, especially when no one quite knows what the answers will be. Pulling together a few pieces of content over a few months is pretty straightforward. It takes some staying power—and real support from leadership—to keep up the work over time, especially in the middle of a pandemic. That’s even more true once we’ve seen one round of results and had a chance to understand the work that it takes to improve.

So what does round 2 of learning from failure look like? Our podcast is still going strong, with a whole series over the summer on the importance of equity in local partnerships and several podcasts on what we’ve learned during COVID-19 and what we would change if we could do it all over again. We also published the second meta-analysis, looking at updated results. There’s a podcast on what we learned from the meta-analysis process here.

What have we learned about failing better? Here are some of our top lessons:

  • Failure can lead to progress: Two of the big areas that came up in our first round of learning from failure were Monitoring, Evaluation, Accountability, and Learning (MEAL) and gender. Those lessons guided some key investments around improving our MEAL and gender work, and those are the areas where we saw the biggest progress this year. Those investments included building a series of mandatory internal training courses on MEAL, and building new guidelines for improving gender transformative programming and MEAL. So focused investments can make a difference.
  • We need to focus more on adaptation: A lot of our failures in this round were around understanding context—especially how contexts changed, and what realities were on the ground as input supplies, markets, and local capacity shifted. The data from this round implies that focusing just on design won’t be enough to address the context problem. We also need to build in more adaptability. Another core theme was that we need to get more proactive about adaptation—not just waiting for contexts to shift, but always looking to see how we can get more streamlined, efficient, and higher-quality throughout project implementation.
  • Change (and our ability to measure it) is still slow: While we are proud of being one of the few multi-mandate international NGOs able to report its global impact in light of the Sustainable Development Goals, we haven’t cracked the nut of getting to rapid-cycle learning and near real-time evidence about what’s not working, especially at a global level. We’re still looking mostly at lagging indicators, even in our learning from failures highlighted in evaluations. This year, a lot of our focus is on how to get faster at this analysis. That includes building out more work around adaptive management, faster data systems to look at key indicators, and more proactive testing of specific ideas using agile methods.
  • We need to develop action plans for specific actors: While we did present tailored 2019 failure analysis to many project- and region-specific teams, the recommendations and action plans remained at the global level. With the 2020 analysis, we need not only to present tailored findings to each relevant team, but also to customize action plans and recommendations so that each team can take forward the actions most relevant to them.

 

  • We’re not alone: We’ve had some wonderful experiences sharing this journey with others, both international development practitioners and those beyond the sector. We’re always learning from new places. The most recent publication from the Food Security Network on Learning from USAID Food Security Development Mid-Term Evaluations echoed many key trends we saw in our analysis. The importance of streamlining activities, focusing on high-quality implementation, and focusing more on sustainability plans all showed up in CARE’s analysis as well.

Want to join us on this journey? We’d love to hear more about what you are working on to learn more from failure. E-mail me ([email protected]) to share your experiences.

That was then, this is now.

Nov 4, 2020 by Multi-Donor Learning Partnership for Development Impact
COMMUNITY CONTRIBUTION

A collective look-back at the first two years of the Multi-Donor Learning Partnership for Development Impact


This blog was cross-posted from the Multi-Donor Learning Partnership (MDLP) website. 

A little more than two years ago, a small group of knowledge management (KM) and organizational learning (OL) leaders and practitioners from some of the world’s most influential sponsors of international development met in a generously windowed conference room in Stockholm.

The attendees at that first gathering represented leading KM and OL minds from some of the most impactful organizations in international development, including USAID, UNICEF, DFID (now the UK’s Foreign, Commonwealth & Development Office), Sida, the World Bank, and the Inter-American Development Bank (the group subsequently added GIZ, IFAD, and the Wellcome Trust).

The meeting in late September 2018, hosted by Sida, the Swedish International Development Cooperation Agency, was the inaugural gathering of what became known as the “Multi-Donor Learning Partnership”.

What made this group different was its voluntary nature and its shared commitment to form an intentional community of like-minded practitioners: a peer-support network to collectively strengthen – and demonstrate the value of – intentional KM and OL efforts in improving development results. The members realized they were all working on similar challenges and that they could do better by learning from and supporting one another.

Recognizing that this group was an experiment with no guarantee of “success”, the members agreed to explore the partnership for an initial 24-month term, at which point they would conduct an intentional “Pause and Reflect” moment to assess whether it had provided value and decide whether to continue beyond September 2020.

Two Years On

Two years on, the group (now nine members) fulfilled that commitment with a pause-and-reflect moment on the partnership’s progress.


On September 29 and October 2, 2020, the Multi-Donor Learning Partnership members met virtually for the second semi-annual meeting of a year dominated by a global pandemic. A cornerstone of that gathering was a retrospective look at the group’s two years together, the culmination of a two-step process facilitated entirely virtually.

Step 1: Asynchronous Reflection

For a two-week period, MDLP members had access to a virtual space where they could take time to review intentions, progress, and results, and add comments, reflections, and suggestions related to three overarching questions:

1. What did they hope would happen when they joined the partnership?

2. What actually happened over the course of the two years?

3. What should they do differently if they decided to continue?

Step 2: Collaborative Reflection 

During the final two-day meeting of this phase, MDLP members were guided through a facilitated process by Piers Bocock (Acute Incite) and Chris Collison (Knowledgeable Ltd), during which the results of the individual reflections were synthesized and discussed.


Recognizing that many informal inter-organizational groups like this rarely produce value beyond networking opportunities, the partnership was able to identify positive outputs and outcomes, while also identifying areas for improvement should the group agree to continue its collaboration. Specific results identified as part of the reflection included:

• The creation of a learning culture, despite the group’s mostly virtual nature. One of the widely acknowledged primary outcomes was the creation of a supportive, trusting, and open culture of learning across the community members.

• Facilitation of 22 learning events, including facilitated cross-partnership meetings, webinars, peer assists, and topic-specific conversations. In addition to six partnership convenings over the two years, the MDLP also hosted and facilitated seven webinars, seven peer assists, and two virtual conversations about COVID-19.

• A plan for a co-created compendium of practical KM/OL cases from MDLP members that highlight different but complementary elements of effective KM/OL for development efforts.

Following this in-depth reflection, partnership members unanimously agreed that yes, the MDLP had indeed helped its members in their work, and that there was real potential to ramp up more collective activities moving forward. The first demonstrable result of the next iteration of the MDLP is likely to be the publication of a book drawing out KM/OL insights, examples, guidance, and future perspectives on effective KM and OL for international development organizations, expected by the summer of 2021.

CLA in the Time of COVID-19: Adapting, Pivoting and Partnering to Maintain Nutrition Progress

Oct 23, 2020 by Abigail Conrad
COMMUNITY CONTRIBUTION

USAID Advancing Nutrition is the agency’s flagship multi-sectoral nutrition project. Led by JSI Research & Training Institute, Inc. (JSI) and a diverse group of experienced partners, we strengthen the enabling environment and support country-led scaling of effective, integrated, and sustainable multi-sectoral nutrition programs, interventions, and food and health systems.

A Collaborating, Learning, and Adapting Moment

Like so many other U.S. Agency for International Development (USAID) projects and activities, USAID Advancing Nutrition was caught off guard by COVID-19, which forced us to quickly reassess and adapt our programming as the pandemic spread across the globe. This unprecedented challenge propelled us into new and uncertain territory and reminds us daily how shocks require development projects to adapt quickly and skillfully in the face of uncertainty and instability. USAID’s framework for Collaborating, Learning, and Adapting (CLA) supports development partners in the face of unanticipated challenges and is guiding our current efforts to pause, reflect, learn, and adapt based on the best real-time evidence available. As Project Director Heather Danton noted, COVID-19 has put us all in a “CLA moment.”

Adapting in Real Time

We originally developed a CLA Plan in the first year of the project, and it guides our efforts to support adaptive management of our operational and technical work. In project years one and two, we developed staff awareness about CLA by engaging staff across teams to develop the CLA Plan, hosting project-wide events to build awareness of CLA, holding small team meetings to discuss CLA and what it means in the context of our project, and implementing a CLA tracker to monitor use of CLA across diverse activities.

CLA in Practice

This strong foundation helped us quickly adapt to COVID-19 and make both project-wide and country- and activity-specific adjustments. We formed an internal COVID-19 Task Force with broad staff representation to assess needs and provide guidance to support project decision-making. The Task Force developed guidance and a tool to help teams reflect on how COVID-19 could affect project plans and what adaptations might be necessary. These findings informed discussions about USAID Advancing Nutrition activities and global nutrition work, generally, with USAID.

Understanding that teams across the project would face similar challenges, we also created several working groups to monitor and disseminate emerging information, synthesize best practices for virtual engagement, develop guidance for remote data collection, and help teams develop new skillsets. These working groups have supported adaptations across the project and created a shared repository of information to prevent teams from duplicating work.

We also initiated several specific COVID-19 awareness and risk mitigation activities. USAID and our Social and Behavior Change and Knowledge Management teams collaborated with UNICEF and members of the Infant Feeding in Emergencies Technical Working Group to adapt, for COVID-19, infant and young child feeding (IYCF) counseling cards that use images from the IYCF Image Bank. This group also created an IYCF counseling package for health service and nutrition workers to use when counseling mothers and families in cases of suspected or confirmed COVID-19. Our Knowledge Management team is curating key resources on COVID-19 and nutrition on our website and has prepared briefs for nutrition social and behavior change programs about how to adapt to COVID-19 and handle misinformation.

Our country programs have tailored specific adaptations to their country context and specific activities. In the Kyrgyz Republic, our Chief of Party, Nazgul Abazbekova, has spearheaded the move from in-person activities to remote implementation, including virtually recruiting and training community mobilizers, sharing videos on social media, and conducting household baseline surveys by phone. This team also recruited and trained 900 social mobilization and nutrition activists via WhatsApp, and will soon start to provide virtual, rather than in-person, home visits to “1,000-Day” households (those with a pregnant woman or child under 2 years of age).

The Road Ahead

As we ended FY20, we came to terms with the fact that COVID-19 is going to continue to affect our work for the foreseeable future. As we begin FY21, we are identifying ways to assess risk and implement precautions across our activities and country programs more consistently and in a way that allows us to implement and support critical nutrition services while limiting risk for our staff, service providers, stakeholders, and community members with whom we work.

This blog was produced for the U.S. Agency for International Development. It was prepared under the terms of contract 7200AA18C00070 awarded to JSI Research & Training Institute, Inc. The contents are the responsibility of JSI and do not necessarily reflect the views of USAID or the U.S. Government.

Filed Under: CLA in Action

Locally Led Development: Engaging Local Stakeholders in Building An Evidence Base

Sep 29, 2020 by USAID Office of Local Sustainability
COMMUNITY CONTRIBUTION

Authors: The Office of Local Sustainability Evidence and Learning Team (Danielle Pearl, Amanda Satterwhite, Colleen Brady, Elliot Signorelli, Cydney Gumann, Paul Vincelli, and KC Das)

For two months in the Spring of 2020, the Office of Local Sustainability's Evidence and Learning Team invited staff from across USAID to join us in exploring how the Agency approaches its generation and use of evidence from the perspective of locally led development. Our seven-part Standards of Evidence for Locally Led Development series brought together eight expert presenters and more than 670 participants to engage in conversations ranging from scientific research to complexity-aware monitoring to ethical considerations when conducting research. 

This blog post outlines some of our key learnings from the series and introduces the topics covered by each presenter. We encourage you to explore the event resources and share your own takeaways in the comment section below.

WHY A SERIES ON STANDARDS OF EVIDENCE?

In 2019, under the Office of Local Sustainability’s Broad Agency Announcement for Locally Led Development Innovation, we issued a global call for potential partners to co-create with us on a range of topics relating to advancing knowledge and practice of locally led development. Our five co-creation workshops with 81 organizations and their partners from around the world placed special emphasis on generating credible, rigorous research and evidence; however, our review of the eventual submissions revealed important opportunities to strengthen future calls for research and development proposals. 

As the team turned to next steps, we asked ourselves: how can we ensure that we are building a base of evidence and practical knowledge in a way that meets local development priorities, enhances local capacity, and engages local actors as end-users of that evidence? These were not questions with easy answers, but we knew we were not alone in seeking to answer them. And so the idea for the Standards of Evidence webinar series was born!

WHAT DID WE LEARN?

All of the presentations shared a common focus on enhancing both the effectiveness and the local leadership of development activities. The three questions at the heart of our efforts to make development more locally led – whose priorities are driving the agenda? whose capacities are engaged in bringing about desired change? whose resources are enabling change to happen? – can be asked about every facet of USAID’s work.

These efforts recognize that when local people and organizations are empowered to lead in making decisions about their own development, their capacity and commitment are enhanced and outcomes are more likely to be sustained. Striving for greater local leadership in the realm of research and evidence generation raises its own challenges: questioning the power dynamics of expertise, valuing local knowledge and feedback, ensuring accountability to local people for the use of their data, making accessible the decision-making processes in which data and evidence are used, and so much more.

Three key themes emerged from the series: 

  1. Practitioners should draw on a wide range of evidence to inform locally led development programming.
    Our first presenter, Dr. P. V. Sundareshwar from the Global Development Lab’s Center for Development Research, introduced USAID’s Scientific Research Policy and highlighted the important role scientific research plays in building a reliable evidence base. During the presentation, he said: “Generally, the perception of research at USAID is that it’s pretty esoteric and it sits outside of what we want to do in terms of programming. But it might surprise you that many operating units at USAID actually invest in research because it helps them achieve their objectives. Evidence can come from multiple different places, it could be experiential evidence, it can come from peer learning, or research … with research evidence being particularly critical for designing impactful solutions.”

    Another presenter, Dr. Paul Vincelli of the University of Kentucky Department of Plant Pathology, one of two Jefferson Science Fellows with the Office of Local Sustainability this past year, discussed the diverse types of evidence that could be applied to development programming. This included Randomized Control Trials (RCTs), meta-analyses, systematic reviews, and other analytical tools used by natural scientists or social scientists. His recommendation to the audience was that: “…diverse approaches to gathering and assessing evidence may all have value in different contexts, in different disciplines, for different questions.”

    Dr. Jennifer Kuzara, a Senior MEL Specialist with USAID’s Expanding Monitoring and Evaluation Capacity (MECap) contract, introduced three theory-based approaches that can help to build evidence in complex environments where assigning attribution between an intervention and a result may be difficult. According to Jennifer, these approaches, which include contribution analysis, process tracing, and realist evaluation, can help answer questions, like: “…under what conditions and for whom will these linkages hold true? And under what conditions and for whom might they not hold true?”

  2. Local actors can provide important contextual knowledge throughout the life of your research programs.
    Tjip Walker from the Bureau for Policy, Planning and Learning (PPL/P) emphasized how complicated and complex our development programming is, which requires development practitioners “to develop a locally grounded and locally relevant theory of change that is reflective of that particular situation.” Because of this, evidence generated in one context, commonly referred to as “best practices,” might not easily translate into a different context. Tjip emphasized the importance of testing assumptions, monitoring changes, and adapting throughout the design and implementation of activities, but doing so in a way that reflects the voices of local actors.

    Tania Alfonso from PPL/LER encouraged the audience to always ask: “For whose hypothesis, and for whom does this intervention, this theory of change, work?” She went on to provide examples of how bringing in local voices may be able to identify if interventions are having uneven success across target beneficiaries.

    And Dr. KC Das, another Jefferson Science Fellow in the Office of Local Sustainability, spoke about the importance of bringing local actors in early and often. He said in his presentation: “…the more local people...are involved in the design, the higher degree of the context detail appears...” 

  3. Prioritize local actors as the end users for your evidence to help them build their own self-reliance and increase sustainability of your program’s outcomes.
    Dr. Derek Simon shared how USAID/Cambodia’s Country Development Cooperation Strategies (CDCS) development process established feedback loops as part of their stakeholder engagement by ensuring notes from their public forums were shared back with participants. According to Derek, this would allow them to “…use their own notes and their own work for their own advocacy and research purposes.”

    And Dr. Joe Amick from the Global Development Lab’s Office of Evaluation and Impact Assessment (Lab/EIA) shared how his office’s partnership with a local research institute in Uganda helped to enhance the knowledge of local actors. Joe said: “…at the end of the day, at least half of the research team is staying in the local country. And they’re going to be able to keep this information for themselves and put it into their next project or their next activity.”

WHERE DO WE GO FROM HERE?

For the Office of Local Sustainability, the Standards of Evidence series reinforced our conviction that both credible evidence and local leadership are critical to achieving lasting results and advancing self-reliance. Reflecting on how USAID creates and uses evidence from this perspective compels us to consider new areas in which we need to expand and deepen the evidence base supporting our practice of locally led development.

Under the auspices of the Local Works Program, we will continue to gather experience and evidence to expand the evidence base for locally led approaches to development, and to advance broader discussions in international development around aid effectiveness, local ownership, and how to achieve lasting change that is sustained by local people, using local resources and capacity, beyond the end of aid.

We will support the creation and utilization of new knowledge to enhance the effectiveness and the local leadership of development. In the near future, we will do so by:

  • Advancing the development community’s knowledge of locally led development through a blend of research, utilization of performance reporting, evaluations (including ex post), and other methods;
  • Co-developing relevant practices and tools for broader uptake in development programs;
  • Developing new and more effective ways to facilitate evidence use by USAID, our partners, and the communities we serve. 

We hope you will stay tuned for these future efforts, and welcome your feedback at: [email protected] 

So, you want to start a podcast?

Sep 8, 2020 by Amy Leo
COMMUNITY CONTRIBUTION

Amy Leo was previously the human behind USAID Learning Lab. Now, she’s a Senior Adaptive Management Specialist with Acute Incite’s International Development Practice. This blog post was originally published on the Incites blog on August 26, 2020.


In the USAID Learning Lab podcast studio. Credit: Amy Leo.

If you work in communications, chances are you’ve been in at least one work planning meeting in which someone suggests: “let’s start a podcast!” It’s not a bad idea; the number of podcast listeners in the U.S. is only rising. And the quick shift to virtual work as a result of the COVID-19 pandemic has also led to increased interest in this medium. But producing a podcast is a time-consuming endeavor, and many wonder if it’s worth it.

Since I began producing the USAID Learning Lab podcast in early 2017, many colleagues in the international development sector have reached out for advice on starting their own. So, I thought I’d share some questions that I think potential podcast producers should consider before diving in, as well as snapshots of my journey getting the USAID Learning Lab podcast off the ground. 

Will people listen? 

The USAID Learning Lab podcast was an experiment in informal learning. Would USAID staff and partners take the time to listen to something work-related? 

A common question I've heard from prospective podcast producers is "What is a realistic expectation for listenership?" If the target audience for your podcast is, for example, USAID staff and implementing partners interested in a specific topic, recognize that that is already a relatively small group of people. Then, consider what percent of this population might be interested in listening to a podcast about a work-related topic.  

Looking at the numbers, the USAID Learning Lab podcast episodes have a wide range of listens: the first and most listened-to episode has over 2,800 while the least-listened-to episode has around 750. Unfortunately, it isn’t possible to know how many listeners those listens represent, or how long episodes were played. So, keep in mind that listen counts are just one part of the picture. Another is anecdotal evidence from listeners that the message resonates with them and the behavior change that may result from what they learn. If your pool of potential listeners is small but the potential impact is high – go for it. 

Should you use a podcast to tell your story? 

You may not have something new to say, but rather something that you want to say in a new way. The most engaging podcasts have a story to tell or a point to make, and do so in unexpected or creative ways. 

Think about the podcasts you listen to. What makes you want to listen from beginning to end and go back for more? Podcasts give people the freedom to tell stories in their own voice. You can hear their enthusiasm, thoughtful pauses, stumbles, and hesitation. All of this lends its own authenticity to the medium. My favorite interview for the USAID Learning Lab podcast (and I’m not just saying this to be deferential because he’s now my boss!) was with Piers Bocock, former Chief of Party of the USAID LEARN contract and now Acute Incite co-founder, on the topic of collaboration. In it, he describes how good collaboration is like improvised jazz music. You can hear the genuine passion in his voice while saxophones play in the background of the segment, adding yet another dimension to it. A blog post or a report could not have captured his tone in the same way or provided the same sensory experience.

What’s your budget? 

The USAID Learning Lab podcast began as a scrappy single-episode experiment with a simple audio recorder in a shared lactation room, and evolved into a heavily edited production in a designated “studio” (an empty office) with entry-level professional equipment.

Money should not be a barrier to starting a podcast – you can do it with a free audio recording app on your phone and free editing software on your computer. It may not sound amazing, but the quick shift to remote work brought on by the coronavirus has lowered the standards for audio quality everywhere, and made disturbances from pets and children a lot more acceptable! If you have some money to invest, a simple $100 USB microphone is a good choice for recording one or two voices. Our most elaborate recording setup cost about $1,000 and included microphones, microphone stands, and an audio mixer. For more information about equipment, I recommend checking out the resources at transom.org.

Time is the more precious resource to consider when creating a podcast. In addition to the time it takes to draft a script, conduct interviews, edit the episode, publish, and promote it, you’ll also need to learn how to use your equipment and software. This was my greatest obstacle – Audacity, the audio editing tool I used, is user-friendly and intuitive but very buggy. Estimate how much time these things might take, and then double it.  

Your level of effort also depends on your chosen podcast format. There’s the straight interview-based format in which you publish exactly what you record with maybe an intro or closing added. At the other end of the spectrum is a highly curated and edited format in which you conduct multiple interviews and then weave together the content by topic or a narrative arc. The CLA at USAID series was a hybrid between these two approaches; each episode included three short interviews on a given topic that were edited for length and clarity. For the Leaders in Leading series, the hosts first interviewed seven people and then listened to each interview to identify pull quotes on key topics. Next, I recorded them discussing the topics and edited in the pull quotes. This was the most time-consuming format but yielded more focused episodes on key topics of interest. The content was at the core, not just the people. Your format should be chosen based on the message you’re communicating and the time you have to dedicate to production. 

How will you spread the word? 

What opportunities do you have to tell potential listeners about your podcast? To give you an idea of reach, the USAID Learning Lab newsletter had over 10,000 subscribers when I managed it, with an average open rate of around 20%. So, around 2,000 people learned about podcast episodes as they were released, and we usually had about 500 listens within the first week. 

In addition to sharing new episodes as they are released, do you have opportunities to weave your podcast content into other conversations? For example, the collaborating, learning and adapting (CLA) community of practice used podcast segments in their monthly dialogues, and episodes were referenced in USAID Learning Lab blog posts and during USAID’s CLA training. Our podcast content was “evergreen” (didn’t refer to current events), so we have been able to use and reference it for years. For example, I recently got this feedback from a listener several years after these series launched: “I wanted to thank you and the LEARN team for the Leaders in Learning and Inside Out podcasts. I even re-listened to a few of them and I'm struck by their staying power/shelf-life in our dynamic contexts.” Given the effort you are undertaking to produce your podcast, consider how long episodes may be relevant as they are discovered and revisited in the future. 

So, should you start that podcast? 

Creating the USAID Learning Lab podcast was one of the most challenging things I’ve done, but unquestionably my proudest and most enjoyable professional accomplishment. One key ingredient to the success and longevity of the podcast was the support of my supervisor, Ian Lathrop, and USAID LEARN project leadership. They took a risk by allowing me to experiment with this medium, and, like many endeavors, it took time for me to even meet my own standards. As I honed the craft over the course of three series, improvements in sound quality and content can be heard from the first episode to the last. If (when!) you feel this way, here’s a pep talk on this very conundrum from the patron saint of podcasts: Ira Glass.

So, should you start that podcast? Try a pilot episode, see how it goes, and enjoy the journey! 

CLA Through Data Management Systems: A Tale of Two Projects

Sep 4, 2020 by Rafael Pilliard-Hellwig
COMMUNITY CONTRIBUTION

At FHI 360, we have been engaged in thinking about how to make our collaborating, learning and adapting (CLA) efforts intentional, systematic, and resourced. Many workshops have hinged on the question: is our CLA as institutionalized as it could be? We’ve seen that if we repeatedly ask “why not,” the topic of enabling conditions invariably surfaces as a contributing factor. Setting up the right precursors to CLA, then, should be central to any institutionalization strategy. Recently, we have been reflecting on the role that data systems play in this progression.

The USAID-funded Ma3an and LENS programs are two FHI 360 projects that garnered visibility for their approach to working with data. Both shared similar trajectories in the growth and management of their data platforms, and both realized first-hand that the process of building these systems, if well executed, could have knock-on benefits for CLA.

First, both projects gravitated toward data systems and tools that were directly modifiable by field teams, allowing these systems to become a site of regular adaptive management. Project staff were able to add fields to data collection forms, create complex data validation rules on the fly, and de-duplicate records on a daily basis. This reduced dependence on home office staff or external contractors for small changes, allowing requests for outside help to focus on more specialized needs. This ability to drive day-to-day field changes to data systems is what allowed the projects to be nimble and adaptive – in short, not just fit-for-purpose, but also fit-for-capacity.

Second, both projects hired personnel with data management skills to manage these platforms, who in turn spurred internal collaboration. Though home office staff and outside vendors helped design pieces of the information ecosystem, it was these local staff who marked a real turning point in adoption, because they championed data tools with colleagues and regularly customized data management tools to meet their needs. For example, USAID-Ma3an staff created their own record de-duplication module within their data entry systems; USAID-LENS staff created a battery of unit tests that ran data validation audits daily. The proximity of those managing the system to those using it meant that internal collaboration was frictionless and frequent.
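To make that concrete, here is a minimal sketch of what a daily validation audit along these lines might look like, assuming records arrive as a daily CSV export loaded into a pandas DataFrame. The field names and rules are hypothetical illustrations, not the actual USAID-LENS or USAID-Ma3an tests:

import pandas as pd

# Hypothetical schema for illustration; the real projects' fields differ.
REQUIRED_FIELDS = ["participant_id", "date_registered", "district"]

def run_daily_audit(records: pd.DataFrame) -> list:
    """Return a list of human-readable validation failures."""
    failures = []

    # Rule 1: participant IDs must be unique (de-duplication check).
    dupes = records[records.duplicated(subset="participant_id", keep=False)]
    if not dupes.empty:
        failures.append(
            f"{dupes['participant_id'].nunique()} participant IDs appear more than once"
        )

    # Rule 2: required fields must be populated.
    for field in REQUIRED_FIELDS:
        n_missing = int(records[field].isna().sum())
        if n_missing:
            failures.append(f"{n_missing} records are missing '{field}'")

    # Rule 3: registration dates cannot fall in the future.
    dates = pd.to_datetime(records["date_registered"], errors="coerce")
    n_future = int((dates > pd.Timestamp.today()).sum())
    if n_future:
        failures.append(f"{n_future} records are dated in the future")

    return failures

if __name__ == "__main__":
    df = pd.read_csv("daily_export.csv")  # hypothetical daily export file
    for problem in run_daily_audit(df):
        print("FAIL:", problem)

Scheduled to run each day (for example, as a cron job), even simple checks like these surface duplicates and data-entry errors while they are still cheap to fix.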

Third, project data systems needed to adapt continuously to evolving requirements in order to address new learning questions emerging from implementation. USAID-LENS’ reporting requirements grew exponentially over time in response to increased data collection, and its systems needed to scale accordingly. Initial quarterly reports included only a handful of data points from Excel files. That changed when the Jordan mission rolled out highly granular data-reporting platforms like DevResults and made geospatial reporting a requirement. At the same time, the leadership team increasingly saw the value of bespoke internal research, and more and more datasets were being generated through the project’s learning agenda. As a result, USAID-LENS pivoted to hosting all of its monitoring data on its knowledge management system. By its final year, USAID-LENS was tracking more than half a million records in its data platform and reporting thousands of data points per quarter on individuals, businesses, financial transactions, and other indicators. Spreadsheet software would not have scaled to handle this volume of data, making subsequent collaboration and learning from the data impossible.

In contrast, USAID-Ma3an right-sized its data systems after a data quality assessment found that its systems exceeded the capacity of field staff and partners. By simplifying elements in the data pipeline and automating others, USAID-Ma3an tailored its system to focus on the essential needs of its users. This iterative fine-tuning allowed the program to respond to new learning about how stakeholders were and were not using the platform.

In short, these successes in enabling CLA can be traced to…

1.      … decisions to hire field staff who would build in-house data systems and tools. This structure naturally stimulated internal collaboration and institutional memory: Because these embedded staff both used and created the platforms and tools, human-centered design was both organic and systematic. These individuals worked with field staff (e.g., M&E and technical teams). Because software development was close to the field, the know-how around the solution was firmly entrenched in the field office and user base.

2.      … a preference for easy-to-use point-and-click solutions that matched local capacity and allowed teams to manage their data adaptively: Data management solutions were often tweaked with minimal use of code. This let users adapt data platforms to evolving needs, while relying on systems that could scale. This ultimately increased ownership over the tools and platforms.

3.      … adaptation of data collection tools to capture microdata, which enabled continuous learning and improvement and M&E for learning: In contrast to data that exists only at some level of aggregation, data that sits at the level of the unit of measure is much richer, being easier to reorganize and re-analyze (see the sketch after this list). This effectively allowed M&E teams to use the data for pattern discovery and short-loop learning cycles. Those insights led to course corrections and early detection of problems. But collection of microdata also comes at a cost: the price paid is greater time digitizing and ingesting information, and a heightened responsibility to manage personal data.

4.      … supportive leadership teams that adequately resourced data systems. This not only translated into budget line items for direct expenses, but also into decisions to hire local staff to manage those platforms. Wherever possible, cost was reduced by relying on tools that were either free or freely available to FHI 360 programs.
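To illustrate the microdata point in item 3 (the sketch referenced above), here is a minimal example using made-up unit-level training records. Because each row sits at the unit of measure, the same data can be re-aggregated along any dimension after the fact, which is exactly what pre-aggregated totals cannot do:

import pandas as pd

# Made-up microdata: one row per training participant.
micro = pd.DataFrame({
    "participant_id": [101, 102, 103, 104, 105],
    "district":  ["Amman", "Amman", "Irbid", "Irbid", "Zarqa"],
    "sex":       ["F", "M", "F", "F", "M"],
    "completed": [True, True, False, True, True],
})

# Unit-level records can answer questions nobody anticipated at collection time...
print(micro.groupby("district")["completed"].mean())             # completion rate by district
print(micro.groupby("sex")["completed"].agg(["count", "mean"]))  # participation and completion by sex

# ...whereas a table that stored only the aggregate ("4 of 5 completed")
# could never be disaggregated by district or sex again.

This flexibility is what enabled the pattern discovery and short-loop learning cycles described above, at the cost of more data entry and a duty of care around personal data.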

These lessons are not new. In fact, many of these ideas reaffirm digital development principles. But what is perhaps novel is the reflection surrounding how this data management approach enabled CLA. Our experience in implementing USAID-Ma3an and USAID-LENS underscores that a human-centered philosophy to information systems enables several CLA elements, including adaptive management, continuous learning and improvement, knowledge management, internal collaboration, and institutional memory.

Filed Under: CLA in Action

Three Insights on Activity Monitoring, Evaluation & Learning Plans

Aug 26, 2020 by Monalisa Salib
COMMUNITY CONTRIBUTION

Monalisa Salib is Social Impact's Sr. Director for Organizational Learning, USAID/Learns

Under the USAID/Vietnam Learns contract, Social Impact recently updated its Activity Monitoring, Evaluation & Learning Plan (AMELP). This exercise reinforced insights on AMELPs that we have long promoted with USAID and implementing partners (IPs), and that can be applied to a variety of Activities.

1. Think of the AMELP as a critical management tool, not a Monitoring & Evaluation tool.

While the AMELP is indeed a MEL tool (it is in the name, after all), it is more accurate to consider the AMELP a management tool akin to, or at the same level of importance as, the work plan.

This is critical because the work plan often lists interventions without reference to the bigger picture. Without an AMELP, the implementer and USAID do not articulate their higher-level results or how they will know if they have achieved them. By framing and using an AMELP as a management tool, teams ensure they are focused on achieving results and avoid having AMELP development relegated only to MEL staff.

2. There are certain questions we should all be asking ourselves as we implement programs.

In the Learns AMELP, we designed our MEL activities based on key learning questions. Here is our list of learning questions that are relevant to every AMELP and why they are so important:

Progress towards results: Is the Activity achieving intended results and outcomes? Why or why not?
Why this is important: Because the Activity is designed to create some sort of change, AMELPs need to clearly articulate how partners will track whether change is occurring. Knowing why (or why not) the change is occurring helps us make adjustments or replicate success.

Negative consequences: Have there been any unintended negative consequences because of the Activity?
Why this is important: Humanitarian assistance professionals are well-versed in the principle of “Do No Harm.” However, this can be less deliberate in development programs, despite the possibility of negative consequences. We need to be intentional in finding out whether we are doing harm so we can adapt if necessary.

Context shifts: How are shifts in context affecting the Activity's ability to achieve results or creating new opportunities for impact?
Why this is important: Though many implementers are regularly monitoring context, they rarely are intentional in planning for it. While everyone has now been forced to monitor context due to COVID-19, there have always been other shifts that affect our ability to achieve results.[1]

Feedback from end users: What feedback do program participants and end users have on our performance?
Why this is important: Our programs are meant to impact other people’s lives, but it is relatively common to review an AMELP that has no approach for receiving feedback from end users. Implementers often get this information informally, but we should aim to be purposeful and have mechanisms to both capture this information and act on it.[2]

3. Focus your remaining learning priorities on your theory of change, particularly areas where you are less confident.

Given often limited time and resources, IPs should focus their time and attention on key questions stemming from their theories of change (ToCs) and their relative confidence levels in the causal links within that ToC. For example: if you do x, how confident are you it will work and get you to y? Areas with lower confidence levels are rich for learning. You do not want to wait until a final evaluation to reveal an approach does not work.

We know many partners are currently revisiting their work plans (and hopefully AMELPs) with the start of a new fiscal year. I hope these insights help you, and I welcome insights from others on your experience using AMELPs.



[1] For more on context monitoring, see the USAID Monitoring Toolkit.

[2] As a further resource, Feedback Labs does excellent work on creating and using feedback loops.

 

This post originally appeared on the Social Impact blog here.

A New Normal: Adapting our Approach to MERLA during COVID-19

Aug 12, 2020 by Molly Chen and Amal Mohammad
COMMUNITY CONTRIBUTION

If you’d asked what we thought would be a “worst case scenario” for our work in monitoring, evaluation, research, learning & adapting (MERLA) before March of this year, we might have answered “data quality checks are not going well” or “it’s been tough to get a Learning Session scheduled for the whole team.” In those scenarios, however, the basic assumptions underpinning our work still hold, and we can carry on.

COVID-19 has provided a unique opportunity for our MERLA team at RTI International to quickly learn and adapt as we work through the short- and long-term challenges of implementing projects at a time like this. Data for decision making, and a pretty level head, are critical when everything is uncharted territory. In any organization during “normal” times, knowledge management (KM) can pose a huge challenge, and different initiatives to improve processes can only get us so far. We need people to earnestly participate and provide the information that oils our KM machines. As we all puzzle through how to learn and adapt to the new challenges we face, it’s important that we create opportunities to learn from one another and that we learn to ask for help.

Our MERLA Community of Practice, a group of more than 70 staff at RTI, has worked to bring staff together to learn from one another, problem-solve, and fill gaps in our knowledge over the last few months. In addition to our external monthly Learning and Adapting during COVID-19 webinar series, where we’ve invited partners and collaborators from various organizations (USAID, Harvard University, UNICEF, World Bank, and others) to share their lessons learned during the pandemic, we’ve been organizing internal avenues for knowledge sharing across our organization. We recently hosted a “MERLA during COVID-19 Forum,” where projects from across our different technical areas presented what they’re seeing on the ground and how they are learning and adapting their approaches to monitoring, evaluation and learning (MEL) and KM during this challenging time. 

 

Here are several key lessons learned from our projects as they adapted their MERLA methods during COVID-19: 

 

1.       Adapt questionnaires for phone surveys and use tools and applications available to respondents. Instead of attempting a direct conversion of a traditional face-to-face questionnaire, customize the survey for phone delivery using SMS, CATI, IVR, WhatsApp, Zoom, or other applications that are available to respondents. This could include shortening the overall survey time to account for noise or distractions, providing an incentive like phone credit, and establishing trust by familiarizing respondents with the process before the actual survey call. In a recent webinar, RTI hosted a panel of experts working on phone surveys to collect data for COVID-19 in low- and middle-income countries.

2.       Coordinate and collaborate with MEL staff and focal points in government institutions. Coordinating and collaborating among MEL staff, technical staff, and government focal points at the central and regional levels is necessary to effectively collect, centralize, and quality check data from different institutions. For instance, when respondents were unreachable or did not respond on time, the USAID Wildlife Asia activity tapped into their network of local partners and technical experts to reach out to participants.

3.       Adapt trainings, data collection tools, and applications to the local context. Planning for the unique circumstances of phone interviews can create a better experience for enumerators and respondents. For example, the USAID Tusome Pamoja activity in Tanzania provided enumerators with clear guidelines and tips to help them adapt to remote sampling, interviews, and data collection. Some of the key success factors included developing flexible data collection protocols, setting up WhatsApp groups among enumerators so they could learn from each other, collaborating and adapting in real time when faced with difficulties, and preparing replacement respondents in case of connectivity issues.

4.       Flexibility and follow-up are critical to sustaining relationships. Developing protocols and instruments compatible with the remote setup is essential, while flexibility and follow-up contribute to later success. For example, the USAID/Governance for Local Development (GOLD) activity in Senegal has reported that using video conferences to train partners to submit their data through an online data collection system has proven useful in maintaining data flow. The team created a WhatsApp group to communicate with partners, is following up via telephone calls for feedback and data quality reviews, and is collaborating with other programs to share data and lessons learned.

As we continue to share knowledge within RTI and with the development community, we’ve developed a few general observations. First, learning from each other’s experience, good practices, and lessons is critically important, especially in an extremely vulnerable and stressful time. The MERLA during COVID-19 Forum provided an important opportunity for project teams to ask each other questions about methodologies, tools, and technologies that have been working, about how to problem-solve tough decisions while ensuring reliable data is collected on time, and most importantly, to ask “how are you adapting?” Second, there’s no one-size-fits-all solution for adapting to this new environment. Each country, project, and person has unique circumstances that are important to acknowledge and understand while we all navigate this new terrain. Third, be open to new ideas, because if there was ever a time to test what we thought we knew, the time is now. Let’s embrace that some of the best ideas come when we’re tested. And last but certainly not least, let’s share with each other, across our teams, organizations, and this international community, what works and what doesn’t, because it’s a long road ahead and, remember, we’re all in this together.

 
