Are Agency-Level Global Indicators Enlightening or Constricting?

Feb 24, 2021 by Alysson Akiko Oakley
COMMUNITY CONTRIBUTION

There is an endless tussle between the divergent needs of data users in the international development sector. Program stakeholders need tailored measurement systems that provide concrete evidence for making adaptive management decisions and assessing progress against the program’s criteria for success. These criteria are often highly contextualized to ensure the program is implemented in a manner that contributes to sustainability and ongoing relevance for communities.

On the other hand, the larger agency within which the program sits – be it a funder or an implementation organization or a larger consortium of such organizations – often needs data that are aggregable across multiple geographies, demographics, program goals, and even technical sectors. Such data respond to a very different set of criteria, one that combines common denominators with top-down goals to produce a sense of an agency’s reach, impact, and effectiveness.


Snapshot of Pact’s publicly accessible Global Indicator dashboard

This past January marked the tenth year that Pact has reported such a suite of global indicators, measuring more than 10 million data points, each of which in turn must meet its own set of criteria to qualify. For example, a single datum from our CSO performance improvement indicator requires a scorecard measurement across four domains, eight subdomains, and multiple indicators per subdomain, taken at two moments in time. These data are aggregated across all of Pact’s projects in seven sectors – health, governance, mining, natural resources and more – and are publicly available on our data dashboard. Like other organizations, Pact uses customized indicators to track and assess our global reach and the results of our work. These indicators also convey in a transparent manner what we care about and how we hold ourselves accountable. Standard foreign assistance indicators (“F-indicators”) used by various U.S. federal agencies serve a similar function.
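
To make those qualification criteria concrete, here is a minimal sketch of how a single scorecard datum might be checked before it counts toward a global total. The domain and subdomain names, the 0–4 scale, and the "mean improvement" rule are all hypothetical illustrations for this post, not Pact's actual CSO scorecard.

```python
from dataclasses import dataclass

# Hypothetical scorecard shape: four domains with two subdomains each
# (eight subdomains in total). All names and the 0-4 scale are
# illustrative assumptions, not Pact's actual instrument.
SCORECARD_SHAPE = {
    "governance": ["leadership", "accountability"],
    "operations": ["planning", "administration"],
    "financial_management": ["budgeting", "reporting"],
    "service_delivery": ["outreach", "quality"],
}

@dataclass
class IndicatorScore:
    domain: str
    subdomain: str
    indicator: str
    baseline: float  # score at the first measurement moment (0-4)
    followup: float  # score at the second measurement moment (0-4)

def qualifies_as_improvement(scores: list[IndicatorScore]) -> bool:
    """Assumed qualification rule: the datum counts only if every
    domain/subdomain pair was measured at both moments in time and
    the mean follow-up score exceeds the mean baseline score."""
    covered = {(s.domain, s.subdomain) for s in scores}
    expected = {(d, sub) for d, subs in SCORECARD_SHAPE.items() for sub in subs}
    if covered != expected:
        return False  # incomplete scorecard: the datum does not qualify
    baseline_mean = sum(s.baseline for s in scores) / len(scores)
    followup_mean = sum(s.followup for s in scores) / len(scores)
    return followup_mean > baseline_mean
```

Even this toy version shows why a single qualifying datum is costly: a full, two-moment scorecard must exist before anything can be rolled up into an agency-level count.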

There are opportunity costs to these data. In a previous post, I highlighted the practical journey that a datum (a single piece of data) takes from conceptualization to collection, verification, submission, and finally utilization, to emphasize both its value and its cost. There are also limitations to these data. By design, such data lose context and are therefore less useful to the specific program stakeholders charged with collecting them in the first place. Looking at historical trends is also less helpful, as the data tend to mirror the project lifecycle or the priorities of external funders. Similarly, such data are at times difficult to use for strategic decisions if an organization is accountable to multiple funders that in turn have their own priorities and measurement systems. And, of course, because meaningful outcomes are system-specific, it can be limiting to use universal criteria that may not be relevant to the context at hand.

Snapshot of this year’s global indicator report

For these reasons, most organizations do not have global indicators. Those that do have tackled the challenge in diverse ways. Last year, Pact led a panel discussion at the American Evaluation Association conference, together with FHI 360, IREX, and Save the Children, to discuss the various ways we have developed and used global indicator systems. An important lesson was how each organization built a system that met its own unique strategic needs; each system differed significantly from the others.

Pact’s system was built to measure our global results and to articulate what we care most about. Just as any project indicator operationalizes a project’s goal in concrete terms, Pact’s agency-level indicators articulate what success is to the organization. What does it mean to advance global health outcomes if you cannot articulate what a successful global outcome would be? Knowing this tells you not only what an organization focuses on, but where it has grounded its technical expertise. For example, Pact’s approach includes outcome-level indicators that measure changes in systems, not just basic reach.

In addition, while strategic decision-making can be limited if the indicators are subject to funding trends and the project lifecycle, by determining what indicators to use and not use, an organization is publicizing what it stands for and works toward. Lastly, by sharing the data publicly as Pact has done with its data dashboard, an organization can contribute to data transparency.

As an organization’s goals evolve over time, so too should the system that measures progress towards those goals. With 10 years of experience, Pact is now embarking on an indicator revision process to expand and refine our suite, focusing more on the outcomes we want to achieve over the next decade as well as measuring our adherence to the principles that guide us as an organization. We are working to do so in a highly participatory way to ensure individual program stakeholders can utilize the data, and to align the revision with an agency-level strategic review to ensure that the data contribute to strategic decision-making.

If there is one lesson that Pact has learned, it is that making data actionable requires strategic intention to avoid the limitations and opportunity costs that otherwise can constrain data utility. For Pact, agency-level global indicators enable us to be intentional about what we hope to achieve and the manner in which we achieve it.

What has been your experience in designing and implementing an agency-level measurement system? How have you made it actionable for various levels of organizational needs?

 

Filed Under: CLA in Action


Rhetoric or Redundant? Making the Most of Adaptive Management

Jan 25, 2021 by Alysson Akiko Oakley
COMMUNITY CONTRIBUTION


“Adaptive management” has become one of those pieces of jargon dropped into international development program plans and proposals as an all-encompassing, sometimes throw-away term with little practical meaning. Yet we continue to use it. Why? Because while it often conceals more than it illuminates, it also represents something essential: the idea that a project must be managed in a way that enables ethical adaptation as the problem, context, or project needs shift.

This concept has become a self-evident truth, such that the “adaptive” in adaptive management borders on redundancy. That it has become self-evident, however, says a lot about how much our thinking has advanced over the years – from measuring success by strict adherence to workplans, tasks, and bean-counting to a gradual embrace of uncertainty and systems complexity. Yet the practical application of managing adaptively remains varied and nebulous. To discuss different models, Pact convened a panel of leading practitioners and thinkers on applying adaptive management effectively.

An overview of Pact’s “RRR” adaptive management process

The panel featured Emily Janoch (Deputy Director for Research, Innovation, Evaluation, and Learning for the CARE USA Food and Nutrition Security team), David Jacobstein (Democracy Specialist in the Cross-Sectoral Programs Division of the Democracy, Human Rights, and Governance Center, USAID), and Laura Zambrano (Pact; USAID/Colombia’s Chief of Party for the Migrant Human Rights Activity). The panelists provided concrete tips for designing adaptive projects from the outset, skills to prioritize when building a team, and ways to determine what is “enough” adaptation. Among other tips, Emily noted the need to build in budget flexibility at the outset in anticipation of necessary pivots, as well as for critical adaptive management activities such as learning events, reflection, and convening. Laura emphasized the need to articulate the role of adaptive management in the project from day one – including hiring staff able to adapt nimbly – to ensure that all stakeholders are fully bought in well in advance of a need to pivot. She noted that a lengthy inception phase and rapid response funds were particularly helpful in the project she leads.

David explained that the criterion of success for “good enough” adaptive management is whether the project achieved its desired impact, not how many times it pivoted successfully. A good project will demonstrate resilience to internal and external shocks, and it is therefore more important to ask why a project did not change in response to those shocks than whether or how it changed.

The speakers reinforced that projects must now be designed with deliberate acknowledgement of and reference to uncertainty and systems complexity. By that logic, so should the project’s attendant adaptive management systems.

For this reason, in 2020, Pact completed a comprehensive review of existing guidance and toolkits on adaptive management. This included examining not only guidance aimed at international development practitioners, but materials developed in other sectors and industries. Based on this review, we identified the need for practical guidance that helps international development practitioners tailor their management approaches to the degree of uncertainty or complexity facing their project, with the aim of better meeting the needs of Pact’s projects and the communities we serve.

Snowden’s 2007 Cynefin framework

Our approach, detailed in Pact’s new handbook, differentiates a program’s degree of complexity using the Cynefin framework, originally articulated by David Snowden in 1999 and published in 2007 with Mary Boone. The Cynefin framework distinguishes five sense-making contexts: simple, complicated, complex, chaotic, and disorder. Pact’s premise is simple: we must first understand a project’s degree of complexity in order to determine the adaptive management system best suited to managing uncertainty and achieving project goals. Importantly, the “adaptive management system” must be holistic, encompassing hiring decisions (i.e., hiring the right people), the selected monitoring framework, resource allocations, and more.

How do we determine a program’s degree of complexity? Building on lessons drawn from program evaluation, and in particular the contributions of Patricia Rogers and Sue Funnell, we developed a straightforward program complexity determination questionnaire that differentiates complexity across three program theory domains: theory of context, theory of change, and theory of action. These three domains may have different degrees of complexity, and their unique combination suggests an overall strategy for the program.
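
As a rough illustration of how such a questionnaire might roll up its answers, the sketch below rates each of the three program theory domains on the Cynefin contexts and lets the most complex domain drive the overall strategy. The ordering of the contexts, the omission of "disorder" (not knowing which context applies), the strategy wording, and the "take the most complex domain" rule are all assumptions made for illustration, not the handbook's actual scoring.

```python
from enum import IntEnum

class Context(IntEnum):
    """Cynefin sense-making contexts, ordered here (an assumption) by
    increasing difficulty of cause-and-effect reasoning. The fifth
    context, disorder, is omitted from this sketch."""
    SIMPLE = 1
    COMPLICATED = 2
    COMPLEX = 3
    CHAOTIC = 4

# Hypothetical guidance keyed to the dominant context; the wording is
# illustrative, not quoted from Pact's handbook.
STRATEGY = {
    Context.SIMPLE:      "Light-touch monitoring; follow established good practice.",
    Context.COMPLICATED: "Expert analysis up front; adapt at planned review points.",
    Context.COMPLEX:     "Probe-sense-respond: frequent reflection and rapid pivots.",
    Context.CHAOTIC:     "Act first to stabilize, then reassess the whole design.",
}

def overall_strategy(theory_of_context: Context,
                     theory_of_change: Context,
                     theory_of_action: Context) -> str:
    """Rate each program theory domain separately, then let the most
    complex domain drive the overall adaptive management strategy."""
    dominant = max(theory_of_context, theory_of_change, theory_of_action)
    return STRATEGY[dominant]

# Example: a well-understood context, but an untested theory of change.
print(overall_strategy(Context.SIMPLE, Context.COMPLEX, Context.COMPLICATED))
```

The design point is that one rating per project is too coarse: a project can sit in a simple operating context yet rest on a complex theory of change, and the management system should be sized to the hardest of the three.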

Pact’s adaptive management framework, which merges the Cynefin framework and program theory

We designed the handbook to provide practical guidance for practitioners – for example, simple recommendations on how to structure processes and resources based on the level of complexity:

 

A snapshot of Pact’s adaptive management system guidance, based on degrees of complexity

We also provided a set of concrete tools and “how-to” guidance to help practitioners conduct learning reviews, hire for adaptive management-friendly mindsets, and track management decisions. A full list of tools is provided here:

1. Adaptive Management Intensity Self-Questionnaire
2. Staff Roles and Responsibilities in Adaptive Management
3. Scenario Planning Decision Matrix and Template
4. Guide to Context Indicators for Adaptive Management
5. Reflection Meeting Template
6. Learning and Reflection Meeting Agenda Template
7. Guide to Preparing for Learning Reviews
8. Decision Tracker for Adaptive Management
This is a living document, and we hope to adapt it as we learn from our projects and peers. We invite you to share your feedback and experiences!

(with thanks to Mason Ingram, Kate Byom, and Lauren Serpe)

Filed Under: CLA in Action

Balancing crisis response with sustainable impact: Seven questions to ensure learning, ethics and accountability amid Covid-19 and beyond

Jun 5, 2020 by Alysson Akiko Oakley, Director, Results and Measurement, Pact
COMMUNITY CONTRIBUTION

Crises bring many things to light: how prepared you are to respond and adapt quickly, how you apply lessons from past experiences while adopting new practices, how you manage uncertainty, and more. All of this is a testament to our existing practices and attendant systems.

How we respond also sets us on a path toward our future systems. This is especially true in development work, where we implement large, complex projects hand-in-hand with the people and communities we serve. These communities are part of much larger, highly complex living systems. Development projects have ripple effects throughout these systems well beyond our immediate goals, which is why interventions are planned with great care. For example, a straightforward literacy program can affect community engagement practices or domestic violence levels in positive or negative ways depending on how the project is implemented. Part of the Do No Harm promise of our sector is to be accountable for weighing and mitigating potential harm. This is why ethics are a significant part of our work, and why we are so concerned with sustainability.

In our rush to address the “here and now” of the Covid-19 crisis, we will certainly create ripple effects beyond our imagining, such as unintentionally institutionalizing undesirable processes. Beyond our new activities, we should also consider the ripple effects that will no longer happen as we repurpose existing programs to meet emerging needs. More than ever, this is the time to take the long view. Even in this crisis, we need to think beyond the short term and be accountable for the lasting impacts of our actions.

One way is to regularly evaluate what we are doing, how we are doing it, and what it is resulting in – both to be accountable and to learn and improve – even when it is difficult to make time for this. “Evaluate now, before it’s too late,” renowned evaluator Michael Quinn Patton recently warned. Developmental evaluation and other designs that enable near real-time learning and evidence-based adaptation are essential for programs wishing to navigate this complexity.

While evaluation practitioners can play a leading role, everyone in the development sector must shoulder this responsibility. Pact, for example, has engaged in learning across our projects and globally to ensure we are accountable to all those we serve, including intentional processes to adapt and improve as part of our Covid-19 response.

Here are seven questions, built from the world of evaluation and the principles that guide evaluators, that we should all be asking ourselves as we respond to Covid-19. Tackling these questions will help ensure that we learn from our actions and that our inevitable ripple effects minimize harm and serve communities first.

Pact staff and community members in Tanzania take part in a workshop before the Covid-19 pandemic to gather ideas for improving Pact’s programming

What negative impacts might arise from our interventions? Oftentimes, far from a sprint, a crisis can turn into a marathon: Refugee crises can become multigenerational settlements, for example. Our actions will affect how that marathon plays out, so we must consider all outcomes. For example, might we create new problems by disrupting local markets or spark tensions among groups based on identity in our rush to help as quickly as possible? To help determine whether the good outweighs the potential bad, a useful technique is to build on past learnings and engage in negative program theory mapping exercises, a conceptual method that maps out pathways and causes of the undesirable rather than the desirable. Often we find that many undesirable results could have been avoided with even a simple exercise.

Are we working to break the “extractive data syndrome”? As a result of Covid-19, we have greatly expanded our digital solutions in projects that had heretofore relied on face-to-face interactions. This has greatly increased the quantity of data we collect and store on individuals, and with it the chances of unintentional data violations. We must ensure that our processes and systems are built to enable and protect; that we are doing all we can to minimize the data burden and to weigh the potential costs against the actual benefit for those we serve; and that we share findings back with those who generously shared their data – their metrics, their perspectives, their stories – with us. Providing tailored access to databases is low-hanging fruit, and involving those we serve in project designs and decisions – including decisions over data needs and data utilization – will ensure that data are used by those who provided them.

Are we accommodating potential changes to an individual’s consent? While we have long recognized that informed consent is not a check-the-box activity of a single moment in time, we should broaden our understanding and treat consent as a lifelong relationship. Acknowledging that today’s consent is a function of multiple interacting parts – existing context, personal interest, the structures of rules and opportunities that constrain and enable us, the ideas we hold about roles and rights – is also acknowledging that our interactions with affected populations may change over time, and what was once “informed” or “consent” could be neither in the future. We should ensure that we consider whether any data collected – numbers, opinions, photos, videos – are strictly necessary, make use of secondary data whenever possible, and assess the implications a future change in consent would have for the data we prioritize today.

Are we working to reach individuals and communities equitably? The costs of Covid-19 are not falling on everyone equally. For example, as we have significantly increased our dependence on remote technologies, many cannot be reached because of limited connectivity, resources, language barriers, literacy and more, creating a “digital divide.” This tends to affect those already underserved or marginalized the most. Rapid participatory assessments can be used in a timely manner to design remote systems or non-tech solutions that work to reach all equitably, rather than relying on preexisting assumptions.

Are we intentionally working to promote empowerment at a time when so many are feeling and experiencing disempowerment? As executive governing powers have expanded globally to institute emergency health measures, many are now living under greater restrictions in all aspects of their lives. This has led to increased challenges, from an uptick in domestic violence to limits on freedom of expression. We can and should leverage “empowerment” and “transformation” evaluation designs that serve equally as interventions themselves, build evaluative thinking, and put the articulation of criteria of success in the hands of those we serve, thereby democratizing our work.

Are we protecting the right to refuse treatment or service? An important component of informed consent is the right to decline participation or refuse treatment. In a moment of crisis, when an individual faces bodily harm, emergency civil restrictions or the fear of being denied access to other critical services, can this right truly be exercised? How does this affect an individual’s sense of personal control and therefore empowerment? On the flip side, what about the right of development practitioners to exercise that same choice? How do we balance the risk of providing services with the risks to beneficiaries if we do not serve them? These decisions affect how communities perceive our role, as well as our own expectations of it. Stakeholder feedback mechanisms, built into regular monitoring systems, can be extremely helpful in assessing changes in participant views and needs and should become integral components of Covid-19 responses.

Which of our Covid-19 adaptations should be continued, halted, or modified in an uncertain future when we are back to a “new normal”? While some projects were remote from the start, many depend on face-to-face interaction to achieve desired outcomes. We need to intentionally collect evidence of whether our adaptations are successful and of what should be continued into a “new normal” or halted altogether. Just because we can do something does not mean we should. The wrong lesson is that because we have been able to rely on remote methodologies during Covid-19, we should continue to do so when it is no longer strictly necessary. Much of development work depends on relationships. Relationships may be maintainable digitally, but they cannot always be catalyzed digitally. We must determine what is working better than in the past to ensure our systems are resilient and capable of better future responses and local ownership. We must learn, today, before it is too late.

Filed Under: Learning in Action