What is the Work?
In promoting international development, what is the actual work we do?
How is that work understood?
Do our words truly reflect the work we want to see?
Could a shift in this language help us to achieve meaningful changes in our work?
Recently, at the Global Partnership for Social Accountability’s annual forum, I was part of a unique discussion about a gap practitioners see between the work they do and the way their work is understood. Specifically, the organizations advancing this work in developing countries tended to see their role as fostering trust and social capital, and using this capital to build collaborative approaches to improving the public delivery of goods and services. To be sure, they often pressed for greater follow-through on public commitments, but this idea of “helping government deliver by building connections with constituents” is very different from the logic embedded in the original idea of social accountability, which is typically seen as a way for donors to help people hold government to account. These organizations also viewed their work in communities as part of a longer-term effort connected with changes at the national level, rather than as purely community engagement. A more accurate label might be “collaborative engagement for delivery” (an idea that resonates with this presentation by Rick O’Sullivan).
However, this is not the rationale against which many donors have funded social accountability through the years. Donors have in large measure funded social accountability organizations to “hold government to account” for service delivery while also working on service availability and quality. They see these organizations as ensuring that service providers behave in the ways that donor plans and investments expect, and punishing them if they don’t. That gap is now creating a challenge for social accountability organizations, as some robust empirical research finds that social accountability programming can make a critical difference, while other studies find null results. As discussed elsewhere, what the research finds seems to depend on whether it builds from the strategies actually taken by change agents, or assumes the “catch and sanction malfeasance” logic that often sits behind donor funding.
This led to a soul-searching discussion over how much to confront donors with the reality of what the work actually is, versus continuing to accept the terms on which organizations receive funding. It included the question of whether donors are ready to move on to investing in research that builds theory around the strategies of local change agents, rather than testing theory derived in journals in the global north.
The discussion made me think about how important the language we use to describe the work of development can be to how we design and manage development programming. In particular, for complex social change efforts, it seems to me that better ways of describing work can link with better efforts at measurement and improved performance.
Why is this? Because the language we use to describe work becomes internalized into the logic and measurement of a program, even when we all understand that other considerations exist (an old idea, it turns out).
For example, for many years, donors working to address recurrent crises acknowledged the inevitable return of certain disasters and the need to strengthen the ability of communities to cope outside the confines of a disaster-response paradigm. However, I would argue that it was really only after the donor community started to define “resilience” as an outcome of interest that we could change our collective behavior. Suddenly, we weren’t just acknowledging the need to respond to repeated shocks; we could put programming against an idea (and slowly figure out how to measure the right results) to achieve that outcome. The shift from a consideration within disaster programming to an area in its own right came first in language, and spread as the concept came to anchor programming, including reporting and monitoring. A similar trend played out in the evolution from agricultural extension to value chains to market systems, where each new framework empowered different types of programming to be undertaken and measured. And a similar challenge confronts social accountability now.
Re-reading Dan Honig’s seminal book Navigation by Judgment on a long flight recently, I was struck by a second way in which the language defining our work matters. For those not familiar with it, Dr. Honig’s research shows that for certain types of development challenges (generally, those that involve interaction with complex social change - so most of them) it is more effective to navigate by judgment than by top-down accountability to predetermined metrics. In other words, delegate decisions to frontline agents close to the action, who have the best information and tacit knowledge to make course corrections. Here again, I think the language we use to define our work (for example, in accountability) and the intermediate outcomes we aim to reach can play a huge role in determining whether judgment is built into achieving those results.
For example, if you are seeking to improve health service quality, the work requires someone close to the community in question who can make adjustments. This is because an idea like health service quality depends on the perceptions of citizens, and cannot be assured by delivering objectively countable items such as particular drugs or kits. Defining the work this way pushes learning and adaptation down to the local level, rather than learning above the project and simply directing it to do what is required. A similar evolution holds within the education sector, where an emphasis on the countable (teacher and student presence) has given way to an emphasis on outcomes requiring more judgment to reach (learning performance), as described by Lant Pritchett in Schooling Ain’t Learning.
If you accept my argument that the language we use to describe our work matters, because it bakes in both our reliance on judgment and tacit knowledge and our ability to program against the “correct” intermediate results, what are the implications? There are at least three that I think are important for development work across a number of sectors. The first two follow here; the third, which I think offers the greatest opportunity, gets its own discussion below:
- In terms of theories of change and how we learn from evaluations and research, we need to shift the mindset from theory testing to theory building. This places a very high value on rigorous empirical work, including experiments, but only work oriented toward discovering the mechanisms and variables that matter on the path from inputs to impact - in particular, finding those intermediate outcomes that can become the next “resilience” or “health quality,” and what might go into them. It shifts the key question in an impact or performance evaluation from “did an intervention work?” to “why did the intervention sometimes work better?” Part of the process is also to get beyond generic appeals to adaptability or flexibility and toward articulating the right outcomes, so that our adaptive implementation is well grounded. These can be realist evaluations or parts of RCTs; the distinguishing factor is the framing behind the learning rather than the technique. Some donors are taking up the torch with an emphasis on middle-range theory building. However, more can be done.
- Our ability to define and assess meaningful intermediate outcomes is essential. In the education example above, the availability of good cross-country data on learning outcomes has made it possible to anchor work directly to learning, rather than substituting access to education as the outcome of interest and attaching a variety of caveats. In my own democracy, rights, and governance sector, I see huge potential in ways to measure different forms of social capital, or the strength of certain norms, related to bedrock impacts like state legitimacy or political competition. Theory building (rather than testing) requires sharing and discussing the intermediate outcomes and data sources that anchor programming and allow it to improve, and it requires us to seek to understand our work differently. Investment in the measurement of intermediate outcomes seems to be a valuable public good that donors can help to generate. This will then open space for new language and new approaches toward our intended impacts.
The third idea, and the one I think offers the greatest opportunity for improving our language, is to better incorporate the idea and language of probability into our description of “what is the work.”
For a lot of our programming, we are trying to position important reforms to have a greater chance of succeeding - not only on paper, but in reality. This can range from changes in public financial management rules, to improved protection of certain key rights, to ending fertilizer subsidies, to task shifting for health care. Yet all too often, our desire to present certainty pushes us to define these areas of work as simple steps, focusing on the visible progress of new laws or policies, or on specific numbers of people trained or engaged, with theories of change built to fit. The result is a focus on form, not function.
When we're trying to do big things, we have to accept that even the BEST intervention might not work, or at least might not achieve major impacts in a short time horizon. For example, failing to quickly make a justice system more inclusive does NOT mean the money wasn't well invested, particularly if we have positioned actors or seeded ideas in ways that make inclusive justice more likely in the five years after our project ends. The idea that we should gauge progress the same way in building roads and building justice systems makes no sense, but without a language to capture the difference, we have no way to act on it.
I believe we would open huge space for our staff and partners if we started to change the language of these programs from a stepwise progression of linear change (pass law/pass implementing legislation/adopt policy/train workers/implement) to probabilistic results such as “make it 20 percent more likely that a budget will be shared publicly” or “have 25 percent more initial TB screening delivered by community health workers rather than hospitals.” This change would allow us to focus on discovering how to make those outcomes happen, rather than always pursuing the easy first steps on paper.
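To see what this framing buys us, here is a minimal illustrative sketch of the arithmetic; the figures are hypothetical, invented for this example rather than drawn from any actual program, and I read “20 percent more likely” as a 20-percentage-point increase:

\[
P(\text{budget shared publicly}) : 0.15 \;\longrightarrow\; 0.35
\]
\[
\text{Expected reforms across 10 such programs} : 10 \times 0.15 = 1.5 \;\longrightarrow\; 10 \times 0.35 = 3.5
\]

Framed this way, the 20-point shift is itself a reportable result, and across a portfolio those shifts aggregate into a defensible claim of progress even in a year when no single reform crosses the finish line.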
Perhaps a program to improve access to justice would start to operate in the realm of promoting certain norms, rather than through direct training or outreach. Perhaps a program to address natural resource management would focus on intermediate markets and the incentives they create, more than on direct community engagement.
Our programming space would be less limited by sub-sector and more defined by local knowledge of the place we’re working in. It would also transform our accountability for our portfolios from counts of outputs to defensible claims of progress, with more emphasis on our attention to context and the integrity of our engagement and learning, consistent with our enterprise risk statement and with systems thinking. Assessing the probability of transformational change, rather than tracking specific steps of incremental change, offers a language more appropriate to the realities of ambitious programming.
Next time you sit down to write a project or activity description, take a moment to think about how you are describing the work to be done. Keep your topline objectives the same, but see if you can change the language you use to describe the work to get there - build in a different intermediate outcome, or use a change in the probability of the larger impact happening. The more honest and clear the designers of projects and scopes of work can be about “what is the work,” the better we collectively can do that work.