Using Evidence and Understanding Complexity: Three Questions to Guide our Work
Matthew Baker is MERL Specialist for Organizational Learning & Research on the USAID LEARN contract.
If you ask people what’s in vogue in development, chances are they will—at some point—mention two trends:
- Using or applying evidence in our work
- Understanding the complexity of the places we work in
If we recognize that the world is an incredibly complex place, what does that say about the evidence we need in order to understand it? That evidence is likely complex, too. The challenge is compounded by the fact that development, unlike science, must not only describe the world but also tell us what we can do to improve it. How do we square the need to use evidence with the fact that the evidence we have is never complete? Three questions can help us reconcile these two concepts so that we can achieve better development results.
Whose evidence matters?
Chronic diseases such as heart disease, type 2 diabetes, and obesity are difficult to manage or prevent in part because they are difficult to understand and treat. As others have noted, one of the hallmarks of complex adaptive systems in the human sphere is a lack of consensus on what the problem consists of, let alone the solution. In recognition that medical challenges often extend beyond the walls of the hospital, the concept of patient-centered care has risen in prominence. This approach emphasizes something we likely already recognize: the importance of involving the people you are seeking to support and assist. In a similar vein, we need to consider evidence drawn from the 'lived' experience of those who have worked in relevant places and on relevant topics; this tacit knowledge is too often overlooked. It is already reflected in USAID's Collaborating, Learning and Adapting Framework, which includes consulting the technical evidence base as a component of learning.
Photo Credit: Benoit Almeras, Handicap International.
If we were to borrow from the patient-centered care model in medicine and human-centered design in technology and product design, we would identify the evidence our beneficiaries care most about, align it with our own priorities, and prioritize our work accordingly. Many successful programs are doing this already, but there is still room to do it across the board, both intentionally and systematically. We have plenty of examples of failed development projects, such as the Thaba-Tseka project in Lesotho, in which an outside-initiated effort to improve outcomes (in that case, economic welfare) failed to adequately understand or take seriously the way the local system worked, undermining its own stated goals.
Is the evidence about what to do or how to do it?
In the philosophy of science, there's a concept called the underdetermination of scientific theory, which basically says that the evidence available to us at any particular time might not be enough to determine what beliefs we should hold in response to it. Those who work in development have likely faced this problem many times over: despite the evidence, it is not clear what should be done. Choices between alternatives are often better made based on the fit between them, the interests and beliefs of those affected, and the goals of a particular program. In development, consulting and engaging with relevant stakeholders is recognized as one of the rules of the trade. And we often need to make judgments, not just decisions, in our work. Judgments are rooted in values as well as facts, and not just our values, but those of our partners and beneficiaries. As the politically informed programming movement and the increasing use of political economy analysis make clear, decisions about what to do often fall between the realms of interests and evidence. Even so, we need to ensure that evidence is used to help us figure out how to do it.
In addition to existing evidence, what theories could help me understand this situation?
As Kurt Lewin said, "there is nothing more practical than a good theory." Ideas and theories are as powerful in development as in any other field. The breakthroughs of tomorrow are often rooted more in (sometimes imaginative) theories than in evidence alone. Theories are powerful not just for explaining the world but for changing it, by adjusting our view of what is possible. Take the story of the digital age as a case in point. It started when an American mathematician named Claude Shannon developed a theory back in 1948. The theory, known as the noisy-channel coding theorem, showed that it should be possible to transmit digital information nearly error free. Before computers, this was more a theoretical problem than a real one. As computers and the internet developed in the 1990s, however, there was renewed interest in finding a practical solution to this challenge: it would make communication not only efficient but also cheap. In the end, the solution was found by rediscovering a coding scheme from the 1960s known as "Gallager codes." In other words, it took looking back thirty years to a technique built on a theory developed more than a decade before that to create the digital age we now take for granted. In the realm of complexity, that would be quite the feedback loop!
Needless to say, combining theory and evidence is essential to good development work. Consider the recent work in the development community on understanding appropriate approaches and responses to violent extremism. Important progress is being made on gathering relevant data; look, for instance, at the work of the Resolve Network, which has gathered more than 2,300 pieces of relevant research. However, we still need to understand the relationship between development work and theory: we must continue to develop robust theories of change around how an intervention may (and, importantly, may not) work as intended. For instance, an assessment report for a USAID program in West Africa noted "the clear need for a well-defined and articulated theory of change that recognizes the fluidity of the context and incorporates the means to test and adapt hypotheses, linkages to program objectives, and programming." Our ability to learn our own lessons, by laying out clearly how we propose to achieve outcomes and not overstating the case, is vital to ensuring that we can leverage evidence in understanding and influencing the complex problems we are tasked with solving.
How do we use evidence while at the same time recognizing the complexity of the world we work in?
How have you done this in a project or program you have worked on? Any tips?