Are Agency-Level Global Indicators Enlightening or Constricting?
There is an endless tussle between the divergent needs of data users in the international development sector. Program stakeholders need tailored measurement systems that provide concrete evidence for making adaptive management decisions and assessing progress against the program’s criteria for success. These criteria are often highly contextualized to ensure the program is implemented in a manner that contributes to sustainability and ongoing relevance for communities.
On the other hand, the larger agency within which the program sits – be it a funder or an implementation organization or a larger consortium of such organizations – often needs data that are aggregable across multiple geographies, demographics, program goals, and even technical sectors. Such data respond to a very different set of criteria, one that combines common denominators with top-down goals to produce a sense of an agency’s reach, impact, and effectiveness.
Snapshot of Pact’s publicly accessible Global Indicator dashboard
This past January marked the tenth year that Pact has reported such a suite of global indicators, measuring more than 10 million data points, each of which in turn required its own set of criteria to qualify. For example, a single datum from our CSO performance improvement indicator requires a scorecard measurement across four domains and eight subdomains, with multiple indicators per subdomain, each measured at two points in time. These data are aggregated across all of Pact’s projects in seven sectors – health, governance, mining, natural resources and more – and are publicly available on our data dashboard. Like other organizations, Pact uses customized indicators to track and assess our global reach and the results of our work. These indicators also convey in a transparent manner what we care about and how we hold ourselves accountable. Standard foreign assistance indicators (“F-indicators”) used by various U.S. federal agencies serve a similar function.
There are opportunity costs to these data. In a previous post, I highlighted the practical journey that a datum (a single piece of data) takes from conceptualization to collection, verification, submission, and finally utilization, to emphasize both its value and its cost. There are also limitations to these data. By design, such data lose context and are therefore less useful to the specific program stakeholders who are charged with collecting them in the first place. Looking at historical trends is also less helpful, as the data tend to mirror the project lifecycle or the priorities of external funders. Similarly, such data are at times difficult to use for strategic decisions if an organization is accountable to multiple funders, each with its own priorities and measurement systems. And, of course, because meaningful outcomes are system-specific, it can be limiting to use universal criteria that may not be relevant to the context at hand.
Snapshot of this year’s global indicator report
For these reasons, most organizations do not have global indicators. Those that do have tackled the challenge in diverse ways. Last year, Pact led a panel discussion at the American Evaluation Association conference, together with FHI 360, IREX, and Save the Children, to discuss the various ways we have developed and used global indicator systems. An important lesson was how each organization built a system that met the unique strategic needs of the organization; each system differed significantly from that of the others.
Pact’s system was built to measure our global results and to articulate what we care most about. Just as any project indicator operationalizes a project’s goal in concrete terms, Pact’s agency-level indicators articulate what success is to the organization. What does it mean to advance global health outcomes if you cannot articulate what a successful global outcome would be? Knowing this tells you not only what an organization focuses on, but where it has grounded its technical expertise. For example, Pact’s approach includes outcome-level indicators that measure changes in systems, not just basic reach.
In addition, while strategic decision-making can be limited when indicators are subject to funding trends and the project lifecycle, an organization that decides which indicators to use – and which not to – is publicizing what it stands for and works toward. Lastly, by sharing the data publicly, as Pact has done with its data dashboard, an organization can contribute to data transparency.
As an organization’s goals evolve over time, so too should the system that measures progress toward those goals. With 10 years of experience, Pact is now embarking on an indicator revision process to expand and refine our suite, focusing more on the outcomes we want to achieve over the next decade as well as measuring our adherence to the principles that guide us as an organization. We are working to do so in a highly participatory way, both to ensure that individual program stakeholders can use the data and to align the revised indicators with an agency-level strategic review so that the data contribute to strategic decision-making.
If there is one lesson that Pact has learned, it is that making data actionable requires strategic intention to avoid the limitations and opportunity costs that otherwise can constrain data utility. For Pact, agency-level global indicators enable us to be intentional about what we hope to achieve and the manner in which we achieve it.
What has been your experience in designing and implementing an agency-level measurement system? How have you made it actionable for various levels of organizational needs?