Post-Event Resources: Unappreciative Inquiry Webinar

Author(s):
Jerome Gallagher
Date Published:
February 2, 2016
Contribution:
USAID Contribution

Is the draft evaluation report that landed on your desk from the external evaluation team you hired exemplary, or not quite? Are you afraid you might have a “surrealist evaluation” that reports the “most insignificant change” through “mixed-up methods”? Before moving on to using the evaluation findings, evaluation reviewers and managers have a responsibility to ensure that the findings and conclusions are credible and well supported with logic and evidence.

This presentation introduced a 12-point checklist to help prompt critical thinking and guide evaluation reviewers past common pitfalls in program evaluations, based on the presenter's many years of experience reviewing and critiquing external evaluations of USAID programs. Numerous examples from USAID evaluations were provided to highlight the practical relevance of the checklist. Time was allotted at the end for questions from attendees. Also cat photos...there were cat photos.

Key takeaways from this webinar include:

  • Prior to considering an evaluation worthy of use, commissioners of evaluation have a responsibility to ensure that evaluation findings, conclusions, and recommendations are credible and well supported with logic and evidence. 
  • "Unappreciative Inquiry" is just another way of describing a critical, skeptical review of an evaluation report for potential weaknesses in credibility, validity, and soundness.
  • The Unappreciative Inquiry Checklist includes five rules for preparing to read an evaluation report and 12 items to look out for in an evaluation report that might suggest problems or potential problems with the credibility of the evaluation. 
  • Everyone loves cat photos.
Presenter: Jerome Gallagher is a Monitoring and Evaluation Specialist with DevTech Systems Inc. and a staff member of the USAID/PPL's Program Cycle Service Center. He previously served as the Evaluation Specialist for USAID's Europe and Eurasia Bureau after many years with USAID implementing partners.

COMMENTS (9)

Thank you to everyone who participated in the Unappreciative Inquiry Webinar. Unfortunately, we didn't have much time for questions and answers, so I'm going to use this space over the next few days to try to answer some of the questions that came up in the chat box of the webinar. Everyone is also welcome to post new questions. -- Jerome Gallagher

posted 4 years ago

Q: Is this new checklist used by USAID? (from Maram)

A: This is a checklist that I developed based on my experience reviewing USAID evaluation reports. It is not official USAID guidance. For official USAID guidance on evaluation, including USAID's evaluation report checklist and review template, check out the USAID evaluation toolkit here on Learning Lab: http://usaidlearninglab.org/evaluation. Still, I do hope commissioners and reviewers of evaluations at USAID and elsewhere will find this checklist to be a useful tool as they read and review evaluation reports.

posted 4 years ago

Q: These are great points and well articulated. I feel like some of the issues relate to USAID's own planning process, i.e., using after the fact qualitative approaches to answer questions that should have been addressed through a prospective design. (from Diana)

A: Thanks, and I agree. The first two items in the checklist, "Evaluation questions outside the realm of the researchable" and "the question/method mismatch," are both items that could be addressed early on in the planning phase of an evaluation by the commissioner of the evaluation, particularly the evaluation questions. If they have not been addressed by the time the draft evaluation report is completed (for whatever reason), they are still items a reviewer should look out for when checking for problems or potential problems with the credibility of the report. But I definitely agree that they should be addressed in planning. There are some good resources in the USAID evaluation toolkit (http://usaidlearninglab.org/evaluation) that could help on these issues, including resources on developing evaluation questions and an excellent resource on evaluability assessment which, if used more, might help avoid some of the question/method mismatch problems.


posted 4 years ago

Q: Any comments on theory of change? Shouldn't ToC be a key feature of your checklist to improve impact evaluation design and execution? (from Miranda)

A: I actually don't have much to say about Theory of Change. It’s always great to see a well-articulated theory of change, and it certainly helps both in conducting an evaluation and in reviewing one. For a successful impact evaluation, a well-articulated theory of change is probably a necessity.

But this checklist is about reviewing an evaluation report for credibility, not about evaluation design. When you are tasked with reviewing an evaluation report, the project that was evaluated may or may not have a well-articulated theory of change, so you have to deal with what reality hands you. Credible findings and conclusions should be possible in an evaluation report regardless of whether or not the project had a well-articulated theory of change. And while reading a well-articulated theory of change in an evaluation report does tend to help me understand the project being evaluated, I don’t find that it is something I particularly focus on when trying to uncover potential credibility problems in the findings and conclusions.

posted 4 years ago

Regarding:  "IV. Don’t get caught in the utilization trap"**, I might say:  "Don't be fooled by a slick report (lacking credibility)."  In any case, I take your point.  Too often we can get hung up on punctuation and miss the bigger picture!

** It is important to make sure that an evaluation report is written in a manner that is accessible for use by the intended audience. But start the review of an evaluation report with a focus on the credibility of the evaluation findings, not how well they are presented.

posted 4 years ago

Donald, great point! I have to admit that I see more evaluations at USAID that would probably benefit from better editing, a more accessible executive summary, and stronger visualization than evaluations that are overly slick. But I agree that equating slickness with credibility is just as much a utilization trap as focusing too much effort on improving presentation at the cost of focusing on credibility.

posted 4 years ago

Q: What is the best way of presenting findings when applying mixed methods? In other words, if an evaluator uses qualitative and quantitative methods, is it preferable to present the qualitative findings/discussions and the quantitative findings/discussions in two clearly separated sections, or in one section where they complement each other? (from Awoke)

A: Great question. My preference when the evaluator uses mixed methods is for the annexes to provide detailed information on what was collected from each method separately. So, if an evaluation includes both a survey and focus groups, I like to be able to go to an annex that lays out the survey data and another annex that provides the focus group transcripts or summaries. In the main body of the report, how the data is presented really depends on what makes sense for the particular evaluation question being addressed. I often find that data from both methods (in this example, the survey and focus groups) can be presented and discussed in the same section when they are being used to support the same finding or conclusion, but it should still be clear whether a finding or conclusion is supported by the survey data, the focus group data, or both. If the support seems questionable, I can go to the annexes to get more detail on a particular data source.

posted 4 years ago

Q: Is there a trade-off between straightforward, simple conclusions and usability of findings by decision-makers? (from Gergana)

A: In my experience, simple, straightforward conclusions have a better chance of being used by decision makers than messy, complex conclusions. For instance, a conclusion that participation in a training program leads to a desired behavior change will be easier to use than a conclusion that participation only leads to the desired behavior change for some participants but not others, and that in some districts where the program operated the relationship between the training and the behavior was not examined at all. However, just because a straightforward, simple conclusion is more likely to be used by decision makers, that doesn’t mean we should try to tidy up reality or pretend we have evidence that we don’t have. Better to recognize the limitations of our evidence and make the best choices we can based on the evidence we have and by thinking through how it should inform our decisions.

posted 4 years ago

Q: Suppose we realized during the review of the draft evaluation report that the evaluation question was really outside the realm of the researchable, and as a result the evaluator did not address it fully. What should the evaluation COR/reviewer do when such problems are identified? (from Awoke)

A: Certainly it is preferable to try to determine during the development of the evaluation SOW whether an evaluation question is researchable. If that doesn’t happen, then after the contract is awarded, one hopes it would come up during the design phase of the evaluation, prior to the start of data collection. However, if it is only during the review of the draft evaluation report that the COR/reviewer realizes the question was really outside the realm of the researchable, then the first step is to acknowledge this and communicate with the evaluation team to determine if they agree.

How you go from there really depends on the response of the evaluation team. The team may agree with you, for instance, by acknowledging that while both parties (the commissioner of the evaluation and the evaluator) thought the evaluation question was researchable at first, the process of data collection or analysis revealed that it was outside the realm of the researchable for this particular evaluation. On the other hand, the evaluation team may disagree and argue that they have in fact answered the evaluation question.

The former is easier to deal with. Moving forward where there is agreement may involve keeping the question in the report but explaining what findings the evaluators can present and why those findings are insufficient for answering the question. Or it may involve revisiting the evaluation SOW and narrowing the question to something that is answerable with the data that was collected. If there is no agreement that the question is outside the realm of the researchable, then moving forward means trying to work with the evaluators toward reaching a consensus about what can be credibly reported in the evaluation. If that fails, then USAID has the option to state its differences with the evaluation conclusions in a “Statement of Differences” attached to the evaluation report.

posted 4 years ago