Last week I was introduced to a new podcast organized by the Accord Research Alliance (a group of faith-based development organizations) focusing on the intersection of faith-based development and impact evaluation. This is a much-overdue service and one that I hope stands the test of time.
The Soundcloud page for the podcast is here.
The most recent episode interviews Kate Williams, who works with Compassion International helping them with impact evaluation. The episode is worth a listen just to get a sense of what Compassion International is doing in terms of monitoring and evaluation. It all sounds super cool. (This is a side note, but if you are looking for a faith-based organization to support, there is pretty solid evidence that Compassion International’s work actually helps people around the world.)
Beyond hyping this new podcast, I’d like to respond to the question at the heart of episode 4: “how much rigor is really needed?” In the podcast, the answer seemed to be that more rigorous evaluations are needed as an organization grows and expands its reach. The implication is that smaller organizations may be able to “get away” with less rigorous evaluations than larger organizations.
This is an interesting answer. I had never really considered the relative size of an organization to matter much in the choice of evaluation method. Sure, larger organizations have larger budgets and therefore more resources to spend on evaluation activities. This answer, however, brushes aside an important detail of impact evaluation.
Impact evaluation is always, and should always be, about learning about the world. In the case of Compassion International, and many other faith-based development organizations, evaluation activities provide lessons about how to implement programs effectively. I don’t see why a smaller organization wouldn’t want to learn these lessons. This is especially true if the small organization is implementing a novel program that may never have been tried before.
So, my answer to “how much rigor is really needed?” is that it depends on how confident the program leadership is in the program’s effectiveness. If everyone, when they are really being honest with themselves, is pretty sure they are doing the best they can, then there is little need to spend significant resources on rigorous impact evaluation. If, on the other hand, many people are wondering whether the program could be better (or whether a particular program is better than the alternatives), then the resources spent on a rigorous impact evaluation will be well spent. This decision is, in my mind, independent of the size of the organization.
Of course, the size of an organization has implications for sample size. This, however, bears more on evaluation methodology than on how much rigor is needed. Even if the sample size is too small to statistically identify a causal impact, smaller organizations can run qualitative studies designed in the spirit of rigorous evaluation methodologies.
Economists kinda get a bad rap for repeating the same thing over and over again, but the optimal strategy for any organization (regardless of size) is to make decisions by equating marginal benefit with marginal cost. If, on the margin, an organization would learn a lot from an expensive impact evaluation, it should do the evaluation, no matter the size of its budget or “global reach”.
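For readers who like to see the economist's rule written down, here is a minimal sketch of the marginal-benefit/marginal-cost decision in code. The function name and the dollar figures are made up purely for illustration; in practice the "expected learning value" is the hard part to estimate.

```python
# A minimal sketch of the marginal-benefit / marginal-cost decision rule.
# All names and dollar figures here are hypothetical, for illustration only.

def should_evaluate(expected_learning_value: float, evaluation_cost: float) -> bool:
    """Run the rigorous evaluation when the expected value of what the
    organization would learn is at least the cost of learning it."""
    return expected_learning_value >= evaluation_cost

# A small NGO piloting a novel, untested program: lots to learn, so even
# a costly evaluation can clear the bar.
print(should_evaluate(expected_learning_value=500_000, evaluation_cost=150_000))  # True

# A large organization already confident in a well-tested program: little
# left to learn, so the very same evaluation fails the test.
print(should_evaluate(expected_learning_value=50_000, evaluation_cost=150_000))  # False
```

Notice that organization size never enters the function; only the value of the lesson and the cost of learning it do.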