This is the second in a series of posts about my experience at the U.S. Global Development Lab. Read part 1 here.
This second post highlights the Lab’s commitment to impact and evidence. I’ve written before that the lack of a reliable and rigorous evidence base is largely to blame for the observation that spending on aid and development hasn’t done much in terms of aiding and developing. When I joined the Lab, I was pleased to meet an office full of people who not only shared this perspective, but were also actively working to correct it. The Lab is joining the “credibility revolution” in a number of ways; I’ll highlight three.
First, I wrote in part 1 about how the Lab aims to produce development innovations. The first requirement for the Lab to fund any idea is that the idea must have rigorous evidence behind it. This has two key implications, from what I have seen.
- The Lab has a lot of scientists working in it; in fact, it has the most scientists per capita of any bureau in the entire government. Here the term ‘scientist’ generally refers to people who hold an advanced degree (often a PhD) in a scientific field. These scientists come from a diverse set of fields (not just the usual development-y fields… erm… economics) such as biology, chemistry, forestry, etc. Many of these scientists are hired through the AAAS Fellowship program, and they support the Lab’s capacity to understand and interpret cutting-edge scientific research and apply it to the Lab’s global development objectives.
- A lot of development organizations often suggest something along the lines of, “Yeah, rigorous evidence is nice, but we are not a research institution.” At the Lab, the “we are not a research institution” excuse is not a valid argument. Although it is rare that the Lab actually produces its own research (it’s often contracted out), the Lab works hard to identify gaps in the evidence base and is relentlessly uncomfortable when USAID is funding programs that are not supported by rigorous evidence.
Second, the Lab is partnering with Google and the NGO GiveDirectly to perform ‘cash benchmarking’ studies across a variety of sectors and contexts. These studies seek to understand what would happen if we just gave all the money needed to run a USAID program directly to the end beneficiaries.
I want you to stop and pause for a second… USAID is studying whether its programs, its work, its bureaucracy are better than simply liquidating these programs and giving people around the world the money directly… I think this is incredible and deserves much appreciation!
This is pretty much as good as it gets from an evidence perspective. Oftentimes monitoring and evaluation (M&E) activities involve studies that aim to understand if some program “worked”. The definition of “worked” could range from “providing positive benefits” to “providing benefits that outweigh the costs”. These cash benchmarking studies take this one step further by taking into account the opportunity cost of just giving all the operating expenses of some program directly to the end beneficiaries. (For frequent readers: This is basically the “index funds for development” idea I’ve written about before.)
Third, the Lab is working to improve the way USAID (and other funders) both implement and use evidence within their work. Traditionally, M&E within implementing and funding development agencies has aimed to improve accountability of aid projects and programs. The role data and evaluation play is to ensure that public funds have been used in the intended manner, as described by some sort of contract or scope of work.
Perhaps this is obvious, but this is a rather rigid structure for M&E. It almost entirely prevents program administrators and development practitioners from adapting or making corrections mid-program cycle. This is again perhaps obvious, but poverty is an unsolved problem, which means we don’t know how to solve it. This being the case, we need to be learning as much as we can about what works, what doesn’t, and why.
The MERLIN (Monitoring, Evaluation, Research, and Learning Innovations) program (the program I am working most closely with during my time at the Lab this summer) is made up of a mix of organizations, each with its own specialty and strength, spanning from randomized controlled trials and complex systems modeling to social network analysis. They have developed five “innovative” (to the USAID context, at least) M&E activities that allow USAID programs to engage with complex systems more effectively and be more adaptive in their management and programming.
The third and final post of this series will focus on the Lab’s organizational structure and its commitment to collaboration.