This American Life is a fantastic program for many reasons. Their story on nonprofit program evaluation is great in that it doesn’t call itself “nonprofit program evaluation,” but rather “I Was Just Trying to Help.” That title is a concise synopsis of the nonprofit sector, capturing everything charities do in one understandable phrase.
Okay, nonprofits out there: you’re just trying to help. Are you making a difference? Are you helping a little? A lot? Better than some? Worse than others? What does it mean to help?
The story here is a nonprofit that simply gives money to people who don’t have any. No conditions or strings, no experts or methodology. The recipients (in Kenya, in this story) may use the funds for really sound purposes, or they may not. The question is whether a blanket drop of cash is more effective at improving lives than, say, Heifer International, which gives cows to poor people and trains them to raise the animals both for food and as a business. Answering this question first requires another one: What is “effective”? Just what are you trying to do?
There was a time when nonprofits could get support with a good pitch, a compelling story about an individual, and good intentions. This can still work in many ways, but large-scale philanthropy is looking toward different ways of measuring outcomes. The W.K. Kellogg Foundation first published a guide to logic models many years ago in the hopes of bringing targeted measures to its grantees. A bulk annual report about how many hours of service your nonprofit provided last year is better than no reporting, but a report that measures which inputs created which outputs toward which outcomes allows everyone in the nonprofit sector to see whether one nonprofit has built a better mousetrap than another. That nonprofit may then make the pitch for more resources, or ask partner organizations to replicate its model for still more change in the world, or both.
The end result of the Kenya story was a lot of data being gathered to compare outcomes for families, and a $2.3 million grant from Google to keep publishing data and keep giving money. It is of course possible that other nonprofit programs have better long-term outcomes, but GiveWell is testing its assumptions – its logic model – and openly sharing the results for the whole sector to learn more, to challenge assumptions, and to contribute to a better mousetrap.
Nonprofits can learn from this and take three specific steps. First, create a logic model supporting your theory of change in the world. Second, define measures for your outcomes and for the outputs that support those outcomes, and track them. Finally, share what you know in the most transparent way possible. Transparency will create trust, and trust is the cornerstone of getting donations to support more change.