Real World Impact Measurement

"Good enough" means simple enough to do, but rigorous enough to mean something.

This article was originally published by Stanford Social Innovation Review on September 24, 2012, under the headline "Real World Impact Measurement."

Measuring impact is kind of like raising kids: It's often hard, it costs more than you think it's going to -- and you absolutely have to do the best job you can. The Mulago Foundation obsesses over impact because it's the only way to know whether the money we spend is doing any good. In fact, we don't invest in organizations that don't measure their impact -- they're flying blind and we would be too.

We funders are often eager to compare organizations, but evaluation is first and foremost about understanding whether a nonprofit or social business is succeeding on its own terms. Whether they set out to get farmers out of poverty or rehabilitate stray cats, we need a way to know if they succeeded or failed, and to what degree.

While big randomized controlled trials are useful when an intervention is ready to scale up, they cost a lot, and you can't use them to navigate an organization. You need an ongoing stream of good-quality information, but you can't spend a ton of money on it. Superficial data will get you nowhere, but overdoing it will clog the works and probably leave you confused. What you need is an approach to impact measurement that is simple enough to do, but rigorous enough to mean something.

We like to see an organization build evaluation into its operations, both for efficiency and so that findings can be fed back quickly into decision-making. We've found that four steps help us and those we work with think through a plan to measure impact:

1. Figure out exactly what you're trying to accomplish
2. Pick the right indicator
3. Get real numbers
4. Show that it was you

Here's how it works:

1. Figure out exactly what you're trying to accomplish

You can't think about impact until you know exactly what you're setting out to accomplish. Most mission statements don't help much. We like to see missions boiled down to eight words or fewer, including a verb, a target population, and an outcome that implies something to measure -- like this:

• Getting African one-acre farmers out of poverty
• Preventing HIV infection in Brazil

This defines success and failure. If we can't get to this kind of concise statement, we don't go any further -- either because the organization doesn't really know what it's trying to do or because we simply wouldn't be able to tell whether it's doing it.

2. Pick the right indicator

Try this: Ask your team, "If you could measure only one thing, what would it be?" Ignore the howls of protest; it's a really useful exercise. Here are some examples relating to the missions shown above:

• Farmer household income
• HIV infection rate

Sometimes that one indicator is doable, and that's great. Other times, you might need to capture it with a carefully chosen -- and minimal -- combination of indicators. When there is a behavior with a well-documented connection to impact -- such as the drop in malaria mortality from kids sleeping under mosquito nets in Kenya -- you can measure that behavior and use it as a proxy for impact. Projects that can't at least identify a behavior to measure are too vague for us to consider. Notice that while things like "awareness" or "empowerment" might be critical to the process that drives behaviors, we're interested in measuring the change that results from that behavior.
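To show what we mean, here's a minimal sketch of the proxy arithmetic in Python. Every number in it is a hypothetical placeholder -- the coverage figure, the baseline mortality rate, and the protective effect of nets -- chosen only to illustrate how a measured behavior converts into an impact estimate, not drawn from any real program.

```python
# Proxy-based impact arithmetic: behavior measured, impact inferred.
# All numbers are HYPOTHETICAL placeholders for illustration only.

children_covered = 10_000    # measured behavior: kids now sleeping under nets
baseline_mortality = 0.005   # assumed malaria deaths per child per year without a net
protective_effect = 0.4      # assumed fraction of those deaths a net prevents
                             # (this is the "well-documented connection" you need)

# Estimated impact: deaths averted per year, inferred from the behavior.
deaths_averted = children_covered * baseline_mortality * protective_effect
print(f"Estimated malaria deaths averted per year: {deaths_averted:.0f}")  # -> 20
```

The point isn't the precision; it's that once the behavior-to-impact link is documented, measuring the behavior gives you a defensible impact estimate at a fraction of the cost.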

We don't pretend that this method captures all of the useful impacts and accomplishments of a given organization and its intervention. For us as philanthropic investors, though, it answers the most critical question of all: Did they fulfill the mission?

3. Get real numbers

You need to 1) show a change and 2) have confidence that it's real. This means that:

1. You have a baseline and measure again at the right interval; and
2. You sampled enough of the right people (or trees, or whatever) in the right way.

There are two parts to figuring this out: the logical side and the technical side. With an adequate knowledge of the setting, you can do a lot by just eyeballing the evaluation plan -- looking carefully at the methods to be used to see if they make sense. Most bad schemes have an obvious flaw on close examination: They didn't get good baseline data; they're asking the dads when they ought to ask the moms; or they're surveying in a culturally inappropriate way. The technical part mostly concerns sample size and the mechanics of gathering data, and a competent statistician can easily help you figure out what is adequate.
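For the sample-size piece, the standard two-proportion calculation is simple enough to sketch. The rates below are hypothetical placeholders; a real plan would take its baseline rate and the change it hopes to detect from the program itself.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group to detect a change from
    proportion p1 to p2 with a two-sided test at significance
    level `alpha` and the given statistical power."""
    z = NormalDist()                    # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_power = z.inv_cdf(power)          # critical value for the power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_power) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical example: baseline infection rate of 10%, hoping to detect 7%.
print(sample_size_two_proportions(0.10, 0.07))  # -> 1353 per group
```

A survey budget follows directly from a number like that, which is exactly the conversation to have with the statistician before fieldwork starts.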

4. Show that it was you

Real impact is the difference between what happened with you and what would have happened without you. Even if you have real numbers that show a change, you still need to make the case for attribution -- that it was your efforts that caused the change. This is often the most difficult part of measuring impact, because it can be hard to figure out what would have happened without you.

We break the demonstration of attribution down into three levels of ascending cost and complexity:

1. Narrative attribution: You've got an airtight story showing that it's very unlikely the change came from something else. This approach is vastly overused, but it can be valid when the change is big, tightly coupled to the intervention, and involves few variables -- and when you've got deep knowledge of the setting.
2. Matched controls: At the outset of your work, you identified settings or populations similar enough to the ones you work with to serve as valid comparisons. This works when there aren't too many other variables, when you can find good matches, and when you can watch the process closely enough to know that significant unforeseen factors didn't arise during the intervention period. This is never perfect; it's often good enough (a back-of-the-envelope version of the arithmetic is sketched after this list).
3. Randomized controlled trials: RCTs are the gold standard in most cases, and are needed when the stakes are high and there are too many variables to confidently say that your comparison groups are similar enough to show attribution.
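To see how a matched-control comparison turns into an impact number, here's the back-of-the-envelope difference-in-differences arithmetic referenced above. The income figures are hypothetical placeholders, not data from any real program.

```python
# Difference-in-differences sketch for a matched-control design.
# Income figures are HYPOTHETICAL placeholders for illustration only.

# Mean annual farmer income (USD) at baseline and endline.
treatment = {"baseline": 250.0, "endline": 340.0}  # villages you worked in
control = {"baseline": 255.0, "endline": 280.0}    # matched comparison villages

change_with_you = treatment["endline"] - treatment["baseline"]  # +90
change_without_you = control["endline"] - control["baseline"]   # +25 (the counterfactual)

# Real impact: what happened with you minus what would have happened anyway.
attributable_impact = change_with_you - change_without_you
print(f"Change in treatment group:  ${change_with_you:.0f}")
print(f"Change in matched controls: ${change_without_you:.0f}")
print(f"Impact attributable to you: ${attributable_impact:.0f}")  # -> $65
```

Notice that without the matched controls, you'd be tempted to claim the full $90 change; the comparison group is what turns a raw change into a defensible impact number.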

In the end, though, the key to figuring out real impact is honest, curious, and constructive skepticism. Funders ought to be willing to pay for it, because everyone benefits from a rigorous look at impact: the doers, the donors, the social sector itself, and most importantly, those who are hoping for a brighter day ahead.