The latest guest post by Jason Saul!
For the last 15 years I have been focused on a single knotty question: how do you measure social impact? Across the sector, billions have been spent on evaluations, millions have been spent on capacity building, thousands of studies have been published and hundreds of conference sessions have been held. Yet no one seems to have come up with the answer. How is it that we can measure the temperature on Mars, but we can’t measure what happens within the orbit of a nonprofit organization? Why is measurement so confounding?
After years of consulting with thousands of nonprofits on this issue, it finally struck me: we’re focusing on the wrong problem. This isn’t actually a measurement problem – it’s a strategy problem. The reason it’s so hard to quantify impact is that, far too often, nonprofits are trying to measure outcomes their programs are not designed to produce. Simply put, we’re trying to cheat our way to the answer. When programs are specifically engineered to produce a particular outcome, they’re pretty easy to measure. Think about how easy it is to measure whether a job training program reduces unemployment or whether a tutoring program increases grade advancement. Simple – both were designed to produce those outcomes. There’s no need for data prestidigitation. Even the “tough” measurement cases such as the arts can be measured through proxies when the outcomes are clear (think: reaching new audiences or exposing new talent).
Where we get in trouble is when we try to “stretch” our statements of impact beyond the outcomes that are reasonably proximate to our work. Take the case of an after-school sports club that we recently advised: in an effort to attract “Gates money”, the executive director wanted to demonstrate that her program was impacting high school graduation rates. The only problem was that the program primarily involved playing basketball with kids after school. While there was a study program, few attended it, and those who did basically just worked on their homework. So I guess we could bemoan the measurement challenge of estimating the program’s impact on high school graduation, or we could just be intellectually honest. There are many bona fide (and valued) outcomes that this program produces: reducing risky behaviors, increasing student interest in school, encouraging healthy lifestyles, etc. While those outcomes may not be as “sexy” as improving graduation rates, they are quite important predicates.
Intellectual honesty is one way to solve the measurement problem. That doesn’t mean we need to prove everything to a statistical certainty: randomized controlled studies are always nice, but often practically infeasible. It means that we need to demonstrate a substantial contribution to the outcome. If you’re an advocacy organization looking to pass a law, substantial contribution means you led the coalition, lobbied the legislature and helped craft the legislation. If you’re running a direct service program, substantial contribution to an outcome is a function of dosage, frequency and duration. I recall a corporate citizenship executive once asking me how to measure the impact of a one-day volunteering event on employee retention. The answer was simple: there is no impact!
Of course, the other way to solve the measurement problem is to just improve our programs. If we want to be able to say more, we need to actually do more. I recall meeting with an arts group whose primary goal was to engage younger artists and support them in their careers. The organization spent 80% of its budget on a weekly newspaper for artists. When I asked whether young artists ever read the paper, the executive director replied: “no, they’re all online!” Yet the organization kept publishing the newspaper because that’s what it had always done. Measuring this organization’s impact on young artists would have been extremely difficult – not because measurement is hard, but because the strategy was never designed to produce that impact. Put simply, we are using yesterday’s strategies to produce today’s outcomes. If we want to really make a difference, we need a new generation of social strategies, not a new generation of social metrics.
We can do this. Take the United Way. For years, many of the financial self-sufficiency programs it funded were piecemeal: a busing program here, a computer job bank there. But the organization stated an intention to impact financial self-sufficiency for the working poor. So United Way decided to design a new program – a “prosperity center” where a coordinated set of services would be offered under one roof (job training, counseling, asset building, etc.) to make a substantial contribution to the outcome of economic independence. United Way can now track the number of participants who became “economically stable” as a proxy for repaired credit, gainful employment and training. Measurement is easy when the program is designed to drive it. See more about the United Way of Oakland’s prosperity center, which is called “SparkPoint,” here.
Funders have a role to play too. Instead of goading nonprofits to prove the impossible, let’s set reasonable expectations for results. Funders should ask organizations to state their intended outcomes upfront, and make the case for how they will make a substantial contribution toward achieving those outcomes. Instead of requiring 10% of the grant be used to hire an evaluator, foundations should require that 10% of the grant be used to design the program for greater impact.
At the end of the day, we have two choices: we can be less ambitious with our measurement or more ambitious with our programs. I say we do both!