
Using Data to Maximize Impact

Kathy Reich spoke to a group of nonprofits from California’s Central Coast in February 2015 on why every nonprofit should have an evaluative culture and evaluation strategy, and how to conduct useful evaluation without breaking the bank. Below is the address text as prepared for delivery.

Hi everyone. It is great to be here today—thank you for having me. Kaki and Nancy have asked me to talk a bit about using data for impact—why every nonprofit should have an evaluative culture and evaluation strategy. They also asked me to talk about how you can conduct useful evaluation without breaking the bank.

That’s a bit of a tall order, but I think I can at least touch on all of those points. Although we’re a fairly big group, I hope we can make this a conversation, not a lecture. I myself am quite uncomfortable with the whole lecture thing. So to get you talking, please turn to a neighbor—ideally someone you don’t work with every day.

First question: When I say the word “evaluation,” what comes to mind? And why?

Ok, let’s hear a few answers.

All right, now find a different person to talk to—again, someone who’s not in your day-to-day. Second question: What is one good experience you’ve had with evaluation? What was good about it? What did you learn?

Ok, let’s hear a few answers.

And finally, let’s try this one more time. Find a third person—ideally, someone you don’t work with every day. Final question: Do you use evaluation results to raise money? And if so, how?

Ok, let’s hear a few answers.

Did you learn anything in that quick little idea exchange? Did anything surprise you?

Today I am going to talk about why I think evaluation is critical to nonprofit success. I’ll also talk about how you can use your results from thoughtful evaluation, and how you can do quality evaluative work on a shoestring budget.

But first, a little bit on why this is important to me. First off, a true confession. I am not an evaluator. I’m not even an expert in evaluation—I took maybe one course on it in grad school, and everything I’ve learned about it, I’ve learned on the job. When I first entered the nonprofit world, more than 15 years ago, I didn’t think I needed to worry about evaluation. Evaluation meant randomized controlled trials, and expensive research firms, and lots of things my small struggling nonprofit couldn’t afford to do. It wasn’t relevant to me.

Well, times change. Today, I believe that evaluation needs to be a core part of what every nonprofit does—regardless of its size, geography, or issue area. Evaluation done well can help you improve your services, meet your mission, and yes, even raise money.

What changed for me? Well, my career changed. I left that small, struggling nonprofit—the one where I once literally jumped for joy because I scored a $10,000 donation that allowed us to meet payroll and stay open for another month. I joined the Packard Foundation, where we have a very different idea of evaluation than the type you might have studied in college or grad school. At the Packard Foundation, we’re lucky to have a culture that prizes evaluation for the sake of learning and improvement, not to give people a pass/fail grade. The Packard Foundation has an evaluation philosophy, which you can find online at www.packard.org, that encompasses five core principles:

1) Continuously learn and adapt. When developing our strategies, we develop clear and measurable goals and build in feedback loops for tracking progress on those goals. When implementing (and adapting) our strategies, we gather information about what works and how the context is shifting, and then use our insights to navigate a path forward.

2) Learn in partnership. We listen closely and learn in partnership with our grantees, funders, constituents, and other stakeholders.

3) Inform our decisions with multiple inputs. We analyze multiple sources of information and combine our learning with that of external evaluation results to inform our decision making.

4) Cultivate curiosity. Cultivating a culture of curiosity is essential to surfacing insight into our successes, our failures, and emerging possibilities. By listening closely and learning about what is coming out of our work, how our fields are changing, and what others are doing and thinking, we can identify opportunities to improve our strategies and increase program impact.

5) Share learning to amplify impact. We believe that openly sharing what we’re learning can generate value for our constituencies and drive impact in our fields. The “What We’re Learning” page on our web site at Packard.org is just one example of this.

Ok, so fast forward. I’ve been at Packard for over a decade, and I now run the Organizational Effectiveness and Philanthropy program, which works with grantees around the world to help them improve their management, leadership, and strategic development. Have any of you gotten grants over the years from the OE program? What did you use them for? Traditionally, most organizations have used OE support for strategic planning, strategic communications planning, leadership development, and fund development. That makes sense: every nonprofit needs a strong strategy, strong communications, strong leaders, and sufficient revenue to thrive.

But over the years, in part because the Packard Foundation’s own evaluative culture is so strong, the OE program has come to see another essential building block for nonprofit success: the ability to assess your progress toward impact. In the past five years, we’ve made an increasing number of grants to organizations that want to build their capacity to evaluate their work. And I’ve seen these nonprofits use evaluation to hone the quality of their services, and to tell compelling stories about the work they do and why it’s important.

One example is the nonprofit GlobalGiving, which used an OE grant to build a framework for evaluating its work. GlobalGiving is a charity fundraising web site that gives social entrepreneurs and nonprofits from anywhere in the world a chance to raise the money that they need to improve their communities. They go through some rigorous due diligence to identify those nonprofits and list them on their site. So you’d think they’d be very data-driven when it comes to their own performance. But five years ago, they couldn’t tell you how all that money they raised was really being spent—and most importantly, whether it was making a difference on the ground in communities. With support from an OE grant, GlobalGiving engaged McKinsey to help them develop a framework for how to evaluate their impact. They paid McKinsey for the work, but McKinsey ended up donating a significant amount of time pro bono. McKinsey and GlobalGiving worked together to clarify the impact that the organization wanted to have, what outcomes they’d seek to achieve, how those outcomes would add up to the desired impact, and what evidence they would gather to assess whether they were making progress toward those outcomes. A bit to its surprise, GlobalGiving found that the new evaluation framework became the foundation for its strategic direction for the next 5 to 10 years. Forcing themselves to clarify what mattered most to them and how to measure it sharpened their strategy, their communications, their leadership, and their fundraising. Evaluation became the engine for their growth.

Now, a lot of stuff can go into an evaluation framework, but at its core, it includes the following elements:

1) A description of outcomes that are most important for your nonprofit to achieve. At the end of the day, what difference are you trying to make in the world?

2) Indicators that will help you know that you are moving toward those outcomes. How will you know if you are getting to that difference?

3) Data so that you can measure your progress against those indicators, to help you know whether you are meeting those outcomes.

4) Reliable systems to collect that data.

5) Opportunities for meaningful reflection on your progress, so that you can adapt your strategy and change course—and sometimes, even change your outcomes.

6) A description of how you will share results and what you are learning—who are the key audiences for this information? Your nonprofit’s staff should come first, but who else might care—funders? Clients? Other nonprofits doing similar work?

In the Packard Foundation OE program, we’ve built that framework. We have developed a core set of outcomes that we are striving for, and indicators that we use to assess our progress toward those outcomes. We collect data in a number of different ways, through surveys and interviews, at different points after our grants close. That data collection is built into our work and occurs throughout the year. We reflect on our progress—and occasionally even change direction—every six months. And we take things a step further, by sharing what we learn on a public web site, http://packard-foundation-oe.wikispaces.com. Our evaluation system didn’t develop overnight. We started evaluating some grants in 2007, put together a comprehensive evaluation plan in 2009, and revised it substantially at the beginning of 2014. Within the OE program, we have embraced evaluation ourselves, so that we are constantly learning from, and hopefully improving, the grants that we make.

One example of this is the Community Leadership Project, a joint initiative of the Hewlett, Irvine, and Packard Foundations. Is anyone here involved in CLP? The Community Leadership Project was created in 2009 with the goal of building the capacity of small and midsize organizations serving low-income people and communities of color in three targeted regions of California: the San Francisco Bay Area, the San Joaquin Valley, and the Central Coast. During its first three years, the approach of the CLP was to experiment, learn, and refine. We funded an array of models and approaches, ultimately working with 27 intermediaries to provide more substantial levels of support to 100 small organizations and to offer different types of trainings, workshops, and leadership development opportunities to an additional 300+ leaders and organizations. As part of this work, we invested in an extensive evaluation to see whether we were meeting our goals of building capacity for these organizations. We brought in our grantees as partners in this evaluation—they helped design key parts of it and reviewed drafts for us.

What the evaluation told us, two years into a three-year initiative, was that we were doing a lot right. Organizations were stronger in key areas like financial stability, strategic planning, and leadership. They also reported that they were developing stronger networks with other nonprofits. But the evaluation also told us that we’d spread our resources too thin—too many intermediaries, too many grantees, too many different types of technical assistance. The evaluators, and grantees themselves, suggested we’d have more impact if we worked with a smaller group and provided them with more intensive support. So we made two key decisions. First, we decided to renew CLP for another three years. Second, we decided to focus our resources for greater impact, working with just 10 intermediaries serving 57 community organizations.
And, of course, we continued the evaluation for Phase II!

Of course, we have an advantage that many other nonprofits don’t have. We have money. Our Trustees try to keep our overhead very low, but still, 10 percent of a lot is…a lot. I can hire evaluation firms to help me assess progress—and believe me, I have. So, I’ll bet you are thinking that it’s fine for me to get up here and lecture you about how you all need to evaluate your impact and learn from both your successes and your challenges—but you have nonprofits to run. You have people in need to serve. And you have no money. Well, I don’t have a witty comeback for that. But I can tell you that evaluation frameworks like what I’ve been describing can be designed and implemented without a lot of money.

One nonprofit that has put this into action is the one whose board I chair, the Peninsula Jewish Community Center. The PJCC is a large human services agency in Foster City, CA. It’s got a very broad mission: to build a caring and connected community, develop leadership, and strengthen Jewish identity and values in a center with an environment that is welcoming to all people at every stage of life. In service of this mission, the PJCC runs hundreds of programs and events each year, across half a dozen departments. They run preschool, after-school, and camp programs; they provide meals, transportation, and programming to senior citizens; they offer cultural and arts programs; they offer sports and recreation programs; they offer community service opportunities; and they operate a large fitness center. All in all, more than 40,000 people, from every conceivable religious and ethnic community in San Mateo County, participate in PJCC programs every year. It’s such a wide-ranging organization that they had to think long and hard about what success looks like, what is most important for them to measure, and how they could evaluate their work without hiring a consultant and spending a lot of money.

First, they did what everyone should do—they talked to their friends. Specifically, they went to Jewish Vocational Services in San Francisco and asked them how they evaluate their work. They took JVS’ basic evaluation framework and ran with it. That’s the power of having a strong network, and I really hope all of you can be great supports to each other as you go down this road. Next, they decided to make their evaluative work manageable and focused. They emphasized finding out only what was most important for them to know. So, for each program the PJCC runs, they collect data to measure their success in three key areas:

1) Do people find value in the program—are they satisfied?

2) Do people sign up for other programs, either in the same department or in a different department, after they participate in a program?

3) Does the program meet its core mission?

For each of its program areas, the data for #2—re-enrollment and cross-enrollment—was easy to find. The PJCC is lucky enough to have a database that captures that information. For #1, program satisfaction and value, the PJCC now issues paper and/or electronic surveys on a very regular basis. The surveys are very short—they can be completed in five minutes or less—and they try to grab people as they are leaving the preschool after picking up their kids, or leaving after a concert. Most folks don’t mind filling out the surveys. The organization also does interviews and focus groups. Question #3—is the program meeting its mission?—is a bit harder. What’s the mission of the fitness division? It’s probably very different from the mission for cultural arts. So for this measure, the indicators look a little different in each program—for some programs, meeting mission means increasing connections among the participants. For some, it means increased learning or wellness.

What did this system cost the PJCC? Well, it cost the time that staff put in to design and plan it—that’s not insignificant. The PJCC was lucky to have staff on board who already had experience in program evaluation. It costs staff time, too, to design, print, administer, and analyze survey results. Soon, they’re going to have to invest in new computer software, because they realized that their current, elderly software system wasn’t quite adequate for their evaluation needs—or anything else. That will be a big-ticket item, but one they would have eventually needed anyway. What did the PJCC NOT spend money on? A consultant to design their system. An external evaluator to collect and analyze the surveys. A sophisticated new database. They worked with what they had, which was some time, some talent, and a will to get the job done.
How has the PJCC used its data? First and foremost, they use it for program improvement. At the PJCC, each department reflects on its data every six months. One thing they learned, for example, is that program satisfaction is strongly connected to staff quality and retention. So they have increased their staff training efforts and increased pay for some positions where they can.

At the PJCC, they also use the data to report to the Board every six months on their progress. I can tell you that the board devours those numbers. We feel like we have our finger on the pulse of every program and activity. We can ask questions about why numbers have fallen or risen in certain departments and not others over the past 18 months. We can talk about how, as a board, we should allocate resources based on the results. The PJCC can also use their results in reports to funders. One of their biggest foundation donors requires them to report on how successful their programs are. Because the PJCC was already collecting so much data, they don’t have to generate anything new or different for the funder. It’s all there. And the PJCC can use their results to raise money. They include results, and occasionally anonymous comments from surveys, in grant proposals and marketing materials.

Over the next year, the Monterey Capacity Builders Network will be hosting a series of events to help you build your own evaluation frameworks and use data wisely. As you participate in those, I’d like to leave you with a few tips for success:

First, know what’s most important for your organization to accomplish. Define that and own it. Don’t let anyone, especially not a funder, define it for you.

Second, define how you are going to know if you are getting to success. Decide what kind of information would give you the fullest picture of your work—and what’s practical for you to obtain given the budget you have.

Third, only collect what you plan to use. Don’t build a system that’s too big or unwieldy. Focus on collecting a very limited set of data first, and doing it well. You can always expand over time if you feel the need.

Fourth, actually use your data. Don’t let it sit on a shelf or in a hard drive. Schedule regular meetings with your staff, at least twice a year, to figure out what your results mean and what you want to change because of them.

Fifth, share. Don’t be afraid of your results. Share them with your community, and with your funders. If something is not working, acknowledge it, and say what you are going to try to do differently as a result. Failure is a tough thing in the nonprofit world. We don’t like to talk about it. But please trust me when I say that any funder worth their salt REALLY wants to hear about it. As long as you learn from your mistakes, they can be so much more valuable than sticking to a careful, tried-and-true path.

And finally, don’t be afraid to ask for support as you build a culture of evaluation in your organizations. Part of that support may come in free or low-cost ways, such as the series that kicks off today, or online tools from nonprofits like the Innovation Network. But others will cost money. Evaluation can be cheap, but it isn’t free. There are some great evaluation consultants out there who can help you through the process—and many of them will give discounted rates to nonprofits. Funders can also help, and they should. When you write grant proposals, be sure to state how you plan to assess progress and measure success. And don’t be afraid to budget for that. The worst a funder can say is, “No, we don’t support evaluation.” And that’s an invitation to start a conversation with them—why don’t they support it? Isn’t it important to them? That’s a conversation that everyone in the nonprofit sector needs to be having.

I’m happy to take questions, and I would love to hear some of you share about the evaluation culture and practices that you are already using in your organizations. Let’s make the rest of our time together a real discussion. Thanks for listening to me for so long, but now I’d like to hear from you!
