Capacity building is notoriously tricky to evaluate. Here in the Packard Foundation’s Organizational Effectiveness (OE) team we evaluate our work in several ways, and we are always trying to improve so we can increase our impact. This post provides a brief overview of our monitoring, evaluation, and learning approach over the last three years. We wanted to share it with you and get your input so that we can improve our work in 2018.
Not familiar with OE? In a nutshell, we give capacity building grants to existing individual grantees of the Foundation’s four main program areas, and we work together with grantees and our program colleagues to design and implement cohort capacity building opportunities. We do this because we believe that when organizations have strong leadership, thoughtful strategy, sound operations, and compelling communications, they are better equipped to achieve the change they want to see in the world. The majority of our grants go to a nonprofit organization to work on one to three specific focus areas, such as strategic planning, leadership development, or communications planning.
Here are two key questions we look to answer through our evaluation work:
- To what extent do OE grants result in greater capacity, in the short and longer-term? (For example, did the strategic planning grant to City Food Bank result in their improved long-term planning?)
- To what extent do OE grants amplify the grantee’s impact and the impact of the Foundation’s programs? (To follow the same example, did that strategic planning grant that led to improved long-term planning result in greater food security in the community?)
The outcomes we hope to see are, of course, that our grants do result in greater capacity and amplify program impact. Here are a few of the signs of progress, or indicators, that serve as proxies for how well we are achieving those outcomes.
- % of OE grantees that report building capacity in the specific focus area(s) of the OE grant at the end of our funding and two years later;
- % of OE grantees that report direct impact on their program work from the OE grant at the end of our funding and two years later;
- % of Foundation program staff who state that OE grants enhance their ability to achieve program goals.
Here’s how we have collected the data to measure our indicators (the ones listed above as well as others not listed):
Our data sources
Systematic review and coding of grantee reports: At the end of each OE grant, we ask the grantee to submit a final narrative report describing what was achieved. When reviewing these reports, we systematically code them for several measures, including whether the project objectives were met and whether there was any evidence of the OE project impacting the organization’s ability to achieve its mission. This allows us to look across all our grants in a year and see what trends emerge. In 2016, we found that 82% of individual grants met or exceeded their OE grant objectives and 15% reported evidence of direct program impact. To read more about our findings, check out my previous blog post on this topic.
Interviews with grantees two years after grant close: The OE team has hired an external evaluator to interview a sample of grantees two years after their OE grant closes. We wanted to know whether the outcomes achieved through the OE grant had a lasting effect on the organization and whether the grant spurred any new or deeper transformations. In 2016, 95% of grantees interviewed reported that increased capacity was sustained or expanded since the OE grant closed, and 35% shared evidence of direct program impact.
Grantee Perception Report: Every two years, the Packard Foundation engages the Center for Effective Philanthropy to conduct an anonymous survey of our grantees. We disaggregate the results between OE-grant recipients and non-OE-grant recipients to understand what impact receiving an OE grant may have on an organization. In the 2016 survey, we found that grantees who received an OE grant responded significantly more positively than those who did not across questions including the Foundation’s impact on the grantee’s field, communities, and organization. Read more about the results specifically for OE in my previous blog post.
Partnership Project Evaluation: Starting last year, we commissioned ORS Impact to conduct an evaluation of nine recent cohort-based capacity building projects, which we call Partnership Projects. We wanted to understand the effectiveness of our cohort capacity-building model (for example, a grant given to an intermediary training organization to enhance the capacity of a group of grantee leaders) and learn lessons to help inform future projects. Stay tuned for a blog post on the full results of this evaluation.
OE Service Survey: In 2015, we conducted our first OE service survey, which was an internal survey to Packard Foundation program staff to understand their perception of the impact of OE grants on their program and strategies.
We know we don’t have this exactly right. It’s tricky stuff! You probably noticed that we made a number of assumptions and are relying largely on self-reported data.
This year we are in a bit of a transition as we take stock of what we have learned from our evaluation work thus far and make improvements. We plan to refresh our approach in 2018, taking a hard look at our questions, indicators, and data sources. This is where you all come in. We would love to hear about what you have tried in your evaluation work.
What advice do you have for us on how we can better capture progress toward our goal? How might we better share our learning? Let us know in the comments, or on Twitter @PackardOE.