Capacity building is notoriously tricky to evaluate. Here in the Packard Foundation’s Organizational Effectiveness (OE) team, we evaluate our work in several ways, and we are always trying to improve so we can increase our impact. This post provides a brief overview of our monitoring, evaluation, and learning approach over the last three years. We wanted to share it with you and get your input so that we can improve our work in 2018.
Not familiar with OE? In a nutshell, we give capacity building grants to existing individual grantees of the Foundation’s four main program areas, and we work together with grantees and our program colleagues to design and implement cohort capacity building opportunities. We do this because we believe that when organizations have strong leadership, thoughtful strategy, sound operations, and compelling communications, they are better equipped to achieve the change they want to see in the world. Most of our grants go to a single nonprofit organization to work on one to three specific focus areas, such as strategic planning, leadership development, or communications planning.
Here are two key questions we look to answer through our evaluation work:
Our questions
- To what extent do OE grants result in greater capacity, in the short and longer term? (For example, did the strategic planning grant to City Food Bank result in improved long-term planning?)
- To what extent do OE grants amplify the grantee’s impact and the impact of the Foundation’s programs? (Continuing the same example: did the improved long-term planning that came out of the strategic planning grant result in greater food security in the community?)
The outcomes we hope to see are, of course, that our grants result in greater capacity and amplify program impact. Here are a few of the signs of progress, or indicators, that serve as proxies for how well we are reaching those outcomes.
Our indicators
- % of OE grantees that report building capacity in the specific focus area(s) of the OE grant at the end of our funding and two years later;
- % of OE grantees that report direct impact on their program work from the OE grant at the end of our funding and two years later;
- % of Foundation program staff who state that OE grants enhance their ability to achieve program goals.
Here’s how we have collected the data to measure our indicators (the ones listed above as well as others not listed):
Our data sources
Systematic review and coding of grantee reports: At the end of each OE grant, we ask the grantee to submit a final narrative report describing what was achieved. When reviewing these reports, we systematically code them for several measures, including whether the project objectives were met and whether there was any evidence of the OE project impacting the organization’s ability to achieve its mission. This allows us to look across all our grants in a year and see what trends emerge. In 2016, we found that 82% of individual grants met or exceeded their OE grant objectives and 15% reported evidence of direct program impact. To read more about our findings, check out my previous blog post on this topic.
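For readers who like to see the mechanics, the tallying step is simple enough to sketch in a few lines. This is a minimal illustration only: the record structure, the field names (`met_objectives`, `program_impact`), and the grantee names other than City Food Bank are hypothetical assumptions, not our actual coding scheme or tooling.

```python
# Illustrative sketch only: hypothetical coded-report records and field names.
reports = [
    {"grantee": "City Food Bank", "met_objectives": True, "program_impact": True},
    {"grantee": "River Land Trust", "met_objectives": True, "program_impact": False},
    {"grantee": "Harbor Arts Center", "met_objectives": False, "program_impact": False},
]

total = len(reports)
met = sum(r["met_objectives"] for r in reports)      # True counts as 1
impact = sum(r["program_impact"] for r in reports)

print(f"Met or exceeded objectives: {met / total:.0%}")
print(f"Evidence of direct program impact: {impact / total:.0%}")
```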
Interviews with grantees two years after grant close: The OE team has hired an external evaluator to interview a sample of grantees two years after their OE grant closes. We wanted to know whether the outcomes achieved through the OE grant had a lasting effect on the organization and whether the grant spurred any new or deeper transformations. In 2016, 95% of grantees interviewed reported that their increased capacity had been sustained or expanded since the OE grant closed, and 35% shared evidence of direct program impact.
Grantee Perception Report: Every two years, the Packard Foundation engages the Center for Effective Philanthropy to conduct an anonymous survey of our grantees. We disaggregate the results between OE-grant recipients and non-OE-grant recipients to understand what impact receiving an OE grant may have on an organization. In the 2016 survey, we found that grantees who received an OE grant responded significantly more positively than those who did not across questions about the Foundation’s impact on their field, their communities, and their organization. Read more about the OE-specific results in my previous blog post.
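As a rough sketch of what that disaggregation step can look like (this is not CEP’s actual methodology), here is one way to compare ratings between the two groups. The column names, the 1–7 rating scale, and the use of Welch’s t-test are all assumptions for illustration.

```python
# Illustrative sketch only: hypothetical survey data and column names.
# The real Grantee Perception Report is fielded and analyzed by CEP.
import pandas as pd
from scipy import stats

survey = pd.DataFrame({
    "received_oe_grant": [True, True, True, False, False, False],
    "impact_on_field":   [6, 7, 6, 5, 4, 5],  # hypothetical 1-7 rating
})

oe = survey.loc[survey["received_oe_grant"], "impact_on_field"]
non_oe = survey.loc[~survey["received_oe_grant"], "impact_on_field"]

# Welch's t-test: does not assume equal variances between groups
t_stat, p_value = stats.ttest_ind(oe, non_oe, equal_var=False)
print(f"OE mean: {oe.mean():.2f}, non-OE mean: {non_oe.mean():.2f}, p = {p_value:.3f}")
```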
Partnership Project Evaluation: Last year, we commissioned ORS Impact to conduct an evaluation of nine recent cohort-based capacity building projects, which we call Partnership Projects. We wanted to understand the effectiveness of our cohort capacity building model (for example, a grant to an intermediary training organization to build the capacity of a group of grantee leaders) and to draw lessons that will inform future projects. Stay tuned for a blog post on the full results of this evaluation.
OE Service Survey: In 2015, we conducted our first OE service survey, an internal survey of Packard Foundation program staff to understand their perception of the impact of OE grants on their programs and strategies.
What’s next
We know we don’t have this exactly right. It’s tricky stuff! You probably noticed we made a number of assumptions and are relying largely on self-reported data.
This year we are in a bit of a transition as we take stock of what we have learned from our evaluation work thus far and make improvements. We plan to refresh our approach in 2018, taking a hard look at our questions, indicators, and data sources. This is where you all come in. We would love to hear about what you have tried in your own evaluation work.
What advice do you have for us on how we can better capture progress toward our goal? How might we better share our learning? Let us know in the comments, or on Twitter @PackardOE.
Hi Arum –
Sounds like you are already doing some very thoughtful post-funding evaluation work. What wasn’t clear from your description is whether you are getting mid-course feedback from grantees. One of the challenges in learning what’s working (and what may not be) is that folks often have selective memories by the time an initiative is wrapping up.
If Packard is not currently gathering mid-course feedback from grantees, you may want to consider a short survey, perhaps supplemented with a small set of interviews, to better understand grantee perceptions at key points in the process. This wouldn’t be an evaluation but rather an opportunity to hear what’s “top of mind” for grantees. The basic questions you might want to ask include: (1) What’s working? Where are grantees seeing progress? And (2) what remains challenging? What are the key barriers they face? In each case you’d also want to hear their explanations of why they think this is happening. The six-month mark might be a good time to get this kind of input.
The value of hearing from grantees as their initiatives unfold is that it will likely give you more concrete and real-time insights than you’d typically get at the close of a grant. Additionally, Packard might be in a position to make mid-course corrections to this grant program, if the findings suggest that this is needed/appropriate for some grantees.
Hope this is helpful…as food for thought!
Cheers, Josh
Hi Josh,
Thank you for your insightful comment! We don’t systematically gather mid-course feedback from grantees. We sometimes collect mid-course survey data from participants in our Partnership Projects (cohort-based capacity building grants) to understand their experience, since these projects tend to be longer and involve many grantees. I do think it would be a great idea to gather mid-course feedback from all our grantee partners and capture information while it is still fresh, as you point out. Our main challenge is the bandwidth to administer the surveys and analyze the results. We are also mindful of grantee burden and don’t want to add requirements (although the survey would be optional). We will continue to reflect on this idea and see how we can incorporate more real-time feedback during the life of the grant. Great suggestion. Keep them coming!
Arum