One of the things I enjoy about working at the Packard Foundation is the emphasis on learning. In my past life, evaluation work was called M&E (Monitoring and Evaluation), but here it is MEL (Monitoring, Evaluation, and Learning). Turns out, the addition of that one letter actually matters. It matters because our evaluation efforts are not primarily about assessing project performance, but about learning what we can do differently and better going forward. And to me, that is a lot more fun to think about.
I joined the Organization Effectiveness (OE) team a few months ago and manage our OE evaluation. The OE team makes grants to strengthen the capacity of existing Packard Foundation grantees. We have had two major evaluation efforts in the past few years, which we call “Sharing Learning” and “Lasting Change.” For the Sharing Learning report, our external evaluator, ORS Impact, reviewed the final reports of 55 OE grants. The report found that most grantees report meeting all project objectives (82%) and an increase in capacity in the project focus area (91%). A smaller percentage (15%) report evidence of direct program impacts from the OE project. These results have been fairly consistent year after year.
For our Lasting Change report, our external evaluator also interviews a sample of grantees one to two years after project close to see whether capacity increases are sustained. Our 2016 evaluation found that 95% of organizations interviewed shared that capacity built through the OE project continued or expanded after the grant concluded. Thirty-five percent shared evidence of direct impact on the program work funded by the Foundation.
Here is what I learned from OE’s evaluation. For a start, the majority of organizations accomplish what they set out to do in their proposals, and that work results in greater organizational capacity. Yay! That is a win for sure and what the OE team expects to see. Also, for most organizations, the capacity gains last beyond the project. However, I also learned that the results are heavily influenced by how well final reports are written and what information grantees choose to share in them. Program impact is hard to discern, particularly at project close, since we know it takes time for capacity increases to have programmatic impacts, and sometimes the impact is not direct (e.g., a focused strategic plan, once implemented, can help a nonprofit reach more beneficiaries, but that won’t be immediate or directly attributable).
So the next step – the fun part for me – is interpreting how our evaluation work can help us do things differently and better going forward. We know OE grants improve organizational capacity. Now there are more questions to explore to improve our work. For example, what are the conditions that make an OE grant successful? Do certain kinds of OE grants have greater impact? How can we make our final report process more helpful to us and our nonprofit partners?
We’re deep into looking at ways to re-design our evaluation efforts to increase our learning next year. We’d love to hear your comments. What do you think is important to evaluate? How can we improve the way we collect data from nonprofits and learn from it?
Cheers to you, Arum and OE team! It’s so cool to see how nearly all the grantees built organizational capacity thanks to OE investments! Plus, the popup stats in the Sharing Learning report are beautiful and make your easy-to-read findings even easier to decipher 🙂
My 2 cents: It would be awesome to evaluate the factors contributing to attainment of objectives for Population and Reproductive Health Programs versus Local Grantmaking Programs. Also, diving deeper into direct program impacts would be fascinating (outside of the timing of the Sharing Learning evaluation). The 15% of grantees asking for help finding consultants are referring to additional consultants outside of their OE consultant, right? If so, it would be interesting to collect data on the effect outside consultants have on direct program impacts.
Thanks, Annie, for the thoughtful comment! The Sharing Learning and Lasting Change reports were done by ORS Impact, so a big shout-out and thanks to them for the beautiful reports. We’re glad you found them easy to read!
The evaluation did look into differences between programs (i.e., population and reproductive health vs. local grantmaking), but nothing stood out about what makes OE grants in one program area more effective than in others. We’re hoping that in future evaluations we might be able to see those kinds of nuances better.
The 15% of grantees asking for help finding consultants refers to finding the OE consultant, since our grants mostly pay consultant fees for capacity-building projects. For a number of reasons, the Foundation doesn’t make recommendations or provide assistance on that part of the process. For some organizations that is not a problem, but we are hearing that for others it may not be ideal. We’re experimenting with ways to help with that but haven’t cracked the code yet.