One of the things I enjoy about working at the Packard Foundation is the emphasis on learning. In my past life, evaluation work was called M&E (Monitoring and Evaluation), but here it is MEL (Monitoring, Evaluation, and Learning). Turns out, the addition of that one letter actually matters. It matters because our evaluation efforts are not primarily about assessing project performance but about learning what we can do differently and better going forward. And to me, that is a lot more fun to think about.
I joined the Organizational Effectiveness (OE) team a few months ago and manage our OE evaluation. The OE team makes grants to strengthen the capacity of existing Packard Foundation grantees. We have had two major evaluation efforts in the past few years, which we call “Sharing Learning” and “Lasting Change.” For the Sharing Learning report, our external evaluator, ORS Impact, reviewed 55 OE grants’ final reports. The report found that most grantees do report meeting all project objectives (82%) and an increase in capacity in the project focus area (91%). A smaller percentage (15%) report evidence of direct program impacts from the OE project. These results have been fairly consistent year after year.
A sample of grantees is also interviewed by our external evaluator one to two years after project close for our Lasting Change report, to see if capacity increases are sustained. Our 2016 evaluation found that 95% of organizations interviewed shared that capacity built through the OE project continued or expanded after the grant concluded. Thirty-five percent shared evidence of direct impact on the program work funded by the Foundation.
Here is what I learned from OE’s evaluation. For a start, the majority of organizations accomplish what they set out to do in their proposals, and that work results in greater organizational capacity. Yay! That is a win for sure and what the OE team expects to see. Also, for most organizations, the capacity increases last beyond the project. However, I also learned that the results are heavily influenced by how well final reports are written and what information grantees choose to share in them. Program impact is hard to discern, particularly at project close, since we know it takes time for capacity increases to have programmatic impacts, and sometimes the impact is not direct (e.g., a focused strategic plan, once implemented, can help a nonprofit reach more beneficiaries, but that won’t be immediate or directly attributable).
So the next step – the fun part for me – is interpreting how our evaluation work can help us do things differently and better going forward. We know OE grants improve organizational capacity. Now there are more questions to explore so we can do our work better. For example, what are the conditions that make an OE grant successful? Do certain kinds of OE grants have greater impact? How can we make our final report process more helpful to us and our nonprofit partners?
We’re deep into looking at ways to redesign our evaluation efforts to increase our learning next year. We’d love to hear your comments. What do you think is important to evaluate? How can we improve the way we collect data from nonprofits and learn from it?