By Chris Brakenbury, based on a collaborative project within the Office of Advancement at The Ohio State University
I’m pretty sure we were never explicitly asked to create a gift table, but we knew it was coming. Has anyone ever planned, implemented or evaluated a campaign without looking at a gift table? No.
Taking our lessons from Jurassic Park, we weren’t so much concerned with what we could do as with what we should do. It wasn’t a matter of whether we could build a gift table. It was a matter of whether that was the best strategy for the organization.
At their most basic level, gift tables simply tell you how many gifts might be required to raise a certain amount of money. They can tell you some interesting things, but the underlying logic falls apart if you push too hard at the edges.
Gift tables are typically built around prospect-to-gift ratios, which would really be useful if ratios were consistent and understood for every tier of the gift table (they aren’t), for every unit (nope), and for every gift officer (double nope). Most gift tables also assume that giving is binary: either a prospect makes a gift within their rating level or they do not. Gift tables don’t typically recognize that prospects often behave outside the description of their rating levels.
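To make that traditional logic concrete, here is a minimal sketch of how a classic gift table is assembled. The tiers, goal shares, and the fixed 4:1 prospect-to-gift ratio are hypothetical illustrations, not OSU figures; real tables vary these by level.

```python
# A minimal sketch of the traditional gift-table logic described above.
# All tiers, shares, and ratios below are hypothetical illustrations.

def build_gift_table(goal, tiers, prospect_to_gift_ratio=4):
    """For each tier (gift size, share of goal), compute the gifts and
    prospects needed under the classic fixed-ratio assumption."""
    table = []
    for gift_size, share in tiers:
        dollars_needed = goal * share
        gifts_needed = -(-dollars_needed // gift_size)  # ceiling division
        prospects_needed = gifts_needed * prospect_to_gift_ratio
        table.append((gift_size, int(gifts_needed), int(prospects_needed)))
    return table

# Hypothetical $100M goal with a classic top-heavy tier structure
tiers = [(10_000_000, 0.2), (1_000_000, 0.4), (100_000, 0.4)]
for gift, gifts, prospects in build_gift_table(100_000_000, tiers):
    print(f"${gift:>12,}: {gifts} gifts, {prospects} prospects")
```

Note how the fixed ratio is exactly the assumption the article questions: it treats every tier, unit, and officer as interchangeable, and every prospect as a single yes-or-no gift.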
So, where does that prospect fit? And how do you know that in advance of the campaign?
As modern campaigns have become bigger and longer, the reality is that donors’ actual giving behavior is not easily wedged into a narrow rating band or into a single gift occurrence. Anecdotal stories from development officers and prospect development suggested that many donors give multiple times at multiple levels throughout a campaign.
Finally, we needed to consider how we actually use these things. In the planning phase of a campaign, what decisions are made based upon gift tables? What decisions should be made based upon them?
At The Ohio State University (OSU) Office of Advancement, the answer to that question boiled down to two core considerations: how do we test the feasibility of a new campaign goal, and how do we guide resources in support of that goal? We needed a tool that helped us address both of those needs, and we needed the right people to help develop and build it.
Initially, our team included members of three departments: business intelligence and data science for data retrieval, data analysis and tool development; prospect development for subject matter expertise and systems knowledge; and human resources for resource and hiring implications. Along the way, we found ourselves consulting another campaign planning task force and advancement leadership. The broad engagement throughout proved to be critical in both developing a solution that was relevant and to socializing new concepts that were worlds away from a traditional approach.
First, a disclaimer. This approach is likely only valid for large organizations that have been through a sufficiently large campaign and have their own data to reference and analyze. OSU’s findings are valid only for us; you can’t simply apply our data to your organization. Small organizations or short campaigns may find more use in the traditional gift table approach, where the challenges described above are less relevant. I’ll leave it to you to determine the relative definitions of “large” and “small,” but the point is you need enough of your own data to be confident in what it tells you about your future.
First, by evaluating actual prospect and giving data from the previous seven-year campaign, we confirmed our gut reaction that donors act in a diverse set of giving patterns. Many prospects made multiple gifts throughout the campaign — some within, some above and some below their known capacity range. Some made single gifts well above or below their capacity. And yes, some made a single gift that fell within their known capacity range. Of course, some prospects didn’t make a financial gift at all, an important consideration that you can’t overlook.
In this early analysis, we were also beginning to identify what we really needed to know. No longer were we interested in gift instances, but total giving in a campaign period. And by extension, we weren’t interested in prospect-to-gift ratios, but prospect-to-dollar ratios. Instead of answering how many prospects we needed to get a gift, we were answering the much more useful question: How many dollars can we expect from a given prospect over a given period of time? And because we had a sufficiently large prospect count in the previous campaign, we could establish a unique prospect-to-dollar ratio for nearly every tier of the gift table.
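As a rough sketch of that shift, the snippet below derives a dollars-per-prospect figure for each capacity tier from prior-campaign totals. The record layout, tier labels, and dollar amounts are invented for illustration, not OSU data; the one substantive choice, keeping non-donors in the denominator, mirrors the article’s point that prospects who give nothing can’t be overlooked.

```python
from collections import defaultdict

def dollars_per_prospect(prospects):
    """Given (capacity_tier, total_campaign_giving) records from the
    prior campaign, return average campaign dollars per prospect by tier.
    Non-donors (zero totals) stay in the denominator on purpose."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for tier, total_giving in prospects:
        totals[tier] += total_giving
        counts[tier] += 1
    return {tier: totals[tier] / counts[tier] for tier in counts}

# Hypothetical prior-campaign records: (capacity tier, total giving)
history = [
    ("$1M+", 2_500_000), ("$1M+", 0), ("$1M+", 800_000),
    ("$100K", 150_000), ("$100K", 40_000), ("$100K", 0),
]
print(dollars_per_prospect(history))
```

Because the input is total giving over the campaign period rather than individual gift instances, the ratio naturally absorbs multiple gifts, over- and under-capacity gifts, and non-donors alike.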
The prospect-to-dollar approach now allows us to do very useful things, namely answer those questions about goal feasibility and resources required.
- Given our current assigned prospect pool, how much money can we expect to raise in the next campaign?
- Given our current qualification (prospect acquisition) rates, how much money will we raise from new prospects in the next campaign?
- Working backwards from a proposed campaign goal, how many prospects do we need?
- How many new gift officers do we need to hire in order to qualify those prospects?
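Once the per-tier ratios are known, those four questions reduce to straightforward arithmetic. The sketch below is a hypothetical illustration: the ratios, pool counts, goal mix, and qualifications-per-officer figure are all assumptions, not OSU numbers.

```python
import math

# Hypothetical per-tier dollars-per-prospect ratios and current pool counts
RATIOS = {"$1M+": 1_100_000, "$100K": 63_000}
POOL = {"$1M+": 120, "$100K": 900}

def projected_total(pool, ratios):
    """Forward: expected campaign dollars from the current assigned pool."""
    return sum(count * ratios[tier] for tier, count in pool.items())

def prospects_needed(goal, ratios, mix):
    """Backward: prospects required per tier to reach a goal, given the
    share of the goal (mix) each tier is expected to carry."""
    return {tier: math.ceil(goal * share / ratios[tier])
            for tier, share in mix.items()}

def officers_to_hire(new_prospects, quals_per_officer=25):
    """Staffing estimate: hires needed to qualify the additional
    prospects, at an assumed qualification rate per officer."""
    return math.ceil(new_prospects / quals_per_officer)

print(projected_total(POOL, RATIOS))
print(prospects_needed(500_000_000, RATIOS, {"$1M+": 0.6, "$100K": 0.4}))
```

The same three functions cover both directions of the planning problem: project forward from the pool you have, or work backward from the goal you want to the prospects and officers it implies.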
With some informed tweaking of those prospect-to-dollar ratios, we can even evaluate what happens if we become more efficient or more effective in our work. For example, how do the requirements change if we can increase the average campaign dollars per prospect by “x” percent? What if our qualification rate increases? These added layers allow us to explore the full implications of resource investments in many different areas.
Not only did this become a powerful planning tool for our campaign, but it also gave us a key metric to measure moving forward: prospect-to-dollars. As the advancement team at OSU continues to become more sophisticated, we would expect our outcomes and data to reflect that. We can monitor real-time results during the campaign and adjust resources as necessary, while also comparing this campaign to the last. And, assuming there’s another campaign for OSU down the road (there will be!), we’ll have even more data to apply in those planning efforts.
Key contributors to this project at The Ohio State University include Chris Brakenbury, Chris Delisio, Jon Gerckens, Maureen Henry, Sarah Luckey, Stephanie Mizer, Doug Plummer, Molly Schmied, Stephanie Vaver, Rachel Weiss-Berger, Paul Young, and Jordan Zivoder.
This article relates to the Campaigns domain in the Apra Body of Knowledge.
Are you interested in learning how to elevate your organization’s campaigns? Check out Apra’s Campaign Bundle 1, which includes five educational recordings that will help you plan, support and analyze campaigns.