What Gets Measured Gets Managed

Ohio University successfully raised $500 million for The Promise Lives Campaign by overhauling its officer metrics process

In the late fall of 2011, Ohio University was about halfway through The Promise Lives Campaign. Launched in July 2007, the comprehensive capital campaign had a stated goal of $450 million. That ambitious target was more than double that of the campaign’s predecessor, but there was ample reason to be optimistic. Two gifts of more than $100 million each and one that totaled more than $40 million had us well on our way.

We did have challenges, though. Those three gifts would ultimately represent more than half of the (spoiler alert!) $500 million raised by the time the campaign ended in June 2015. In December 2011, they represented nearly 70 percent of what we’d raised to date. We had, in other words, a long way to go to reach goal.

Data was challenge No. 1. Our pipeline was being managed in Excel. Goals were being “monitored” in paper binders. We were trying to measure and manage, but we didn’t have a great handle on the information we needed to do either of those things.

We also had a major pipeline problem, regardless of where it was stored: At that point, more than 60 percent of our major donors to the campaign also had made major gifts in the prior campaign. We had a group of loyal donors who were making this campaign a success, but we knew that we couldn’t necessarily count on them to keep coming through. We’d gone back to that well a lot over the past few years. It was running dangerously close to dry.

One of the ways we breathed new life into the campaign was with a sea change in our officer metrics process.

Step 1: If it’s not in the database, it didn’t happen.

The first thing we did was get rid of that spreadsheet — with extreme prejudice. We had every officer review every ask they had planned and gave them a one-time free pass: If it’s not real, get rid of it now. Anything that remained was loaded into our customer relationship management (CRM) platform — in our case, Ellucian’s Advance Web Access — and assigned to an officer.

We were lucky in one important way: Contact reports were being entered into the CRM fairly consistently. But we reminded every officer that lapses in report entry were going to be bad for them: If the report wasn’t in there, the visit or the ask or the whatever didn’t happen. Period.

Step 2: Dollars raised are an important thing, but they aren’t the only thing.

When it came time to put together our new metrics system, we knew we needed to focus on more than just the dollars raised. We gathered a small work group — the associate vice president for development, assistant vice president for advancement services, executive director of development and the director of prospect research and management — and we left the office. We would grab a meeting room in the university center or the library, and we’d hunker down there for a few hours at a time to kick around ideas.

We did not involve any leadership from the academic units, nor did we create a large work team as had often been done in the past. These were conscious decisions to avoid the pitfall of trying to come up with a single system that completely pleases everyone.

Step 3: Thank you, Mr. Data. Make it so.

The result of our planning was that we wanted officers to be conscious of how much they were raising, but we also wanted them to make enough asks. We needed to increase the pipeline, not just for this campaign, but for subsequent campaigns as well. We didn’t want one or two large gifts to make for a successful year all on their own. In short, we needed both quantity and quality.

We landed on what seemed at the time to be an elegant solution. We’d award up to 200 points to each officer based on performance in two key areas:

  1. 100 points for the percentage reached of the financial goal ($750,000 for directors of development and $350,000 for assistant directors);
  2. 100 points for the percentage reached of the 20-ask goal (minimum $25,000 for directors and $10,000 for assistant directors).

One key element: Each goal maxed out at 100 points, so over-performing in one area would not completely offset poor performance in the other. We did this to ensure that each goal remained a priority for our team. (See the example at the bottom of the page.)

Those point totals would then be used to determine whether each officer exceeded (165 to 200 points), met (150 to 164 points) or fell below (fewer than 150 points) standards.
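The capped, two-part scoring described above can be sketched in a few lines of code. This is our illustration, not the university’s actual tooling; the function names and the use of exact (unrounded) percentages are assumptions.

```python
# Hypothetical sketch of the two-part officer score: up to 100 points for
# percentage of the financial goal reached, up to 100 for percentage of the
# 20-ask goal reached. Each component is capped at 100 so over-performance
# in one area cannot fully offset under-performance in the other.

def officer_score(dollars_raised, dollar_goal, asks_made, ask_goal=20):
    """Return total points (0-200) for one officer for the period."""
    dollar_pts = min(100.0, 100.0 * dollars_raised / dollar_goal)
    ask_pts = min(100.0, 100.0 * asks_made / ask_goal)
    return dollar_pts + ask_pts

def rating(points):
    """Map a point total to the exceeded/met/below standard bands."""
    if points >= 165:
        return "exceeded"
    if points >= 150:
        return "met"
    return "below"

# Example: a director of development ($750,000 goal) who raised $900,000
# but made only 10 of 20 asks: 100 (capped) + 50 = 150 points -> "met".
pts = officer_score(900_000, 750_000, 10)
```

Note how the cap does the real work: without it, this director’s 120 percent of financial goal would mask making only half the expected asks.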

Ultimately, all of this data was collected in a three-page report that also included a list of asks made and a list of proposals that were funded during the reporting period. Each report also listed the number of visits made and how that total compared with the team’s average.

But before all of that could be distributed for the first time, we needed to let everyone know that changes were coming. We put together a presentation describing the plan and the motivations behind our choices. It was delivered by the director of prospect research and management at a gift officer meeting and then at a subsequent meeting of our foundation’s board of trustees. That meeting often serves as a de facto executive staff meeting, so it also provided us the opportunity to share with deans and other university leaders.

That All Sounds Fancy. But What Happened?

Generally, this process was a success. We saw increases in each of the following areas: median ask amount, total asks made, asks to recent graduates and first-time asks.

We did encounter two interesting concerns as well: an increase in the time from first contact to ask and a decrease in the overall dollars raised directly by our officers. Interestingly, first contact to close started at an average of just five to six months and increased to the more accepted standard of 18 months in year one and 12 months in year two. While we have no way of knowing for certain why those things happened, we can surmise that they resulted from more deliberate cultivation practices and an increase in first-time asks, both of which are actually long-term positives for us. In addition, during this same general period, we introduced a matching program for gifts to certain scholarship accounts. This also drove up the number of first-time major gifts, as the match was an excellent entry point, especially for younger donors.

What Did We Learn?

We reached our campaign goal a full year ahead of schedule and ultimately raised more than $500 million. So, from the 30,000-foot view, it worked. But that doesn’t mean we didn’t make mistakes. We did, and we learned a few things along the way:

  • Find a way to involve the academic unit leaders. While they ultimately bought in, having that earlier in the process would have helped, particularly in dual-reporting situations.
  • Even if you don’t have strict goals around them, don’t forget to monitor contacts and visits. Asks derive only from those activities, so failing to keep an eye on them can point your team in the wrong direction.
  • There’s often a temptation when preparing a metrics-based review system to focus on those who appear to be underperforming. Be conscious of that, and be sure to focus instead on encouraging the successful behaviors of those who are already performing well. You’ll reward them and encourage everyone else to emulate their success.
  • Most importantly, think about what your goals are, either for a given period or overall, before you choose what you will measure and manage. You will influence behavior. How you influence it is up to you.

“What gets measured gets managed.” That quote is from Peter Drucker’s 1954 book “The Practice of Management” and is often misquoted as, “You can’t manage what you can’t measure.” We found that the difference between those quotes is where success lies. The wrong version implies that you can only manage what you measure. But the correct version reminds us that we choose what we measure and, therefore, what we manage. Choose wisely.

Joe Pauwels is the assistant vice president for advancement services at Ohio University. He has 20 years of experience in the field, dating back to his time as a student caller for his alma mater, Michigan State University.

Sam Venable is a customer success manager at Reeher LLC. Prior to his current role, he was fortunate enough to work on both sides of the table — as a frontline officer and a prospect researcher — and finds the intersection of data and narrative fascinating.
