Data and analytics are being used around the country to improve enrollment and recruitment, with varying degrees of success. Understanding the factors that lead to successfully recruiting a talented student requires analyzing data on past students. What are the most effective actions recruiters can take to attract the students a university wants? Which characteristics of those students are actually relevant?

The answers may be surprising, as the University of Oklahoma discovered. In an interesting wrinkle to the story, Oklahoma also learned what happens when those models are abandoned for a year.

How It Started

Prior to 2015, Oklahoma received only 11,000-12,000 applications each fall, a low number for a university of its size and research ranking. With the yield rate at a low 40 percent, the president wanted to grow the freshman class continuously and bring in more tuition dollars.

Initial efforts funneled a lot of money into scholarships to entice students to attend. Recruiters also spent a lot of time on the phone, in emails, or at events trying to convince students to enroll. This approach was expensive and unsustainable.

With increasingly restrictive budgets, recruitment officers needed to focus their limited resources on the students most likely to enroll. In the past, recruiters too often relied on gut instinct and anecdote. Oklahoma needed a data-informed approach to predict which students would enroll, and to focus recruiting efforts on those prospects. It also needed to know which actions recruitment officers should take to entice students to pick the university, and which actions were ineffective.

Leveraging Predictive Analytics 

Oklahoma’s Institutional Research and Reporting Office (IRR) used SAS predictive analytics software to analyze two years of admission data to create separate models based on residency. It took only five weeks to gather, cleanse and prepare the data, and build four models for each residency group.

Pulling data from seven different sources, IRR started with 60 variables, many of them unreliable, missing, or incomplete. By examining the data to find out what really mattered, the office pared the list down to 20 variables from four sources. Variables included ACT/SAT scores, unmet financial need, scholarships offered, and the number and types of recruiting events each student attended.

Analysts created four different predictive models for residents and non-residents using these techniques:

  • decision trees
  • logistic regression
  • forward stepwise regression
  • backward stepwise regression
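The article does not show the SAS code behind these models, but the four techniques can be sketched in Python with scikit-learn. Everything below is a hypothetical illustration: the data is synthetic and the feature count, model settings, and results are invented, not taken from Oklahoma's actual models.

```python
# Hypothetical sketch of the four modeling techniques named above,
# using scikit-learn in place of SAS and synthetic data in place of
# real admissions records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Six invented stand-ins for variables like test scores, unmet need,
# and recruiting events attended.
X = rng.normal(size=(n, 6))
# Synthetic enrollment outcome driven by two of the six features.
signal = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * rng.normal(size=n)
y = (signal > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Decision tree: yields interpretable if/then rules recruiters can read.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# 2. Logistic regression: a per-student enrollment probability.
logit = LogisticRegression().fit(X_tr, y_tr)

# 3-4. Forward and backward stepwise selection, wrapped around the
# logistic model, paring the variables down to the ones that matter.
forward = SequentialFeatureSelector(
    LogisticRegression(), n_features_to_select=2, direction="forward"
).fit(X_tr, y_tr)
backward = SequentialFeatureSelector(
    LogisticRegression(), n_features_to_select=2, direction="backward"
).fit(X_tr, y_tr)

tree_acc = tree.score(X_te, y_te)
logit_acc = logit.score(X_te, y_te)
print(f"tree accuracy:  {tree_acc:.2f}")
print(f"logit accuracy: {logit_acc:.2f}")
print("forward picks: ", forward.get_support())
print("backward picks:", backward.get_support())
```

In practice the four models would be compared on held-out data, as the accuracy figures below suggest Oklahoma did, and the stepwise passes help explain how 60 candidate variables shrank to 20.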

Insights from those analyses informed recruitment efforts. By narrowing the focus to a smaller list of students, recruitment officers could pursue better-prepared students, and use fewer resources to do it.

The models, which achieved 89-92 percent accuracy, drove recruitment strategies in 2015. The decision trees, for instance, served as a visual aid that helped recruitment officers determine the most appropriate actions. For example, if a student was from Oklahoma and had an unmet need between $10,500 and $20,000, the recruiter could offer a $1,500 scholarship to raise the student's likelihood of enrolling from 50 percent to 90 percent.
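That scholarship rule reads like a single branch of a decision tree. As a toy illustration, here is that one branch encoded directly in Python; the thresholds and probabilities come from the article's example, but the function itself is hypothetical and the rest of the real tree is not shown.

```python
# Hypothetical encoding of the single decision-tree branch described
# above. Only this one branch is known from the article.
def modeled_probability(resident: bool, unmet_need: float,
                        scholarship_offered: float):
    """Return the modeled enrollment probability for one illustrative branch."""
    if resident and 10_500 <= unmet_need <= 20_000:
        # The article's example: a $1,500 scholarship lifts the
        # predicted likelihood of enrolling from 50% to 90%.
        return 0.90 if scholarship_offered >= 1_500 else 0.50
    return None  # other branches of the real tree are not shown

print(modeled_probability(True, 15_000, 0))       # no scholarship -> 0.5
print(modeled_probability(True, 15_000, 1_500))   # scholarship -> 0.9
```

This is what makes trees useful as a visual aid: each branch translates directly into an if/then action a recruiter can take.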

The resulting freshman class was the largest in the university's history, and the most academically prepared. The class included more students ranked number one in their class, and more with a 4.0 GPA, than ever before. The class contained more National Merit Scholars than any other public or private university.


Lessons Learned

However, the year after that record-setting class, the models went unused amid a restructuring and reorganization of admissions and recruitment. The university fell short of its enrollment goals.

This year, the models are back and Oklahoma has exceeded its enrollment goals, setting a new school record in the process.

There have been other benefits, as well. The project improved the university’s data collection, resulting in far more reliable data.

The system is also helping at the director level. IRR combines each student's enrollment probability and available financial information with forecasts and scenario analyses, so the director of Admissions and Recruitment knows the application and admission targets that will lead to the desired enrollment goal. The director can also balance workloads by using the predictive model to plan how many students each recruiter will work with.
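The roll-up behind that kind of forecast is straightforward: summing per-student probabilities gives an expected class size, and a target enrollment works backward to an admissions goal. A minimal sketch, with invented numbers:

```python
# Minimal sketch of a probability-based enrollment forecast.
# All figures are invented for illustration.
probs = [0.9, 0.8, 0.35, 0.6, 0.15, 0.7]  # modeled per-student probabilities

# Expected enrollees: the expectation of a sum of Bernoulli outcomes
# is simply the sum of the probabilities.
expected = sum(probs)

avg_yield = expected / len(probs)  # forecast yield rate for this pool
goal = 10                          # hypothetical enrollment goal
admits_needed = goal / avg_yield   # admits required at this yield

print(round(expected, 2))       # expected class size from this pool
print(round(avg_yield, 3))      # forecast yield
print(round(admits_needed, 1))  # admissions target implied by the goal
```

Scenario analysis then amounts to re-running this roll-up after adjusting individual probabilities, for example after modeling the effect of an additional scholarship offer.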

IRR is now analyzing data from areas such as selectivity, retention, and student satisfaction with the goal of creating a more complete, data-informed view of all the factors affecting a student’s likelihood of enrolling, staying and successfully completing a degree. More and more of this information will be fed into IRR’s reporting dashboard to more easily share performance data across the university.

The models provided surprising insights that turned some assumptions upside down. Oklahoma found that better-prepared residents are not more likely to enroll. Perhaps more shocking, large scholarship amounts were not a significant predictor of enrollment.

Less shocking was the lesson about what happens when analytics is left out of the recruitment equation. Learn more about how analytics across the student lifecycle can lead to greater success.

About the Author:

Georgia Mariani is product marketing manager for Education for SAS, an analytics and business intelligence provider with nearly four decades of experience working with educational institutions. A 19-year SAS veteran, Mariani works with customers to share best practices, successes and recommendations that enable education institutions to get the most productive insights from their data.

