Data and analytics are being used around the country to improve enrollment and recruitment, with varying degrees of success. Understanding which factors lead to successfully recruiting talented students requires analyzing data on past students. What are the most effective actions recruiters can take to attract the students a university wants? Which characteristics of those students are actually relevant?

The answers may be surprising, as the University of Oklahoma found out. In a wrinkle to the story, Oklahoma also learned what happens when those models are abandoned for a year.

How It Started

Prior to 2015, Oklahoma received only 11,000 to 12,000 applications each fall, a low number for a university of its size and research ranking. With a yield rate of just 40 percent, the president wanted to steadily grow the freshman class and bring in more tuition dollars.

Initial efforts funneled a lot of money into scholarships to entice students to attend. Recruiters also spent a lot of time on the phone, writing emails, and attending events to convince students to enroll. This was expensive and unsustainable.

With increasingly tight budgets, recruitment officers needed to focus their limited resources on the students most likely to enroll. In the past, recruiters too often relied on gut instinct and anecdote. Oklahoma needed a data-informed approach to predict which students would enroll, and to focus recruiting efforts on those prospects. It also needed to know which actions recruitment officers should take to entice students to pick the university, and which actions were ineffective.

Leveraging Predictive Analytics 

Oklahoma’s Institutional Research and Reporting Office (IRR) used SAS predictive analytics software to analyze two years of admission data to create separate models based on residency. It took only five weeks to gather, cleanse and prepare the data, and build four models for each residency group.

Pulling data from seven different sources, IRR ended up with 60 variables, many of which were unreliable, missing, or incomplete. By examining the data to find out what really mattered, the team was able to pare the set down to 20 variables from four sources. Variables included ACT/SAT scores, unmet financial need, scholarships offered, and the number and types of recruiting events students attended.
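A simple version of that pruning step can be sketched in a few lines. The sketch below uses pandas on a made-up admissions extract; the column names, values, and the 50-percent missingness cutoff are all illustrative assumptions, not details from OU's actual data or the IRR team's real (largely manual) vetting process.

```python
import numpy as np
import pandas as pd

# Hypothetical admissions extract: column names are illustrative, not OU's schema.
df = pd.DataFrame({
    "act_score":       [28, 31, np.nan, 24, 29, 33],
    "unmet_need":      [12000, 0, 8000, np.nan, 15000, 3000],
    "events_attended": [2, 0, 1, 3, np.nan, 2],
    "legacy_flag":     [np.nan, np.nan, np.nan, np.nan, 1, np.nan],  # mostly missing
})

# Drop any variable where more than half the values are missing --
# a crude stand-in for deciding which of the 60 variables are usable.
keep = df.columns[df.isna().mean() <= 0.5]
df_clean = df[keep]

print(list(df_clean.columns))  # legacy_flag is dropped
```

In practice a cutoff like this is only a first pass; variables that survive it still need checks for reliability and predictive value before they make it into a model.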

Analysts created four different predictive models for residents and non-residents using these techniques:

  • decision trees
  • logistic regression
  • forward stepwise regression
  • backward stepwise regression

Insights from those analyses informed recruitment efforts. By narrowing the focus to a smaller list of students, recruitment officers could pursue better-prepared students and use fewer resources to do it.

The models, which achieved 89 to 92 percent accuracy, drove recruitment strategies in 2015. The decision trees, for instance, served recruitment officers as a visual aid for determining the most appropriate actions. If a student was from Oklahoma and had an unmet financial need between $10,500 and $20,000, a recruiter could offer a $1,500 scholarship to raise the student's likelihood of enrolling from 50 percent to 90 percent.
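A branch of a decision tree like the one described translates directly into a recruiting rule. The thresholds below come from the article's example; the function itself, its name, and the residency encoding are hypothetical, meant only to show how a tree path becomes an action a recruiter can take.

```python
def suggest_action(state: str, unmet_need: float) -> str:
    """Illustrative rule from one hypothetical decision-tree branch."""
    if state == "OK" and 10_500 <= unmet_need <= 20_000:
        # The article's example: a $1,500 scholarship raised the modeled
        # likelihood of enrolling from about 50 percent to 90 percent.
        return "offer $1,500 scholarship"
    return "standard outreach"

print(suggest_action("OK", 12_000))  # offer $1,500 scholarship
print(suggest_action("TX", 12_000))  # standard outreach
```

Because each leaf of a tree is just a chain of such conditions, recruiters can read the model without any statistics background, which is why trees work well as a visual aid.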

The resulting freshman class was the largest in the university's history, and the most academically prepared. The class included more students ranked number one in their class, and more with a 4.0 GPA, than ever before. It also contained more National Merit Scholars than any other public or private university.

