Is your institution being accurately compared?

1. Input adjustment

According to the brief, ‘input adjustment’ means adjusting outcomes to reflect inputs, such as the characteristics and backgrounds of entering students.

“Although indicators such as graduation rates, student persistence, and labor market outcomes are commonly used as measures of institutional performance, information about students’ academic preparation and other factors is often not taken into account,” explains the brief. “The failure to account for the characteristics of entering students and institutional mission can lead to misleading comparisons.”

In other words, says the brief, given the wide differences in the characteristics of students who enroll in postsecondary institutions, a true measure of value added during their education needs to take into account their starting points.

2. Determine the right peer-group factors for comparison

The appropriate comparison variables depend on the goal of the assessment, notes the brief. For example, to estimate value added, academic background (SAT/ACT scores), student financial need (percent receiving Pell grants), student demographics, and institutional characteristics (e.g., enrollment, Carnegie classification) might be used to calculate a predicted graduation rate for each institution, which can then be compared with actual outcomes.
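As a simplified illustration of this approach (the data below are hypothetical, and the brief does not prescribe a particular model), a predicted graduation rate can come from regressing observed rates on an input measure such as median SAT score; the actual-minus-predicted residual then serves as a rough value-added estimate:

```python
# Simplified sketch: predict graduation rates from a single input measure
# (median SAT), then compare each institution's actual rate to its prediction.
# Data are hypothetical; a fuller model would also include Pell share,
# demographics, and institutional characteristics.

institutions = [
    # (name, median SAT, actual graduation rate %)
    ("College A", 1050, 48.0),
    ("College B", 1200, 62.0),
    ("College C", 1350, 78.0),
    ("College D", 1100, 58.0),
]

n = len(institutions)
xs = [sat for _, sat, _ in institutions]
ys = [rate for _, _, rate in institutions]
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Ordinary least squares with one predictor.
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

for name, sat, actual in institutions:
    predicted = intercept + slope * sat
    residual = actual - predicted  # positive = outperforming its inputs
    print(f"{name}: predicted {predicted:.1f}%, actual {actual:.1f}%, "
          f"residual {residual:+.1f}")
```

An institution with a positive residual graduates more students than its inputs would predict, which is a very different signal than its raw graduation rate alone.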

“Grouping higher education institutions often differs depending on the goal of the classification,” says the brief. “[Yet], it is important not to use too many variables to define the peer groups, both for practical reasons and face validity. However, using a small number of variables can leave substantial differences among potential comparison colleges.”

The brief also emphasizes that those who use PIRS should “have the option of customizable comparison groups, based on comparing particular institutions on region, selectivity, programs offered, and student age.”
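A customizable comparison group of this kind is essentially a filter over institutional attributes. As a minimal sketch (the field names and records below are hypothetical, since no schema is specified), a peer group might be selected like this:

```python
# Minimal sketch of a customizable peer group: filter a hypothetical
# institution list on region, selectivity, and programs offered.

institutions = [
    {"name": "College A", "region": "Midwest", "selectivity": "moderate",
     "programs": {"nursing", "business"}},
    {"name": "College B", "region": "Midwest", "selectivity": "moderate",
     "programs": {"engineering", "business"}},
    {"name": "College C", "region": "South", "selectivity": "high",
     "programs": {"nursing", "education"}},
]

def peer_group(data, region=None, selectivity=None, program=None):
    """Return institutions matching every criterion that is specified."""
    return [
        inst for inst in data
        if (region is None or inst["region"] == region)
        and (selectivity is None or inst["selectivity"] == selectivity)
        and (program is None or program in inst["programs"])
    ]

peers = peer_group(institutions, region="Midwest", selectivity="moderate")
print([inst["name"] for inst in peers])  # the two Midwest, moderately selective colleges
```

Leaving a criterion unset simply widens the group, which mirrors the brief's point that users should be able to tune the comparison to their question.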

Some of the variables currently being considered for comparison groups in other higher-education data efforts, including by the IPEDS Technical Review Panel (TRP), include:

  • Use of distance education
  • Enrollment size
  • Selectivity
  • Region
  • Level (institutions whose highest award offered differs from the majority of the degrees they confer)
  • Predominant undergraduate credential

3. Diversity exists even within broad categories of institutions grouped by mission

Institutions in the same sector and state may vary widely in terms of the characteristics of their programs.

For example, the brief explains that at four-year institutions, the extent of research and the proportion of graduate students differ considerably, as does the extent of public service and extension programs.

“Community colleges have outcomes ranging from completion of certificates of varying lengths, completion of associate’s degrees, transfer rates to four-year institutions, and even non-credit work,” says the brief. “As a result of different missions, the mix of programs may differ substantially across colleges, which can distort comparisons even within broad institutional types. The mix of programs of varying levels and types as well as research and other activities will impact outcome measures.”

One strategy for deciding on the “appropriate way” to compare groups might be to use available data on institutional mission as the first cut, then program- and student-related factors as the second cut, explains the brief.
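This two-cut strategy amounts to a nested grouping: institutions are first bucketed by mission, then subdivided by a program- or student-related factor. A minimal sketch with hypothetical data (here using predominant credential as the second cut):

```python
from collections import defaultdict

# First cut: mission category; second cut: a program/student factor
# (here, predominant undergraduate credential). Data are hypothetical.

institutions = [
    ("College A", "baccalaureate", "bachelor's"),
    ("College B", "baccalaureate", "bachelor's"),
    ("College C", "associate's", "certificate"),
    ("College D", "associate's", "associate's"),
]

groups = defaultdict(list)
for name, mission, credential in institutions:
    # The composite key applies both cuts at once.
    groups[(mission, credential)].append(name)

for key in sorted(groups):
    print(key, groups[key])
```

Note how College C and College D share a mission but land in different comparison groups, which is exactly the distortion the second cut is meant to catch.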

For more information on the three considerations, as well as state-by-state institution comparisons, read the brief.