Higher education standardized assessments are coming—but is there a way to turn them from the dark side?
As colleges and universities are increasingly required to “prove” efficacy of teaching and learning, many conversations—especially at the federal level—are circling around developing standardized assessments for higher education.
Naturally, postsecondary stakeholders and faculty worry that these assessments could have a negative impact, and shudder at the prospect of metrics mirroring those of K-12. But is it all doom-and-gloom in the standardized assessment realm, or can a postsecondary-specific design work to higher education’s advantage?
In this month’s Symposium, Dr. Fredrik deBoer, a Purdue University scholar and academic researcher, argues that accurately and fairly assessing postsecondary learning outcomes requires institutions to:
1. Adjust for Ability Effects.
“We know for a fact that the incoming populations of different colleges are deeply unequal in prerequisite ability,” writes deBoer. “The most obvious and strongest reason for this is the very college admissions process itself…Differences in incoming ability effects are troubling, as they potentially represent serious confounds in our effort to sort out how much students are learning at different institutions. This problem is compounded by the fact that the biggest criterion for selecting a college, for the average student, is not its perceived quality but its geography, with most college students choosing to attend schools close to home.
“There are several ways to address these issues. First, score results can be normed against incoming SAT scores, an imperfect but powerful means to sort students into ranks of incoming ability. Scores on tests of higher education learning tend to be highly correlated with SAT and ACT results. We can quantitatively adjust scores on the latter to help control for ability effects. Second, test-retest systems, where students are tested in freshman and senior year, can help to determine how much growth has occurred, and can give us scores that are based not on where students end up but on how much their scores have improved during the course of their education. Sometimes, these efforts can take advantage of complex Value Added Models, though such procedures are controversial.”
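To make the SAT-norming idea concrete, here is an illustrative sketch (not from deBoer’s essay, and using hypothetical data and names such as `sat` and `outcome`) of one common adjustment: fit a simple least-squares line predicting outcome scores from incoming SAT scores, then treat each student’s residual as an ability-adjusted score.

```python
# Illustrative sketch: norm learning-outcome scores against incoming SAT
# scores via an ordinary-least-squares residual adjustment.
# All data below is hypothetical and for demonstration only.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def ability_adjusted(sat, outcome):
    """Residual of each outcome score after removing the SAT-predicted part.

    A positive residual suggests a student scored higher than incoming
    ability alone would predict; averaged across an institution, residuals
    offer a crude, ability-adjusted comparison.
    """
    slope, intercept = fit_line(sat, outcome)
    return [y - (slope * x + intercept) for x, y in zip(sat, outcome)]

# Hypothetical incoming SAT scores and senior-year assessment scores
# for five students at one institution.
sat = [1050, 1150, 1250, 1350, 1450]
outcome = [58, 66, 70, 79, 82]
residuals = ability_adjusted(sat, outcome)
```

Note the limitation deBoer flags: this controls only for what the SAT measures, so it inherits the SAT’s own imperfections as an ability proxy.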
2. Understand the Testing Industry is Big Business.
“Whether assessments should be developed ‘in-house’ or should be provided by testing corporations and nonprofits is one of the perpetual controversies in the assessment literature,” emphasizes deBoer. “There are clear advantages to developing assessments internally. For one, internally-developed assessments can better adapt to the kinds of institution-specific complexity that I discussed previously. Internally-developed assessments also can better involve faculty, helping them to feel like stakeholders in the process, and in doing so, easing tensions that often result from assessment efforts. Internally-developed assessments also have the advantage of keeping funding within the university community, often resulting in money for graduate assistants and other staff.
But there are major hurdles to developing assessments internally. They represent a significant investment of time, manpower, energy, and money. Also, in many cases, state administrators and accreditors will likely insist on the use of standardized instruments developed externally.
What everyone involved in the assessment process must understand is that the testing industry is just that, an industry, made up of institutions that are primarily motivated by the drive for profits. Those involved in assessment must bear in mind that, when organizations attempt to sell them tests, they are receiving a marketing pitch like any other. Skepticism of the claims of the institutions that develop tests is perfectly warranted.”
[Read deBoer’s full essay with more thoughts on this subject here]