ePortfolio

The dark side of higher ed’s ePortfolios: what really happened


The evolution of the ePortfolio from inspiring personal tool to EAP, and whether it will survive.

Pre-dating the widespread use of blogs and personal websites, the 1990s-era ePortfolio inspired storytelling about lifelong learning and everything that entails: formal schooling, personal reflection, career planning, presentations of evidence to assist in life transitions, and snapshots of abilities and character. This generalist versatility, the ability to do many things reasonably well, made the ePortfolio a likely candidate for long-term survival.

But this did not happen.

Going to the Dark Side

The ePortfolio became a tool of higher education’s accountability movement as accreditation agencies began to focus on institutional quality improvement. These previously innocuous agencies were quickly pressed into service as insurers of quality assurance at best and, at worst, of consumer protection. What used to be routine ten-year spans between accreditation reaffirmations soon became five, and some institutions must now run the gauntlet annually to address deficits that invariably trace back to weaknesses in assessment planning and outcomes tracking.

The EAP Inauspiciously Emerges

So, rather than evolving into an all-purpose species, the ePortfolio became a specialized one. In only a few years, it morphed into a convenient drop box for collecting high-value assignments.

It was then adapted further to provide an embedded assessment interface, including scoring analysis and output, aimed at institutional assessment and accreditation. It had become something new: the ePortfolio-enabled Assessment Platform (the EAP).

The early ePortfolio was an obvious choice for this evolution. It was already viewed as a personal, web-based publication medium for learners to document their own change over time, which made it much easier to sell to stakeholders as a good thing. Many implementers, working hard and in good faith, believed the EAP and the ePortfolio would remain close cousins and that the dream of demonstrating student learning would finally be realized.

The actual implementation of the EAP had some negative effects.


The New ePortfolio’s Negative Effects

First, institutional leadership was generally unwilling to take ownership of the process or the tool. Assessment and accreditation were not popular topics on campus, and the massive amount of data that had to be collected and stored for accreditation made the EAP expensive to handle locally, so responsibility was pushed downstream to academic departments and outward to commercial providers.

In an era of scrutiny over student debt, the per-student cost of the EAP was not treated as an essential component of higher education. Instead, students were sent to the campus bookstore to be hit with the real price, compounded by a 35-to-45 percent bookstore markup. They left more than $100 poorer with only a card and a code in their hands. It did not take students long to figure out they had just paid to get their own program of study accredited. “How is this my problem?” they wondered. Some saw this creature for what it was: an extension of the Office of Institutional Effectiveness/Research.

It most certainly was not the student’s tool.

There Was a Silver Lining for the ePortfolio

Now, let’s just accept that the EAP’s evolution rested on shaky moral and fiscal grounds and faced sustainability problems. At least it should have delivered on the public relations and accreditation fronts. Given the environment, its evolution should have been shaped by the rigors of academia and governed by the protocols of inquiry, and the EAP should have proved irrefutably that students were learning important skills.

However, in the rush to show end-of-program compliance and to avoid confrontations with faculty, schools lost sight of the object of the exercise: to show instructional effectiveness and impact over time.

Here’s the no-win scenario that has emerged and, unfortunately, is still in existence at many institutions:

Consider a 100-level course. Capable and motivated students exist in every section of the course. “A” grades are possible and frequently attained. This norm-referenced grading distribution is visually represented as a bell curve.

Now consider the same 100-level course, but this time student work is mapped to an absolute mastery scale ranging from novice (1.0) to exemplary (5.0), with 4.0 considered “competent” (that is, graduation-ready).

The resulting picture is drastically different. Top scores are simply not attainable for freshmen; our freshman “A” student from above now scores a shocking 1.5.
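To make the contrast concrete, here is a minimal sketch in Python, using entirely hypothetical scores, percentile cutoffs, and rubric levels (none of them drawn from any real EAP or grading policy), of how the same piece of freshman work can earn an “A” under norm-referenced grading yet land around 1.5 on an absolute five-point mastery rubric.

```python
# Illustrative sketch only: the scores, percentile cutoffs, and rubric levels
# below are hypothetical, not drawn from any real EAP or grading policy.

def letter_grade_on_curve(score, section_scores):
    """Norm-referenced: grade the work relative to classmates in the same section."""
    rank = sorted(section_scores, reverse=True).index(score)
    percentile = 1 - rank / len(section_scores)
    if percentile >= 0.90:
        return "A"
    if percentile >= 0.75:
        return "B"
    if percentile >= 0.50:
        return "C"
    return "D"

def mastery_level(score):
    """Criterion-referenced: rate the same work against absolute end-of-program
    standards on a novice (1.0) to exemplary (5.0) scale, where 4.0 is graduation-ready."""
    cutoffs = [(90, 5.0), (75, 4.0), (55, 3.0), (35, 2.0), (25, 1.5)]
    for cutoff, level in cutoffs:
        if score >= cutoff:
            return level
    return 1.0

# One 100-level section, scored on a program-wide 0-100 skill scale.
section_scores = [12, 15, 18, 20, 22, 24, 25, 26, 27, 28]
best_freshman = 28  # strongest student in the section

print(letter_grade_on_curve(best_freshman, section_scores))  # "A"  (top of the section)
print(mastery_level(best_freshman))                          # 1.5  (far from graduation-ready)
```

Neither number is “wrong”; they answer different questions. The curve asks how the student compares with section peers, while the mastery scale asks how far the work sits from a fixed, graduation-level benchmark that is tracked across the entire program.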

The EAP could have established meaningful benchmarks across the learner’s academic career and, at the same time, taught reflective practice, producing more self-aware, independent learners. Institutions could have identified at-risk students early and in real time, and it’s entirely possible that the now-tense “retention” conversation would have become very different.

But a mean skills score of 1.5 out of 5 would be perceived by many as a failing grade, and for that reason it would not fly with students, their parents, or many of their instructors. It might discourage students, and it might affect the status of professors. Rather than invest the time to educate both students and faculty about the value of a mastery scale, many institutions simply avoided the conversation and depended on uber-subjective grades for tracking students. It was what people were accustomed to and, quite frankly, it was easier.

Some commercial systems are more than capable of delivering progress-over-time design and reporting, but they, too, have been pressured to support the course-grading status quo.

The only explanation for the lost opportunity is the complex culture and traditions of higher education surrounding the purpose and beneficiaries of assessment. In any event, using the EAP merely to grade student work landed institutions right back where they started.

Survival for an EAP Is Tough

Any given commercial EAP has a hard time surviving on a campus for very long.

The mean time to collapse of an EAP has been about three to five years. The people who cheerlead for it and keep it alive eventually move on, and with little support from the highest levels of campus leadership, there is no intervention. Newcomers decide the problem must have been the provider, form a new committee, buy a new EAP, and repeat the cycle. It is not uncommon for commercial providers to find they are the third system on campus in a decade. Accreditation agencies, meanwhile, are left no closer to getting back to their real job of helping institutions with quality improvement.

Will the EAP species become extinct? Not likely, so long as accreditors accept outcomes data that may be neither valid nor reliable.

Mere evidence that outcomes data are being collected may remain good enough. Put another way, for the EAP to do good work with what it has become, higher education also has to evolve.

Can it? Will it? If it cannot, what will replace the doomed EAP? LMSs are trying, but they lack the tools to conduct direct measures of outcomes. The learner may be full, but are they fulfilled? We may never know how effective higher education is, or can be. The truth is always scary and uncertain. It’s also easy to avoid.
