As colleges and universities have turned to online proctoring services to ensure the integrity of exams administered remotely during the pandemic, some students and faculty have raised questions about the use of technology for this purpose. Now, in a move that could shake up the online proctoring industry, ProctorU—a leading provider of these services—has announced that it will no longer offer a fully technology-driven option for institutions.
Effective immediately, ProctorU will only provide online proctoring supported by a trained company employee. The move comes as a growing chorus of students and faculty has expressed concerns about the efficacy of a software-only approach to monitoring exams.
Studies demonstrate that AI-based proctoring is less than fully accurate, said Scott McFarland, CEO of ProctorU.
“We believe that only a human can best determine whether test-taker behavior is suspicious or violates test rules,” McFarland said. He added: “Depending exclusively on AI and outside review can lead to mistakes or incorrect conclusions.”
The company is changing its stance after a review of its own data and in consultation with its testing partners.
Using AI-based proctoring was intended to save instructors time. The idea was that instructors could go back and review video footage from testing sessions at key points when the software flagged potentially suspicious behavior. However, ProctorU’s analysis revealed that only about 11 percent of test sessions tagged for suspicious activity were being reviewed by faculty or administrators. An unrelated audit of proctoring at the University of Iowa showed similar findings, with faculty reviewing just 14 percent of test sessions flagged for possible rules violations.
ProctorU’s analysis also revealed that AI-only systems flagged many events that trained humans would easily recognize as meaningless. For example, the software might flag a test session inappropriately if a student repeatedly rubbed his or her eyes or if there was an unusual background noise, like a dog barking. Human proctors, by contrast, can readily recognize such actions and sounds as innocuous.
What’s more, ProctorU determined that AI-based systems unreasonably increased the workload for instructors, when they were meant to have the opposite effect. Reviewing the recordings of just a single, 60-minute exam for a class of 150 students can take nine hours—time that neither instructors nor their teaching assistants can afford to invest.
ProctorU is still letting universities use its software to flag possibly suspicious activity, but the company is no longer leaving it up to customers to review those instances. Instead, a ProctorU employee will review all cases where a testing session is flagged by the software. Alternatively, universities can opt for a live online proctor.
“Faculty want to teach, not be hall monitors. No one becomes a teacher to sit around watching recorded test sessions in an effort to catch cheaters,” McFarland said. “The critical point here is that people can tell when someone is trying to be dishonest, but computers aren’t so good at that.”
Independent studies continue to show that remote human proctoring is an effective way to protect academic integrity by detecting and discouraging misconduct, the company said. However, exam proctoring should be just one part of a larger effort to create a culture of academic integrity. This effort should also include the establishment of an honor code, student supports, well-designed assessments, smaller classes, strong instructor/student relationships, and clearly stated expectations.
Earlier this year, Dartmouth’s Geisel School of Medicine was embroiled in controversy when it accused 17 medical students of cheating based on data from its learning management system, Canvas. The allegations prompted protests from critics who say relying on technology alone is an unreliable way to ferret out cheating, the New York Times reports.