- As AI becomes more sophisticated, educators must ensure academic integrity
- Oral exams are one way to let students prove their own knowledge and mastery
With ChatGPT, Google Bard, and other AI tools now available to the average internet user, conversations are common in academia about how faculty and students will accommodate these technologies within the academic milieu.
Plagiarism detection software touts the ability to detect machine-derived prose, while other academics warn that AI detectors have high error rates. In my own tests with AI detectors, student-generated work was flagged as up to 28 percent machine generated. At the same time, some AI-generated work was returned as 0 percent AI generated. In one case, a document submitted through the learning management system (LMS) was flagged as both 0 percent and 100 percent machine generated.
Some experts warn that the next generation of AI tools will be able to learn from previous works and generate materials in styles indistinguishable from a person's earlier writing. All that would be necessary is to give the AI engine a few samples of the person's previous work. At that point, it would seem nearly impossible for AI detectors to work effectively.
The pandemic's shift to online learning highlighted the struggles of students, faculty, and IT staff with lockdown browsers and other attempts to prevent cheating as exams migrated from paper to online formats. The use of proctoring software has raised privacy and equity concerns as well, and has added difficulties for those using assistive technologies. An NPR article reported that colleges across the country identified significant increases in cheating; in one highlighted case, a university reported a 79 percent increase in cheating on exams.