With the unexpected onset of a global pandemic and a hurried transition to online teaching, we are caught in the crossfire of arguments for and against online instruction.
Although online courses extend access to higher education to students from diverse demographic backgrounds, the majority of colleges are opting for an in-person fall semester. In the following sections, we examine why, given the limitations of existing online learning technology, effective online instruction demands the collaborative work of multidisciplinary teams that include instructors alongside other methodological experts.
Early in the spring semester, shortly before the onset of the pandemic, I attended a professional development workshop conducted for university instructors by educational technologists. During this workshop, a senior faculty member raised a question: “How can online teaching ever be as effective as face-to-face classes? How do I understand whether my students get hold of things?”
We could dismiss this question as resistance from a senior faculty member with little experience in online education. It is true that most faculty underutilize the pedagogical capabilities of online learning platforms, using them mostly to distribute study materials and assign grades. However, not all instructors who seem resistant to online teaching are Luddites. We should not shy away from addressing their concerns.
Can instructors really tell whether their students “get hold” of things merely by interacting with them in a face-to-face environment? Quite possibly, because of the tacit feedback instructors receive in a classroom: decoding learners’ facial expressions enables them to assess learners’ emotional and motivational responses. The spadework of teaching includes assessing learners’ emotional and motivational engagement and identifying gaps in their knowledge. It is not unusual for an experienced teacher to judge from visual cues in a face-to-face setting whether students are anxious, lost, bored, frustrated, or confused.
A frequent complaint from faculty, as noted above, is that online instruction is not interactive and therefore provides less feedback from students than face-to-face classrooms do. Often, the feedback available from online platforms is limited to learning outcomes measured by students’ grades. In an online learning environment, the feedback of face-to-face interaction can be substituted, to a certain extent, by monitoring, collecting, analyzing, and deciphering more fine-grained data on students’ emotions, attitudes, and learning behaviors. Such data can be collected from learners’ self-reports via quantitative or qualitative methods, and/or through content analysis and natural language processing of written text.
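To make the idea of content analysis concrete, here is a minimal, purely illustrative sketch in Python. The keyword lexicon and the post texts are hypothetical; real content analysis would rely on validated instruments or trained NLP models rather than hand-picked cue words.

```python
# Illustrative sketch only: coarse keyword-based affect tagging of
# student forum posts. The cue lexicons below are hypothetical;
# validated instruments or trained NLP models would be used in practice.

CONFUSION_CUES = {"confused", "lost", "unclear", "stuck"}
FRUSTRATION_CUES = {"frustrated", "annoying", "giving up"}

def tag_post(text):
    """Return the set of coarse affect labels detected in a post."""
    lowered = text.lower()
    labels = set()
    if any(cue in lowered for cue in CONFUSION_CUES):
        labels.add("confusion")
    if any(cue in lowered for cue in FRUSTRATION_CUES):
        labels.add("frustration")
    return labels

print(tag_post("I'm totally lost on problem 3; the notation is unclear."))
# -> {'confusion'}
```

Even a toy tagger like this shows why the work is nontrivial: deciding which cues map to which affective states is itself a methodological question, not a purely technical one.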
Even if we set aside the response bias, self-report bias, manual labor, and complexity involved in many of these methods for understanding learners’ engagement, they may not provide real-time access to learners’ emotional states and behaviors. In online learning, the closest substitute for the instructor’s in-class perception may be intelligent tutors supported by artificial intelligence (although, as of now, such tools may not distinguish subtle differences among diverse learners). Concerns about cost-effectiveness, the detection accuracy of the implemented algorithms, and ethical issues of bias, privacy, and surveillance delay the widespread adoption of these tools.
Today’s commonly used online learning tools can monitor and record data on learners’ emotions and behaviors. However, collecting and making meaning of these data requires skills beyond digital literacy or fluency with online tools. For example, present-day learning management systems (LMSs) log large volumes of metadata on student activities, but their dashboards typically have built-in monitoring features that report only a limited slice of these data. The rest of the logged data remains either inaccessible or incomprehensible to instructors.
Usually, the information presented in LMS dashboards consists of simple frequency metrics of students’ interactions, such as login trends, postings in discussion threads, downloads of study materials, and scores achieved in assessments. These frequency measures do not sufficiently capture student engagement and do not correlate directly with learning. For instance, a higher number of logins does not guarantee that a student is more engaged in learning.
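A small, hypothetical example illustrates the point. In the fabricated event log below, the student with more logins concentrated all activity in a single burst, while the student with fewer logins returned regularly across weeks; raw login counts invert the picture that even a crude regularity measure reveals.

```python
from collections import Counter
from datetime import datetime

# Hypothetical login events: (student_id, timestamp). Fabricated data
# for illustration only, not drawn from any real LMS.
events = [
    ("s1", datetime(2020, 9, 1, 8, 0)), ("s1", datetime(2020, 9, 1, 8, 5)),
    ("s1", datetime(2020, 9, 1, 8, 10)), ("s1", datetime(2020, 9, 1, 8, 15)),
    ("s2", datetime(2020, 9, 1, 9, 0)), ("s2", datetime(2020, 9, 8, 9, 0)),
    ("s2", datetime(2020, 9, 15, 9, 0)),
]

# Raw login counts: s1 "wins" despite all activity being one burst.
logins = Counter(sid for sid, _ in events)

def active_days(sid):
    """Distinct days with any activity: a crude regularity proxy."""
    return len({ts.date() for s, ts in events if s == sid})

print(logins["s1"], active_days("s1"))  # 4 logins, 1 active day
print(logins["s2"], active_days("s2"))  # 3 logins, 3 active days
```

Even this toy contrast suggests that the raw counts a dashboard reports and a pedagogically meaningful engagement measure can point in opposite directions.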
Designing and extracting pedagogically meaningful variables from the logged data, variables that give instructors insight into learner behaviors and attitudes, demands interdisciplinary skills spanning cognitive psychology, computer programming, data mining, and machine learning.
Three reasons why the complete onus of this effort cannot fall on the faculty are identified below.
1. Faculty are overloaded with responsibilities during a regular semester, including the pressure to deliver content, cover the syllabus, and meet personal research and career goals and milestones.
2. The faculty may have limited technical know-how to collect and analyze the data available from online learning platforms.
3. Even where faculty are willing to do the cumbersome work and possess the technical skills, manual efforts may not suffice in large classes.
Therefore, for effective online instruction, it is important that institutions encourage interdisciplinary methodological experts (including educational psychologists, technologists, instructional designers, and learning analysts) to work in tandem with instructors in retrieving, analyzing, and deciphering learners’ engagement data from online platforms.