
Why we must teach AI literacy in higher education


How can higher education institutions harness the potential of generative AI tools while mitigating risks?

Each year, we share our 10 most-read stories. Not surprisingly, many of this year’s Top 10 focused on generative AI, adult learners, and higher-ed trends. This year’s eighth most-read story examines the importance of teaching AI literacy in higher education.

As the dean of the College of University Libraries and Learning Sciences, I have observed an intriguing trend: since the launch of ChatGPT, an increasing number of students have come to my libraries with citations for articles that do not exist. This phenomenon highlights the widespread use of AI writing assistants, which, despite their impressive capabilities, come with inherent flaws.

This reliance on AI tools underscores the urgent need for AI literacy in higher education. Without clear guidelines and a shared understanding of these tools, students and faculty must navigate the ethical complexities of AI use largely unaided. As generative AI tools gain popularity, the question arises: How can higher education institutions harness their potential while mitigating risks?

An EDUCAUSE survey found that only 34 percent of respondents reported that their institution has implemented, or is in the process of implementing, policies to guide generative AI use. This lack of guidance leads to inconsistent AI adoption, leaving institutions vulnerable to security and ethical risks. Additionally, many institutions lack a dedicated staff member to oversee generative AI adoption and usage, exacerbating the challenges.

Both faculty and students face the ethical challenges posed by AI writing assistants in higher education. Faculty worry about the integrity of student work, while students must decide whether to use AI tools in their assignments. In the absence of clear guidance from instructors on generative AI tool usage, students are left to navigate these ethical dilemmas on their own. A recent BestColleges survey found that although 43 percent of college students have used ChatGPT or similar AI applications, 57 percent have no plans to use, or to continue using, AI tools for academic purposes. This hesitancy could be attributed to concerns about academic integrity or the stigma associated with AI-supported learning.

Relying solely on AI writing detection software to identify AI-generated content might initially seem like a solution. However, the rapid advancement of AI technology complicates detection, and overreliance on detection software risks fostering a climate of suspicion between students and faculty. Higher education institutions should instead focus on promoting AI literacy education, encouraging responsible AI usage, and equipping students and faculty to make well-informed decisions about integrating AI tools into academic work. By integrating AI literacy into faculty development and student education, institutions can cultivate a culture of responsible and ethical AI use.

Recognizing the potential benefits of AI tools is equally important. These applications can enhance efficiency and accessibility for all students by offering personalized learning experiences and instant feedback. Generative AI tools can help students refine their writing skills and deepen their understanding of the subject matter while making the writing process more enjoyable. For students with learning disabilities such as dyslexia or ADHD, AI tools can be invaluable in helping them express their ideas and overcome traditional writing barriers.

While AI tools offer numerous benefits, it is essential to address the potential drawbacks of increased AI usage in higher education. Students who lean too heavily on AI tools risk becoming overly dependent on them, with a corresponding decline in critical thinking and writing skills. Additionally, AI tools are often trained on data sets that contain inherent biases, potentially spreading misinformation or reinforcing existing stereotypes. Higher education institutions should raise awareness of these biases and emphasize the importance of fact-checking and analyzing AI-generated content for potential inaccuracies. Finally, integrating AI tools into educational processes raises concerns about student privacy and data security. Protecting sensitive student information and addressing potential vulnerabilities in AI systems is critical, and institutions should work alongside technology providers to implement robust privacy policies and data protection measures.

Addressing the digital divide is critical. This divide often disproportionately impacts students from low-income families, rural areas, and underrepresented groups, putting them at a disadvantage compared to their more privileged peers. Students without access to AI tools may struggle to complete assignments as effectively, impacting their academic standing, scholarship opportunities, and future career prospects. Faculty members without access to cutting-edge AI tools may struggle to keep up with developments in their field and may be less prepared to guide students in using AI responsibly. Higher education institutions should consider strategies such as providing free access to premium AI tools and digital resources for all students, investing in infrastructure upgrades to improve internet connectivity, and offering professional development opportunities for faculty.

A multi-pronged approach involving faculty, students, and institutional guidance is essential to promote AI literacy in higher education. Faculty should receive guidance on incorporating AI tools in their curriculum and addressing ethical concerns. Higher education institutions should also implement AI literacy programs for students, emphasizing the importance of understanding AI writing assistants’ capabilities and limitations. These programs can include for-credit courses, workshops, online resources, and guest lectures, providing students with the necessary knowledge to make informed decisions about using AI tools in their academic pursuits.

To foster a robust support system for AI literacy among faculty and students across various academic disciplines, interdisciplinary committees should be formed, comprising representatives from diverse fields, information technology experts, and students. This collaborative approach ensures that all stakeholders contribute to the development of effective and inclusive strategies that address the ethical concerns surrounding generative AI tools. Alongside nurturing AI literacy, higher education institutions ought to invest in research that explores the impact of AI tools on teaching and learning. Such research will yield valuable insights into the effectiveness of these tools, informing best practices for their seamless integration into the educational landscape.

Neglecting AI literacy in higher education can have far-reaching consequences, extending beyond academia and into the workforce. Future employers will expect graduates to utilize AI tools intelligently and responsibly, making AI literacy an essential skill for the workforce. Unchecked use of generative AI tools may produce a generation of graduates lacking vital critical thinking and writing skills, leaving them ill-equipped for professional environments that demand adaptability and innovation.

Additionally, the digital divide could widen, with privileged students who have access to advanced AI tools gaining an unfair advantage over their less privileged counterparts. By providing equal access to AI resources, institutions can help level the playing field and promote equity in education. Overlooking AI literacy may also perpetuate biases present in AI-generated content, further marginalizing underrepresented groups. Interdisciplinary collaboration can play a crucial role in addressing these biases by fostering diverse perspectives and input in the development of AI tools, ensuring a more inclusive and equitable future.

Addressing the urgent need for AI literacy in higher education is vital for ensuring the responsible and equitable integration of AI tools in academia. By prioritizing AI literacy, fostering interdisciplinary collaboration, and promoting ethical guidelines, higher education institutions can empower their academic communities to harness the potential of generative AI tools while safeguarding against the pitfalls of uninformed use. It is crucial to act now to secure the future of higher education and maintain a culture of responsibility, integrity, and innovation in an increasingly AI-driven world.

Related:
Are educators using ChatGPT to write lesson plans?
Will advances in AI force a push to oral exams?
For more news on AI in higher ed, visit eCN’s Teaching & Learning page

