Key points:
- Generative AI has great potential in higher ed, but many faculty are scared to use it
- Clear and repeated policies can help faculty, students, and staff learn to harness AI’s power for good
By now, almost everyone is familiar with generative AI. Be it ChatGPT, DALL-E, or Google Bard, AI is all the rage. But how is generative AI impacting higher education, and how are faculty, staff, and students balancing AI’s potential with ethical considerations?
A recent EDUCAUSE expert panel explored generative AI themes that panelists found critical for institutions to invest in and pay attention to. During an EDUCAUSE 2023 session, members of that panel shared their thoughts on several of those themes.
Panelists included Kathe Pelletier, Director, Teaching and Learning Program, EDUCAUSE; Beth Ritter-Guth, Associate Dean, Online Learning & Educational Technology, Northampton Community College; Scott Smith, Instructional Technology, Northern Michigan University; and Brian Basgen, CIO, Emerson College.
“This is all brand-new territory,” said Ritter-Guth. “Many of us know how AI works, but how policies are generated—that’s all new territory for us. At the community college level, we’re preparing the workforce and workers for the AI world. And that world changes every single second.”
Main themes include:
- Using AI critically: Digital literacy for students, faculty, and staff
- Preparing students for the workforce of the future
- Using AI as a copilot: Use cases and best practices
- Ethics and equity: Privacy concerns and parity of access
- Academic integrity in an AI world: What does “originality” mean?
- Augmenting learning with generative AI: Personalization
- Enterprise approaches to AI
- Evaluating AI tools and/or AI functions in other products
“We’re very much invested in learning about AI on a daily basis. We’re all enmeshed in that world, reading, learning, trying to catch up because it’s a deluge,” Basgen said.
Using AI as a copilot
“This is a copilot-like world, where you’re seeing generative AI being used to assist your work, not necessarily to replace or supplant it,” Basgen said.
“The panic I see when people talk about copilot is from the English department. But really, we’ve been using AI in Word for years. We all benefit from spellchecker. We have to think differently as faculty about how we teach and what we’re teaching, and we can still teach online,” Ritter-Guth said. “The best thing to do, no matter what kind of class you have, is to have your students load a writing sample into your LMS. That’s your baseline. There’s nothing that will tell you if [an assignment] is authentically human or if it’s AI. Nothing can do that 100 percent. We have to think differently as faculty, as workers.”
In fact, using AI as a copilot may require a change in instructional approaches. Ritter-Guth now asks students to use ChatGPT to write their first assignment. Then, students must trace the information behind the AI’s responses.
“It required students to look and see what info is out there. Where does info come from? Where do biases come from? What about copyright?” she said.
Enterprise approaches to AI
Enterprise approaches to AI were raised during the expert panel, but the topic didn’t initially come up in a significant way; it has since become more prominent, Pelletier noted.
“It’s been interesting to watch CIOs and other folks responsible for enterprise IT at your institutions really start digging in, engaging with questions like: How are we using AI on our campuses? Are we an AI-first university? Do we have strict guidelines? We boosted this topic in priority knowing the conversations have emerged much more loudly since the panel convened,” she said.
“AI as it relates to data is a really big deal,” Basgen noted. “There are very cool things you can do with your data in large language models. If you think about unstructured data at your institution, anything from your social media interactions to surveys with open feedback fields to your institutional data, whatever it is, having a large language model go through that can be incredibly powerful.”
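To make Basgen’s point concrete, here is a minimal sketch, in Python, of what “having a large language model go through” open-ended survey feedback might look like. It assumes the OpenAI Python SDK, an API key in the environment, and a made-up list of survey responses standing in for institutional data; none of these specifics come from the panel itself.

```python
# A minimal sketch of theme extraction from unstructured survey feedback.
# Assumptions: the OpenAI Python SDK (`pip install openai`), an
# OPENAI_API_KEY environment variable, and hypothetical survey data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for unstructured institutional data: open feedback fields.
responses = [
    "Advising wait times are too long during registration week.",
    "I love the new library hours, but the printers are always broken.",
    "More evening sections would help students who work full time.",
]

prompt = (
    "You are analyzing student survey feedback for a college.\n"
    "Identify the main themes in the responses below and summarize "
    "each theme in one sentence.\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do here
    messages=[{"role": "user", "content": prompt}],
)

print(completion.choices[0].message.content)
```

In practice, the responses would come from a survey export or institutional database rather than a hard-coded list, and, per the panel’s ethics and equity theme, a privacy review of that data would come first.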
“From a faculty perspective, everybody wants the catch-all policy” when it comes to using AI in instruction and how students use it, Ritter-Guth noted. “What do you do now when students don’t work with integrity? How are you handling it now? Policies are only good if there’s some sort of bite at the end when something goes wrong. Who’s monitoring your policy and what are the ramifications of breaking that policy? Think through what’s going to happen if a student violates that policy. What happens if a staff member violates the policy you make? We want to be flexible and able to adapt. On the faculty side, we want to empower them to use AI if they want to but also to manage their classrooms. I don’t know that we’re able to make policies yet because I don’t know that we’re able to catch people who cheat.”
“I pushed strongly for an AI workgroup. We have representation from faculty, staff, and students as well. We will be giving recommendations for policy and best practices, and strategizing between departments to share information. We were siloed, and people were putting their heads in the sand, but this group really helps bring it to the forefront,” Smith said.
Using AI critically
“Using AI myself has helped me help my students,” Ritter-Guth said. “If you’re scared, go and play with AI tools like ChatGPT or Google Bard. The truth is that we’ve never been able to fully catch cheating. Cheating isn’t new; it’s just different in how we approach it.”
“One thing I think is super important is having AI literacy programs not just for students, but for faculty and staff as well,” Smith said. “Approach faculty first. A lot of that is obvious: fear of academic dishonesty and how do we catch it? There are other challenges as well, like faulty and biased info. It’s important for faculty to know that there are lots of opportunities we can leverage, too, like idea generation, brainstorming, creating boilerplate syllabi, and course generation. If faculty know this isn’t just a one-way street, where it’s something we have to worry about, but is something that can help them, too, that’s super important.”
It’s important not to limit students’ chances to learn how to work with AI and develop competitive workforce skills.
“With students, there’s the obvious aspect of academic dishonesty. They need to learn how they can use AI, too. One of the things I don’t think you can separate from the students is the workforce-of-the-future aspect of this: we want well-rounded humans, critical thinkers, great citizens. Students pay a lot of money to have a career after college. The whole idea of ‘What am I going to get career-wise from this?’ is important,” Smith noted.
“The best way to develop literacy on AI is to use it. Reading about it is helpful, but using it is better. Don’t use it like a search engine; it isn’t one,” Basgen said. “Use AI tools; try it at least once a day. And when you try it, throw a real problem at it, something you’re trying to solve or work through. Do not take the first response. The first response is, at best, a rough draft. You have to keep interacting with it. See where it takes you and what you can do troubleshooting a problem by giving it more context. That will make you better at prompting it, but it also will help you understand how it’s working, what it’s good at, and what its limitations are.”
“It’s been about 11 months since ChatGPT came out, and we’re already having conferences about it. As crazy as the capabilities of generative AI are right now, I think we’re in demo mode. Think about the future, when this is coupled with VR and quantum computing: there are so many great possibilities, but also a lot of challenges and dangers. The next 3-5 years are going to be very exciting,” Smith said.
“In academia, the liberal arts have for some time been used as an example of why academia is problematic and not in touch with the modern world, and of how STEM is very important,” Basgen said. “But as it turns out, philosophy is going to be really important with AI regarding ethics. Over the next few years, as the technology develops, lots of considerations [will arise] around the use, effects, and outcomes of AI. That’s something you really need to consider when you’re adopting AI on your campus; it’s important to be thoughtful about what the possible outcomes are.”
“AI is the new world, but it’s the same world: we just have to help our students and our colleagues all be agents of good. This technology is transformative. It can be dangerous, but we’re all here on the side of good,” Ritter-Guth said.