How higher ed can put the right guardrails in place to leverage AI’s power

It's becoming clear that AI tools can help meet higher ed's challenges around learning, governance, administration, and operational processes

Generative AI has quickly moved from an afterthought to top of mind for higher-ed leaders. As the technology has evolved, they are beginning to see the potential for AI to positively impact everything from admissions to marketing to teaching and learning. It comes at a critical moment for higher education as a number of forces put pressure on institutions of all types to adapt to a changing landscape.

Like any new technology, there are risks, and institutions must be thoughtful in considering the ethical application of AI. We’re in the first quarter of a four-quarter game. To provide the appropriate guidance and unlock AI’s full potential, institutions must create distinct policies that ensure its ethical use, setting clear guidelines and centering transparency around AI deployment on campus.

An AI policy framework is one way to do that. It’s a comprehensive guide for higher ed leaders as they develop their own AI protocols that lean into each institution’s distinct culture. By using a framework informed by that culture to both shape their overarching approach to AI and fine-tune it according to their unique needs, institutions will be best positioned to balance the risks of AI with its growing rewards.

AI’s advantages for higher ed

When ChatGPT launched in November 2022, the conversation quickly swung to AI-generated plagiarism. In the moment, determining how best to apply AI's potential wasn't among leaders' top priorities. But as the technology continues to evolve, it's becoming clear how AI-powered tools can help institutions meet challenges around learning, governance, administration, and operational processes.

On the operations side of higher education, new generative AI tools are rolling out to make marketing campaigns more personalized, support financial aid management and streamline the work of admissions offices. A recent survey found that 80 percent of colleges will use AI in admissions by 2024, most often to review transcripts and recommendation letters.

In the classroom, faculty members are using generative AI tools to design more creative and engaging courses and to develop assessments, freeing up time to focus on supporting and engaging students.

And, as faculty and staff wade into AI’s uses, students are increasingly demanding it — not just for an English paper, but to ensure they are prepared for the workforce. About 75 percent of college students said their institutions should be preparing them for the use of AI in the workplace, according to a recent survey. They seek both ethical and practical training.

A guide to frame AI policy

But to take advantage of the best uses of AI in higher ed, institutions need the appropriate safeguards. A policy framework can guide a thoughtful deployment.

Guiding principles for AI use in higher ed include ensuring that AI is fair, reliable, accountable, transparent, and secure. Just as important, institutions should commit to keeping humans in control of however AI is used and to aligning every implementation with their values.

For higher education institutions working to build their own AI policies, international standards, including the NIST AI Risk Management Framework, the EU AI Act, and the OECD AI Principles, can serve as a starting point. An AI policy framework provides a guide to the questions and deliberations that must take place to follow through on those principles.

The work should begin with input about AI's potential uses from a broad cross-section of faculty members, staff, and students across campus. Stakeholders who will likely use AI, benefit from it, or be affected by it should participate in the process, along with those who will need to manage its risks.

From there, leaders begin defining the institution's position on AI: the general attitude on campus about its use, how it's already being used, and what risks exist. The analysis should include a deep dive into existing campus mandates that may already touch on AI, such as ethics codes, privacy policies, and academic integrity rules.

Any finalized policies must define the full scope of an institution's AI program, considering its application not just in teaching, learning, and research but also in governance and administration. These policies should cover everything from potential intellectual property risks to consequences for non-compliance, along with guidelines for periodic revisions as the AI landscape evolves.

Of course, a policy does no good sitting on a shelf. Once it's complete, university leaders must be intentional about ensuring that the policy is not only documented but that faculty members, staff, and students have the tools they need to operate within its parameters. That means establishing a communications plan and a training program and regularly monitoring adoption.

None of this work will be easy as higher ed sorts through myriad opinions about AI and its potential uses. But like it or not, AI is here. To be successful, institutions will have to lean on their unique institutional cultures to create policies that fit their needs and environments. This is a pivotal moment for university leaders to not only bring those viewpoints together but also develop AI policies that ensure, however they deploy the technology, they'll be moving forward, guided by best practices.


eSchool Media Contributors