Defining higher-ed policy for AI in teaching and learning

There is a critical need for institutions of higher education to adopt a framework to guide both AI policy and practice

Key points:

Approaches to the use of AI in higher ed are perhaps best described in terms of three groups: those who want to ban its use outright (often driven by concerns about loss of employment, fears of increased cheating, and erosion of students' ethical and critical thinking abilities); those who are content to wait (thereby putting their students at risk of not being employable); and those who are moving forward along a continuum from individual faculty, to departments, to the entire university.

Given the increasing need for students to be prepared for a data- and AI-driven world, and the tremendous potential for AI to transform higher ed from an archaic, "one-size-fits-all," place- and time-bound system into a personalized and agile knowledge enterprise that enables learning at scale, there is a growing need for institutional-level policies governing the development, implementation, and use of AI tools and platforms in teaching and learning.

Although a number of general framework documents already exist,[1],[2],[3] these are largely written from the perspective of independent technology development by third parties, followed by adoption by institutions of higher education, rather than from an integrated approach in which agency remains at the institutional level from the outset. While the former reflects the situation largely at play currently, the latter should become the norm, following well-defined levels and steps that are learner- and learning-centered, with enhanced access and attainment in mind.[4]

The role of higher education is not just to serve individual aspirations but to contribute to the public good, including by building socio-cultural understanding and enhancing the socio-economic mobility of learners, and thereby the economic upliftment of the communities in which they live and work. It is therefore important that any framework for the development and implementation of AI in higher education start with the basic considerations of ethics, responsibility, and equity:

  • Ethical AI relates to fundamental principles and values. It provides guidelines that align the use of AI with societal values such as fairness, transparency, privacy, and security. Floridi[5] described these principles as beneficence, non-maleficence, autonomy, justice, and explicability. Ethical AI is thus predictive and prescriptive in nature: it emphasizes the balance between risk and potential, and between values and principles, creating a need for technical and behavioral standards for the development and application of AI tools. These standards emphasize the tactical aspects of accountability, fairness, and explainability to ensure that the real-world implications of bias, discrimination, privacy violations, malicious disinformation, and harmful decision-making are addressed.
  • Responsible AI puts the philosophy and principles of ethical AI into action, ensuring that the technology developed, and the resulting tools, are robust, reliable, fair, and trustworthy through guaranteed levels of reliability and oversight.
  • Equitable AI builds on the foundations of ethics and responsibility to ensure that the true potential of AI to transform learning and the acquisition, sharing, and development of knowledge is available to all learners irrespective of position in society or station in life. It can be defined as learners having affordable access to technology and AI platforms/tools, the infrastructure needed to use them, adequate levels of training and expertise to assure adoption of tools to meet the specific mission of the institution, the resources to attain AI literacy in that context, and the agency to actively use the tools.

From a systems perspective, ethical AI provides the values, principles, and foundations; responsible AI ensures the use of tactics that meet those guidelines; and equitable AI ensures the implementation of strategy so that the benefits of AI accrue to all learners, both in gaining access to knowledge and in using it for socioeconomic mobility. Building on these three levels, and once the purpose of AI has been determined in the context of the specific type of institution and the nuances of the learner population to be served, a framework for higher-ed policy can be developed using four pillars: (1) governance, (2) ethics and accountability, (3) pedagogy, and (4) operations. This ordering places emphasis on the specific context of the institution through governance, and on the nuances of mission and the local context in which the tools will operate through pedagogy.

1. Governance: This is a foundational pillar that focuses on value alignment with institutional mission, goals, and societal values and norms. It establishes rules and policies and sets bounds of risk tolerance and standards of oversight. It sets the stage for ethical standards, pedagogical implementation and integration, and operational deployment through aspects such as:

  • Oversight: Clear delineation of the ultimate decision authority to override/modify AI-based processes and decisions to ensure consistent alignment (defined as the process of ensuring that the system operates as intended and is beneficial, rather than harmful, to users).
  • Data Governance: Setting standards for data collection, use, and storage, as well as providing clarity in emerging areas such as IP, copyright, and AI-based course curation, where discussions are just beginning but faculty, students, and staff need transparency and clear direction, even if transitional.
  • Transparency: In terms of what is implemented through AI, its rationale, and how student learning and records are affected.
  • Monitoring and Retraining: Establishing protocols and procedures for constant evaluation and improvement of AI systems and ensuring adequate levels of personnel capacity to maintain institutional control and agency.

2. Ethics and Accountability: Once governance structures are in place, a focus on ethics and accountability ensures that standards, policies, and procedures are upheld through aspects such as:

  • Fairness: Prioritizing accessibility and inclusivity of use while ensuring that the tools do not perpetuate, or create new, bias.
  • Transparency: Extends from the data on which the tool was trained (its origins, context, and history, and whether it is authentic or synthetic) and the identification and subsequent mitigation of bias, to the ability to explain decisions made through, or by, the tool, including the criteria and factors used. This ensures that users know when they are engaging with AI-enabled systems and understand both the systems' limitations and their own rights.
  • Traceability and Explainability: This focuses on the ability to trace a system's process from input to output and to eschew black-box processes. The institution must be able to explain not only how the system is trained and what domain knowledge was used, but also how a decision is made and the criteria and factors involved. It is critical that the institution be able to verify whether the system is responding as designed and whether bias or other non-designed behavior has been introduced post-deployment.
  • Responsibility and Accountability: Beyond responsibility in the regulatory context, it is important that the institution be responsible not just for deployment, but also for the data used for training, output, validation, and continued verification. It is crucial that AI tools not be anthropomorphized, and that there be clarity about the chain of responsibility, awareness of accountability, and methods of redress.
  • Privacy and Security: This extends not only to adequate protection of the data used in training and testing, but also to data generated through use. In addition to traditional levels of cybersecurity, new measures will be needed to address emerging threats such as machine-learning (ML) attacks and confabulation. Enhanced regulation and security will also be needed to address the collection and use of biometric data.
  • Robustness and Reliability: The tools must operate as intended over the full range of conditions, with minimization of unintended and unexpected harm, and must be resilient against attempts to manipulate analysis and output, as well as the foundational knowledge and datasets.

3. Pedagogy: This pillar relates to the central aspect of teaching and learning and must address not just the acquisition of knowledge, skills, and the abilities of critical thinking and reasoning as traditionally defined, but also the direct use of AI tools to strengthen and enhance student learning. While the emphasis must be on the curriculum, through a focus on innovative teaching methods, curated and personalized content and learning plans, curricular integration, and increased emphasis on ethical reasoning and critical thinking, focus must also be on aspects such as:

  • Accessibility and Affordability: Ensuring that AI tools/technologies are not only accessible to all learners, but also that these in turn enhance accessibility and affordability of higher education and its ability to enable greater socioeconomic mobility.
  • Assessment and Evaluation: Employing AI tools to enable more authentic assessment and real-time feedback at scale. Continuous assessment and evaluation of the outcomes of AI tool use, with feedback and improvement, are also essential, as is the encouragement of innovative teaching and research methodologies, both to understand efficacy and to assess student learning behavior, outcomes, and impact. It is critical that the use of AI tools not lead to the depersonalization of learning through excessive automation.
  • Engagement and Interaction: Utilizing AI, including in conjunction with AR/VR/XR, to enhance interaction and “learning by doing,” in addition to increasing engagement and providing greater scaffolding and holistic support mechanisms.
  • Data Security: While protection of student data is paramount, transparency regarding its use, including all data generated through use, must be maintained.
  • Faculty Development and Support: The ability to succeed will depend directly on an institution's ability to train faculty and staff and to support them, not just in the use of AI tools and platforms, but also in the development of specialized tools, by including them in discussions with vendors and program developers from the outset.
  • Regulatory Modifications: Aspects such as regular and substantive interaction, interpretation, and perhaps modification of the credit hour, transferability of courses between institutions, and course accessibility across institutions must be addressed through a student-centered focus rather than one based on an institution's convenience or historical practices of exclusion. While the use of AI has the potential to positively transform learning at scale, this will only be possible by re-envisioning processes and modalities, removing artificial barriers, and staying focused on the mission of student success.

4. Operations: While the use of AI tools can provide significant benefits across campus, the current framework focuses on teaching and learning through aspects such as:

  • Monitoring and Retraining: Given the rapid evolution of AI, it is critical that the framework include mechanisms for monitoring performance, and for regularly benchmarking and updating existing systems, to assure not only accuracy and effectiveness, but also ongoing validity and viability.
  • Robustness and Reliability: Systems should be stable and able to perform as intended under all expected conditions.
  • Traceability and Explainability: Ensuring that decisions are not only explainable, but also traceable to the originating datasets and algorithms, for purposes of accountability and improvement.
  • Safety and Security: Constant surveillance of security measures across the physical-digital (phygital) systems ecosystem to protect operations from malware and attacks.

Given its transformational potential to enhance access and attainment in higher ed, as well as its increasing adoption and use in the workplace, there is a critical need for institutions of higher education to adopt a framework to guide both policy and practice, ensuring agency in shaping a future of greater access and attainment.

[1] OECD. Opportunities, guidelines, and guardrails for effective and equitable use of AI in education, 2023.

[2] Sebesta J, Davis VL. Policies & Practices. Toolkit. Dec 2023.

[3] Brandon E, Eaton L, Gauvin G, Papini A. Cross-campus approaches to building a generative AI policy. Educause Review, Dec 12, 2023.

[4] Karbhari VM. Defining a path to equitable AI in higher education. eCampus News, April 12, 2024.

[5] Floridi L. A unified framework of five principles for AI in society. In Ethics, Governance and Policies in Artificial Intelligence, Vol. 144, pp. 5-17. Springer International Publishing AG.


eSchool Media Contributors
