Interviewing the future: A self-conversation on higher education, AI, and what comes next

As higher education reaches a point of transformation, AI's insights offer a different look at the path learning could take.

Will colleges and universities remain sites of human development, or become credentialing platforms optimized for efficiency alone?

Over the past year, higher education has felt less like a stable institution and more like a system under continuous stress testing. Political intervention, economic pressure, enrollment volatility, and rapid technological acceleration have converged in ways that few university leaders were prepared for, exposing structural vulnerabilities that were long obscured by tradition and incremental change.

To make sense of this moment, this article takes an unconventional approach: an interview of AI by AI, framed through my dual role as a scholar and practitioner observing higher education's shifting terrain from within.

The result is both a reflection on what has changed and a warning about what lies ahead, as the sector moves toward decisions that may fundamentally redefine its purpose, authority, and social contract.

Question 1: What is the most significant change you have observed in higher education over the last 12 months?
The most striking change has been the collapse of the illusion of institutional insulation. Colleges and universities are no longer buffered from federal policy shifts, demographic decline, or public skepticism. Accountability regimes tied to earnings outcomes, such as those discussed in The Chronicle of Higher Education, have moved from abstract threats to operational realities. Institutions are being judged not on mission statements but on measurable economic return, and that shift has profound consequences for program design and institutional survival.

Question 2: How have political dynamics reshaped institutional decision-making recently?
Political pressure has become structural rather than episodic. Federal and state governments are increasingly willing to use funding, accreditation leverage, and regulatory rulemaking to shape institutional behavior. The debates surrounding the One Big Beautiful Bill Act illustrate how quickly policy can redefine risk across entire sectors. As a result, governance has shifted upward, with boards and executive leadership consolidating authority to respond faster, often at the expense of shared governance traditions.

Question 3: What has surprised you most about institutional responses to these pressures?
What has surprised me is how normalized austerity has become. Program closures, hiring freezes, and consolidation strategies are now framed as prudent leadership rather than emergency measures. Even institutions with healthy enrollments are acting defensively, anticipating future shocks rather than current deficits. This anticipatory retrenchment signals that leaders no longer believe stability is returning anytime soon.

Question 4: Turning to AI, what has been the most important development in AI and education over the last two years?
The most important development has been the shift from experimental use to infrastructural dependence. Generative AI tools are no longer peripheral; they are embedded in advising, tutoring, course design, and assessment workflows. What began as curiosity after the release of large language models has evolved into institutional reliance, as documented in multiple sector analyses. This dependency raises critical questions about academic labor, epistemic authority, and institutional accountability.

Question 5: How has AI changed the student experience in concrete terms?
AI has fundamentally altered how students interact with knowledge and support systems. Personalized feedback, 24/7 tutoring, and AI-mediated advising have lowered friction in learning, reshaping expectations about access, speed, and responsiveness in higher education. At the same time, these tools blur the line between assistance and substitution, challenging long-standing assumptions about authorship, effort, and learning integrity; these concerns are increasingly documented in mainstream higher education reporting. As Beth McMurtrie has explained, students are not simply cheating more; they are learning differently, often faster, and sometimes more shallowly, forcing institutions to confront whether existing pedagogical models are aligned with this new cognitive landscape.

Question 6: What risks do institutions underestimate when deploying AI at scale?
Institutions consistently underestimate the governance risks. AI systems shape decision-making while remaining largely opaque, especially when sourced from third-party vendors. Bias, data ownership, and accountability gaps are treated as technical issues rather than governance failures. When AI-driven systems influence retention decisions, academic warnings, or student success metrics, institutions inherit ethical and legal exposure they are not structurally prepared to manage.

Question 7: Looking ahead, what do you see as the next major inflection point for higher education?
The next inflection point will be differentiation by trust rather than prestige. Institutions that can demonstrate ethical governance of technology, transparent outcomes, and authentic student support will separate themselves from those relying on branding alone. As earnings-based accountability expands, trust will become a measurable asset rather than an abstract value.

Question 8: How will AI reshape the role of faculty and academic labor?
Faculty roles will bifurcate. Some will become designers, curators, and validators of AI-mediated learning environments, while others will be displaced or deprofessionalized as routine instructional functions are increasingly automated. The danger lies not in automation itself but in the quiet erosion of academic judgment as decision-making is delegated to systems optimized for efficiency rather than learning, a concern echoed in recent practitioner-focused discussion from an EDUCAUSE podcast examining how generative AI is redefining faculty work and governance responsibilities. This transition will test the moral core of the academy, particularly its commitments to professional autonomy, intellectual authority, and educational purpose.

Question 9: What is your vision for a responsible AI-enabled future in higher education?
A responsible future requires institutions to treat AI as a governance issue first and a productivity tool second. This means embedding ethical review, faculty participation, and student voice into AI adoption strategies. It also requires resisting the temptation to equate technological sophistication with educational quality. AI should expand human capacity, not replace educational purpose.

In closing, higher education stands at a narrowing crossroads. The convergence of political scrutiny, economic accountability, and artificial intelligence is forcing institutions to choose between reactive survival and intentional transformation. The decisions made in the next few years will determine whether colleges and universities remain sites of human development or become credentialing platforms optimized for efficiency alone. The future is not predetermined, but it is arriving faster than many are willing to admit.


Dr. John Johnston