Key points:
- Higher-ed professionals focus on ethical, relational, and professional implications of integrating AI into their workflows
- HBCUs and the potential for disruptive AI innovation
- Universities can’t block AI use in applications, but they can ensure fairness
- For more news on AI in higher ed, visit eCN’s AI in Education hub
Institutions in today’s higher education landscape are under growing pressure to do more with less. Budget cuts and hiring freezes have led many to explore new technologies, particularly artificial intelligence (AI), to build efficiency and streamline operations across various student affairs offices. From admissions to advising, professionals are experimenting with AI to manage increasing workloads, support student success, and make data-informed decisions.
Our research team has been exploring the implications of AI in higher education through 34 semi-structured interviews with student affairs professionals in advising and admissions at our institution. These individuals are responsible for high-volume, high-stakes tasks, making them well-positioned to speak to the benefits and challenges of AI adoption.
Through the interviews, we identified several common themes regarding what motivates professionals to use AI in their work and, in particular, what barriers they encounter when attempting to do so. While the contexts of admissions and advising differ, both groups shared overlapping perspectives on AI’s potential and its limitations.
Common motivations across admissions and advising
Across both domains, a key attraction of AI is its potential to improve efficiency and reduce staff workload by automating time-consuming tasks. Professionals described the burden of routine, labor-intensive work, such as calculating GPAs or identifying unregistered students, that AI could automate. Others highlighted how AI could assist with documentation by generating condensed, accurate notes that reduce cognitive load and support more informed decision-making. Professionals also saw AI as a way to minimize personal bias, particularly in admissions decisions or academic advising interventions such as placing students on probation. Notably, none of the interviewed professionals feared losing their jobs to AI; instead, they viewed it as a tool to enhance their effectiveness rather than replace their core responsibilities.
Common barriers across admissions and advising
Despite their openness to AI’s potential, participants cited several significant barriers. Many professionals still lack awareness of what AI can do, and few have had the time or training to explore its capabilities. There is a clear need for professional development opportunities to build digital literacy and confidence in AI tools.
Another significant theme was the importance of human connection. In admissions, professionals emphasized the value of engaging with students, their parents, and high school guidance counselors, relationships that are unique to the admissions context and difficult, if not impossible, to replicate through AI. Similarly, advising professionals highlighted the relational nature of their work, expressing concern that increased automation could weaken the personal interactions essential to supporting students effectively.
Resistance to change also emerged, often rooted in departmental cultures and perceived threats to professional identity. While academic advisors were wary of AI and doubted it could replicate the social and emotional guidance they provide, admissions professionals resisted it more strongly. They felt that AI threatened the core purpose of their work, reviewing applications and making nuanced decisions based on complex data, and they expressed greater trust in their colleagues’ judgment than in algorithmic decision-making, citing the need for a holistic understanding of each applicant. For many in admissions, resistance stemmed from professional pride and identity: using AI threatened the values and skill sets that define their profession. Advising professionals, by contrast, saw AI as a potential enhancement to their work rather than a replacement for it.
Ethical concerns also played a role. Several admissions professionals noted that relying on AI seemed misaligned with their profession’s expectations for fairness and integrity, and many saw it as a way of shirking their responsibilities. In particular, some admissions professionals expressed discomfort using AI in their own decision-making processes while simultaneously discouraging applicants from using AI tools in their application materials.
Implications and next steps
Our findings suggest that professionals in both admissions and advising are curious about AI and see its promise, but significant structural and cultural barriers remain. Notably, concerns about job loss were absent; participants focused instead on the ethical, relational, and professional implications of integrating AI into their workflows. To move forward, institutions should invest in training, foster cross-departmental dialogue, and create low-risk opportunities to experiment with AI in ways that respect the human-centered nature of student affairs work.
As AI continues to evolve, higher education professionals are increasingly interested in exploring how it can enhance student services. While challenges around implementation and readiness remain, our findings indicate a willingness to innovate, especially when AI is viewed as a tool to support, rather than replace, human connection. With the right training, leadership support, and open dialogue about ethics and professional identity, AI has the potential to complement the relational work that lies at the heart of admissions and advising.