
What doctoral programs must change in an AI-saturated research environment


In a landscape where text can be generated instantly, judgment, interpretation, and context remain unmistakably human


Generative AI has moved from novelty to a core tool in a remarkably short period of time. Doctoral students now routinely use AI tools to locate sources, summarize literature, generate outlines, and even draft sections of academic writing.

For many, this is not an ethical dilemma, but a pragmatic response to time constraints and rising expectations. The question is no longer whether generative AI will shape doctoral education. It already has. The more pressing question is whether doctoral programs, particularly practice-oriented Ed.D. programs, will adapt their structures to ensure that what students produce still reflects meaningful scholarship.

The Ed.D. has always occupied a distinctive space in graduate education. Unlike the Ph.D., which is traditionally oriented toward theory development, the Ed.D. emphasizes applied research grounded in problems of practice. Its purpose is to develop practitioner-scholars who can interpret data, evaluate programs, and lead organizational improvement. That applied orientation, however, makes Ed.D. programs especially vulnerable in an AI-saturated environment. When tools can generate plausible literature reviews or synthesize best practices in seconds, the risk is not simply academic dishonesty. It is the gradual erosion of authentic, context-driven inquiry.

One of the most immediate effects of generative AI is the compression of the research process. Much as programmers now use AI to accelerate development, academic tasks that once required days of searching and reading can be completed in minutes. Students can generate summaries of entire bodies of literature with a single prompt. While this efficiency is appealing, it creates a false sense of mastery. Synthesized summaries often flatten disagreement, obscure methodological differences, and present consensus where none exists. For graduate students, whose work depends on understanding how research applies in specific institutional contexts, this flattening is particularly problematic. The result is often work that appears polished but lacks depth and does not represent the student’s learning.

Generative AI also complicates long-standing assumptions about authorship and originality. In traditional doctoral work, writing has been both a means of communication and a demonstration of thinking. The act of drafting forced students to wrestle with ideas, organize arguments, and clarify their positions. When AI tools generate coherent prose on demand, that cognitive process can be partially outsourced. The challenge for faculty is not simply detecting AI use, which is difficult, if not impossible. It is determining whether the student’s intellectual contribution remains central to the work.

At the same time, Ed.D. programs are not without advantages in this moment. Their emphasis on local context, field-based inquiry, and practitioner insight provides a natural counterweight to AI-generated generalization. AI can summarize research trends, but it cannot replicate the lived realities of a specific school district, campus, or community. It cannot conduct an interview with a superintendent navigating budget cuts or observe how a policy change affects classroom practice. In that sense, the features that define the Ed.D. can serve as safeguards if graduate programs intentionally lean into them. Doing so, however, requires more than minor policy adjustments. It calls for structural changes in how doctoral work is defined, supported, and assessed.

Changes to the process

First, Ed.D. programs must reconsider what they mean by originality. In an AI-saturated environment, originality cannot be reduced to producing text that has not appeared elsewhere. Instead, it must be anchored in the interpretation of context. A student’s contribution should be judged by how effectively they connect research to the specific conditions of their organization, how they identify nuances that generic summaries overlook, and how they generate insights that are actionable within their setting. This shift moves the emphasis from writing as product to thinking as situated practice.

Second, programs need to make the research process more visible. Traditional dissertation models often focus on the final document, with limited insight into how that document was produced. In an era of AI-assisted writing, this opacity is no longer tenable. Requiring research logs, annotated drafts, or reflective memos can help faculty see how students are engaging with sources, developing arguments, and using AI tools. Transparency does not eliminate AI use, nor should it. It reframes that use within a process that prioritizes intellectual accountability and measures student learning rather than student output.

Third, Ed.D. programs should place renewed emphasis on primary data collection and analysis. Interviews, observations, and local data sets are inherently resistant to AI substitution because they are grounded in specific contexts. When students are required to engage deeply with primary data, they must move beyond generalized claims and grapple with complexity. This does not diminish the importance of the literature review, but it rebalances the dissertation toward evidence that cannot be easily generated or replicated by a large language model (LLM) or other generative AI tools.

Fourth, AI literacy must become a core competency rather than an implicit expectation. Many doctoral students are already using AI tools, but their understanding of those tools is uneven. Programs should provide explicit guidance on how to use AI to support, rather than replace, scholarly work. This includes instruction on prompt design, source verification, and the identification of bias and hallucination within AI outputs. Framing AI as a research assistant rather than a co-author helps clarify its role while acknowledging its utility.

Finally, Ed.D. programs should reconsider the structure of the dissertation itself. The traditional monograph model, while still valuable, is not the only way to demonstrate doctoral-level competence. Portfolio-based approaches, policy briefs, implementation plans, and other applied artifacts may better capture the kinds of work Ed.D. graduates are expected to do. Incorporating real-time oral presentations can further ensure that students can articulate and defend their thinking independently of AI tools.

Implications for faculty

These changes have implications for faculty. Advising in an AI-saturated environment requires a shift from evaluating finished products to coaching inquiry. Faculty must engage more deeply in the iterative stages of research, asking probing questions that reveal how students are making sense of their work. This may increase the time and effort required for supervision; however, it aligns more closely with the developmental goals of doctoral education.

The risks of inaction are significant. If Ed.D. programs continue to rely on structures that assume traditional writing processes, they may find it increasingly difficult to distinguish between superficial synthesis and genuine insight. Over time, this could undermine the credibility of the degree and the confidence of employers who rely on Ed.D. graduates as leaders and decision-makers.

Yet there is opportunity. By re-centering context, process, and practitioner insight, Ed.D. programs can adapt to the presence of generative AI and strengthen their core mission. The goal is not to eliminate AI from doctoral work. That would be both impractical and counterproductive. The goal is to ensure that, in a landscape where text can be generated instantly, judgment, interpretation, and context remain unmistakably human.

In that sense, graduate programs are not threatened by AI. They are clarified by it.


Steven M. Baule, Ed.D., Ph.D.