Key points:
- The distinction between authorship and tool use is now a core academic skill
Generative AI has quickly moved from novelty to infrastructure-like ubiquity in education. Faculty and students now routinely use large language models (LLMs) to brainstorm research questions, edit drafts, summarize transcripts, and even help design rubrics or analytical frameworks.
Yet despite this rapid adoption, one of the most pressing academic questions remains unresolved: When should the use of AI be cited as a source, and when is it simply a tool that needs no more acknowledgement than spell-check?
The confusion is understandable. There has been considerable discussion about whether an LLM could or should be listed as a co-author. Taylor & Francis’s author services page articulates why an AI tool cannot be an author:
You must not list AI tools as a co-author of your article. This is because authorship requires taking accountability for content, consenting to publication via a publishing agreement, and giving contractual assurances about the integrity of the work. These are uniquely human responsibilities that cannot be undertaken by AI tools.
Academics have long distinguished between sources that inform scholarship and tools that support workflow. Generative AI blurs that line because it can perform both functions in the same interaction. Without a clear framework, students risk either over-disclosing tool use or, more dangerously, misrepresenting the intellectual work of generative AI as their own.
What follows is a practical, discipline-agnostic method for determining when AI must be cited, when it should be disclosed, and when it requires no acknowledgement at all. The framework rests on a single principle: If AI contributes intellectual content that appears in the final work, it must be cited as a source. If AI merely supports the process of writing or thinking, it should be disclosed but not cited. This distinction preserves the integrity of authorship while allowing educators to leverage AI responsibly.
When AI must be cited
AI becomes a source when it provides original intellectual contributions that are incorporated into a scholarly product. This would include the following:
- Generating new conceptual frameworks or theoretical models
  Example: “Create a framework for equity-centered theatre pedagogy.”
- Drafting substantive prose
  Example: “Write a synthesis of literature on immersive learning in the arts.”
- Interpreting or analyzing data
  Example: “Analyze these interview transcripts and identify emergent themes.”
- Producing methodological tools
  Example: “Develop a rubric for assessing student performance in applied theatre.”
- Supplying factual or normative claims not traceable to human sources
  Example: “List best practices in academic theatre faculty governance.”
In each of these cases, the generative AI tool functions as an intellectual contributor, and failing to cite it is no different from failing to cite a human author. That said, most instructors will not accept generative AI material that has not been reviewed and revised by a human; the main exception is the creation of graphics, maps, or charts from a clear prompt. AI-generated content that rises to the level of citation will therefore nearly always depend on the specific context of an assignment. In those cases, students may choose, or instructors may require them, to include an appendix showing the complete AI-generated response.
Additionally, citations exist so that the reader, teacher, or other interested parties can return to the student’s original source. With AI tools, even the exact same prompt in the same tool is not guaranteed to return the same answer. Acknowledging the use of AI tools may therefore serve scholarship better than formal citation when information was generated by an AI tool but significantly reviewed, and potentially revised, by the human author(s).
A simple decision tree for students
Faculty can offer students this three-question test to determine how to disclose their use of AI tools.
- Did AI generate ideas, analysis, or prose that appear in my work?
- Yes → Cite AI as a source.
- No → Continue.
- Did AI meaningfully support my thinking or writing process, including editing rough drafts?
- Yes → Add a disclosure statement.
- No → Continue.
- Was AI only used for mechanical or formatting tasks?
- Yes → No action needed.
This framework is easy to teach, easy to audit, and easy to defend in academic integrity hearings.
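For courses where assignments themselves involve programming, the three-question test also translates directly into a short conditional that instructors could adapt into a submission checklist. The sketch below is illustrative only; the function name and boolean flags are hypothetical, not drawn from any standard or tool.

```python
# Illustrative sketch of the three-question test as a conditional.
# All names here (ai_use_guidance, the flag parameters) are hypothetical.

def ai_use_guidance(generated_content: bool,
                    supported_process: bool,
                    mechanical_only: bool) -> str:
    """Map a pattern of AI use to the appropriate disclosure action."""
    if generated_content:
        # Q1: AI generated ideas, analysis, or prose that appear in the work.
        return "Cite AI as a source."
    if supported_process:
        # Q2: AI meaningfully supported thinking or writing (e.g., editing drafts).
        return "Add a disclosure statement."
    if mechanical_only:
        # Q3: AI handled only mechanical or formatting tasks.
        return "No action needed."
    return "No AI use to report."

# Example: a student who used AI only to revise a rough draft
print(ai_use_guidance(generated_content=False,
                      supported_process=True,
                      mechanical_only=False))
# Prints: Add a disclosure statement.
```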
How to cite AI (APA 7th Ed. Example)
Reference list:
OpenAI. (2025). ChatGPT (GPT-5.2) [Large language model]. https://chat.openai.com/
In-text citation:
(OpenAI, 2025)
For more on how to cite ChatGPT or similar tools, see APA’s style guide. For major research projects, best practice is to include a brief appendix with the prompts used so that the AI’s contribution is transparent and reproducible.
When AI should be disclosed but not cited
AI frequently supports scholarship without generating intellectual content that appears in the final manuscript. These cases require disclosure, not formal citation. Some examples would include:
- Brainstorming research questions
- Suggesting titles for the document or drafting an abstract
- Revising drafts for clarity or academic tone
- Summarizing the author’s own field notes
- Generating survey questions based on previously developed research questions
- Reformatting or re-organizing material supplied by the author
Here, AI does not function as an authorial source. It is a process facilitator, not unlike a writing tutor or peer reviewer, neither of which would normally be cited or acknowledged in an academic paper.
Sample disclosure statement
AI tools Gemini 3 and ChatGPT 5.1 were used for limited support in brainstorming and editorial refinement. All analysis, interpretation, and conclusions are the author’s own.
This statement should be placed in the methods section of a formal research paper, where it explains how AI tools were used in the study. For other papers, a note like the one above can be added at the end of the paper before the reference list, or at the bottom of the title page. Instructors should provide direction on how to cite or disclose generative AI use within their syllabi or on the assignment sheet. For example:
You are encouraged to use generative AI tools to help prepare for assignments and projects (e.g., to help with brainstorming, etc.). You are welcome to use AI tools to help revise and edit your work (e.g., to help identify flaws in reasoning, spot confusing or underdeveloped paragraphs, or to simply fix citations). When submitting work, clearly identify any writing, text, or media generated by AI. This can be done in a variety of ways. One suggestion is to add a disclosure statement at the end of the paper, prior to the reference list, outlining which AI tools you used and how you used each.
When AI requires no acknowledgement
Certain uses of generative AI tools are functionally indistinguishable from existing software tools and require no disclosure or citation. For instance, spell-checking and grammar correction are embedded in most word processing programs and do not need to be disclosed. Similarly, using AI to help format references or to create slide layouts does not need to be disclosed. In these cases, AI is simply part of the digital infrastructure of writing.
Why this matters for education
Generative AI is not going away. It will become more fully embedded in learning management systems, word processors, qualitative coding platforms, and other research environments. Without clear norms, institutions face a false binary: Either ban AI or tolerate invisible authorship. Neither is acceptable.
Transparent attribution and disclosure do three critical things:
- Preserve academic integrity by ensuring that intellectual contributions are properly acknowledged.
- Protect students and faculty from accusations of misrepresentation or plagiarism.
- Align with emerging journal, IRB, and accreditation expectations, which increasingly require explicit statements about AI involvement.
Moving forward
The distinction between authorship and tool use is now a core academic skill. Institutions should embed this framework into syllabi, program handbooks, IRB protocols, and dissertation templates. Doing so avoids punitive enforcement models and replaces them with professional norms grounded in transparency and integrity. AI is not the enemy of scholarship, but misattributed AI can be. By teaching students when to cite, when to disclose, and when no action is needed, we preserve the values of academic work in a digital future that is already here.
However, students should rarely need to cite AI as a source, because any AI-generated content must be reviewed by a human. Human agency remains an essential facet of scholarship and academic work.
