This story originally appeared on KU News and is republished with permission.
Key points:
- Establishing an AI integration task force is key among recommendations
Researchers at the University of Kansas have produced a set of guidelines to help educators from preschool through higher education responsibly implement artificial intelligence in a way that empowers teachers, parents, students and communities alike.
The Center for Innovation, Design & Digital Learning at KU has published “Framework for Responsible AI Integration in PreK-20 Education: Empowering All Learners and Educators with AI-Ready Solutions.” The document, developed under a cooperative agreement with the U.S. Department of Education, is intended to provide guidance on how schools can incorporate AI into their daily operations and curriculum.
Earlier this year, President Donald Trump issued an executive order instructing schools to incorporate AI into their operations. The framework is intended to help all schools and educational facilities do so in a manner that fits their unique communities and missions.
“We see this framework as a foundation,” said James Basham, director of CIDDL and professor of special education at KU. “As schools consider forming an AI task force, for example, they’ll likely have questions on how to do that, or how to conduct an audit and risk analysis. The framework can help guide them through that, and we’ll continue to build on this.”
The framework features four primary recommendations.
- Establish a stable, human-centered foundation.
- Implement future-focused strategic planning for AI integration.
- Ensure AI educational opportunities for every student.
- Conduct ongoing evaluation, professional learning and community development.
First, the framework urges schools to keep humans at the forefront of AI plans, prioritizing educator judgment, student relationships and family input on AI-enabled processes, and not relying on automation for decisions that affect people. Transparency is also key: schools should communicate how AI tools work and how decisions are made, and ensure compliance with student protection laws such as the Individuals with Disabilities Education Act and the Family Educational Rights and Privacy Act, the report authors write.
The document also outlines recommendations for how educational facilities can implement the technology. Establishing an AI integration task force that includes educators, administrators, families, legal advisers and specialists in instructional technology and special education is key among the recommendations. The document also shares tips on how to conduct an audit and risk analysis before adoption, weighing how tools can affect student placement and identification as well as possible algorithmic error patterns. Because the technologies are trained on human data, they run the risk of repeating the same mistakes and biases humans have made, Basham said.
That idea is also reflected in the framework’s third recommendation. The document encourages educators to commit to learner-centered AI implementation that considers all students, from those in gifted programs to students with cognitive disabilities. AI tools should be prohibited from making final decisions on IEP eligibility, disciplinary actions and student progress, and mechanisms should be put in place that allow students, teachers and parents to give feedback on their AI educational experiences, the authors wrote.
Finally, the framework urges ongoing evaluation, professional learning and community development. As the technology evolves, schools should regularly re-evaluate it for unintended consequences and gather feedback from those who use it. Training, both at implementation and in ongoing installments, will be necessary to address overuse or misuse, clarify who is responsible for monitoring AI use, and ensure both the school and the community stay informed about the technology.
The framework was written by Basham; Trey Vasquez, co-principal investigator at CIDDL, operating officer at KU’s Achievement & Assessment Institute and professor of special education at KU; and Angelica Fulchini Scruggs, research associate and operations director for CIDDL.
Educators interested in learning more about the framework or the use of AI in education are invited to connect with CIDDL. The center’s site includes data on emergent themes in state-level AI guidance and information on how it supports educational technology in K-12 and higher education. As artificial intelligence finds new uses and educators are increasingly expected to adopt it in schools, the center’s researchers said they plan to continue helping educators implement it in ways that benefit schools, students of all abilities and communities.
“The priority at CIDDL is to share transparent resources for educators on topics that are trending and in a way that is easy to digest,” Fulchini Scruggs said. “We want people to join the community and help them know where to start. We also know this will evolve and change, and we want to help educators stay up to date with those changes to use AI responsibly in their schools.”