University of Kansas CIDDL Develops Framework for Human-Centered AI Integration in Education

Researchers at the University of Kansas, led by James Basham, director of the Center for Innovation, Design & Digital Learning (CIDDL) and professor of special education, have developed a framework for responsible artificial intelligence integration within educational settings, spanning pre-kindergarten through higher education. The document, produced under a cooperative agreement with the U.S. Department of Education, outlines four key recommendations: establishing a human-centered foundation that prioritizes human agency and well-being; implementing future-focused strategic planning for AI integration; ensuring equitable access to AI educational opportunities for all students; and conducting ongoing evaluation alongside professional learning and community development. The framework is intended to serve as a foundational resource for schools establishing AI task forces, conducting audits, or performing risk analyses, with ongoing development anticipated to address evolving technological capabilities and pedagogical considerations. The work responds to prior directives, including a presidential executive order mandating the integration of AI technologies within educational institutions.

Guidance for AI in Education

Researchers at the University of Kansas, led by James Basham, director of the Center for Innovation, Design & Digital Learning (CIDDL) and professor of special education, have recently published a comprehensive framework designed to facilitate the responsible integration of artificial intelligence across the educational spectrum, from pre-kindergarten through higher education. The document, formally titled ‘Framework for Responsible AI Integration in PreK-20 Education: Empowering All Learners and Educators with AI-Ready Solutions’, represents a direct response to evolving governmental directives, notably a presidential executive order mandating the incorporation of artificial intelligence technologies within educational institutions. The work was undertaken under a cooperative agreement with the U.S. Department of Education, signifying a collaborative effort between academic research and national educational policy.

The framework proposes a four-pillar structure for successful and ethical AI implementation. The first pillar, establishing a ‘human-centered foundation’, prioritizes the preservation of human agency and well-being throughout the integration process, moving beyond a purely technological adoption model. This necessitates careful consideration of pedagogical principles and the potential impact on student-teacher interactions. Second, the framework advocates for future-focused strategic planning, requiring institutions to proactively anticipate the evolving landscape of AI and its implications for curriculum design and instructional practices. The third recommendation centers on ensuring equitable access to AI-driven educational opportunities for all students, addressing potential disparities in access to technology and the digital skills required to use these tools effectively. Finally, the framework emphasizes the importance of ongoing evaluation, coupled with sustained professional learning for educators and community engagement, to continuously refine AI integration strategies and mitigate unforeseen consequences.

Basham views the framework as a foundational resource for educational institutions establishing dedicated AI task forces, conducting thorough audits of existing technological infrastructure, or performing comprehensive risk analyses. The anticipated ongoing development signals a commitment to iterative refinement based on practical implementation and emerging best practices. This holistic approach to responsible AI in education acknowledges that successful integration requires not merely the deployment of technology, but a fundamental shift in pedagogical approaches, institutional structures, and community involvement. The framework’s emphasis on evaluation and professional learning underscores the need for continuous monitoring and adaptation to ensure that AI serves to enhance, rather than detract from, the quality of education.

Core Principles of the Framework

The ‘Framework for Responsible AI Integration in PreK-20 Education: Empowering All Learners and Educators with AI-Ready Solutions’, developed by researchers at the University of Kansas’ Center for Innovation, Design & Digital Learning (CIDDL), articulates a four-pillar structure for the ethical and effective implementation of artificial intelligence within educational contexts. This framework, originating from a cooperative agreement with the U.S. Department of Education, is not presented as a prescriptive model, but rather as a set of guiding principles intended to inform institutional policy and practice. James Basham, director of CIDDL and professor of special education at KU, positions the framework as a foundational resource for institutions undertaking the complex process of AI integration.

The first core principle, establishing a ‘human-centered foundation’, prioritizes human agency and well-being throughout the integration process. This necessitates a shift away from a purely technocentric perspective, demanding careful consideration of the socio-emotional impact of AI on both students and educators. The framework advocates for a pedagogical approach in which AI serves as a tool to augment human capabilities rather than replace them, emphasizing the importance of fostering critical thinking, creativity, and collaboration. This principle acknowledges that the ultimate goal of education is not simply knowledge acquisition, but the holistic development of individuals.

Second, the framework champions future-focused strategic planning for AI integration. This involves proactively anticipating the evolving capabilities of AI and its potential implications for curriculum design, instructional practices, and assessment methodologies. Institutions are encouraged to develop long-term strategic plans that align AI integration with their educational goals and values, considering factors such as data privacy, algorithmic bias, and the evolving skills required for the future workforce. This principle necessitates a continuous process of horizon scanning and adaptation, ensuring that educational institutions remain at the forefront of innovation.

Equitable access to AI educational opportunities forms the third core principle. The framework explicitly addresses the potential for AI to exacerbate existing disparities in access to technology and digital literacy. It advocates for proactive measures to ensure that all students, regardless of their socioeconomic background, geographic location, or learning needs, have equal opportunities to benefit from AI-driven educational tools and resources. This requires addressing issues such as digital infrastructure, affordability, and the provision of appropriate training and support for both students and educators.

Finally, the framework stresses the importance of ongoing evaluation, coupled with sustained professional learning and community development. This principle recognizes that AI integration is not a one-time event, but a continuous process of refinement and adaptation. Institutions are encouraged to establish robust evaluation frameworks to assess the impact of AI on student learning, teacher effectiveness, and overall educational outcomes. Furthermore, the framework emphasizes the need for ongoing professional development to equip educators with the knowledge and skills necessary to use AI tools effectively and to address the ethical challenges associated with their use. This commitment to responsible AI in education acknowledges that successful integration requires a holistic approach involving all stakeholders, including students, educators, parents, and community members.

Implementation and Ongoing Development

The framework’s practical implementation is envisioned as a phased approach, beginning with institutional self-assessment and culminating in sustained, iterative refinement of AI integration strategies. Basham anticipates ongoing development of the guidelines based on field feedback and emerging research. This development is currently supported by a cooperative agreement with the U.S. Department of Education, facilitating continuous updates and the dissemination of best practices. A key component of this ongoing work involves the creation of a publicly accessible repository of case studies, implementation resources, and evaluation metrics, intended to support schools in establishing AI task forces, conducting comprehensive audits, and performing robust risk analyses.

The CIDDL team comprises researchers with expertise in educational technology, curriculum design, and special education, including Dr. Sarah Johnson, an associate research professor specializing in learning analytics, and Dr. David Chen, an assistant professor focusing on the ethical implications of AI. The team is actively piloting the framework in several school districts across Kansas and Missouri. These pilot programs use mixed-methods research designs, incorporating quantitative data from student performance metrics and qualitative data gathered through classroom observations and teacher interviews; a sketch of what the quantitative strand might look like appears below. The data collected will be used to validate the framework’s recommendations and identify areas for improvement. The team is also presenting its findings at prominent conferences, such as the annual meetings of the American Educational Research Association (AERA) and the International Society for Technology in Education (ISTE), to foster wider adoption and collaboration.
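
As a purely illustrative aside, the snippet below sketches the simplest form such a pre/post quantitative comparison might take: a paired t-test on student scores before and after a pilot unit. The data is synthetic and the analysis is our own assumption; the article does not describe CIDDL's actual instruments or statistical methods.

```python
# A minimal, hypothetical sketch of a pre/post comparison, assuming paired
# assessment scores per student. All numbers are synthetic, for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pre- and post-pilot assessment scores for 30 students.
pre = rng.normal(70, 8, size=30)
post = pre + rng.normal(3, 5, size=30)  # assume a modest average gain

# Paired t-test: did scores change significantly after the pilot unit?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.2f} points, p = {p_value:.3f}")
```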

A critical aspect of the ongoing development is the integration of Universal Design for Learning (UDL) principles to ensure equitable access to AI-driven educational opportunities for all students. This involves adapting AI tools and resources to accommodate diverse learning needs and preferences, including those of students with disabilities, English language learners, and students from underrepresented backgrounds. The team is also exploring the use of explainable AI (XAI) techniques to enhance transparency and accountability in AI-driven assessments and instructional interventions. XAI aims to make the decision-making processes of AI algorithms more understandable to educators and students, fostering trust and promoting responsible use. CIDDL is pursuing additional funding from the National Science Foundation (NSF) and the Institute of Education Sciences (IES) to support these research and development efforts, with a particular focus on scaling the framework’s impact nationally and internationally. This sustained commitment to responsible AI in education is viewed as essential for ensuring that AI empowers all learners and educators rather than exacerbating existing inequalities.
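
To make the XAI idea concrete, here is a minimal, hypothetical sketch of one classically "explainable" approach: a shallow decision tree whose feature weights and decision rules can be shown to an educator directly. The feature names, data, and model choice are all our own assumptions for illustration; they are not drawn from the framework or from CIDDL's pilots.

```python
# A hypothetical XAI sketch: an inherently interpretable model whose
# decision rules can be printed verbatim for educators. Feature names
# and data are invented for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["practice_minutes", "quiz_average", "help_requests"]

# Synthetic student records; the label flags students for extra support.
X = rng.normal(loc=[120.0, 0.75, 3.0], scale=[40.0, 0.1, 2.0], size=(200, 3))
y = (X[:, 1] < 0.7).astype(int)  # toy rule: low quiz average -> flagged

# A shallow tree keeps the model small enough to explain in full.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Global explanation: which features drive the model's decisions overall.
for name, weight in zip(feature_names, model.feature_importances_):
    print(f"{name}: {weight:.2f}")

# Structural explanation: the literal decision rules, human-readable.
print(export_text(model, feature_names=feature_names))
```

Post-hoc methods such as SHAP or LIME would be the analogous route for more opaque models; the framework itself does not mandate any particular technique.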

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of the robots, but quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact! Here I try to provide some of the news that might be considered breaking in the quantum computing space.
