by Ruth Paterson, Head of Nursing, Edinburgh Napier University; Chair, Innovation & Pedagogy Strategic Policy Group, Council of Deans of Health
The Innovation and Pedagogy group at the Council of Deans of Health is leading the way in developing high-level principles for integrating AI into healthcare education, recognising its potential to enhance teaching, learning and student support. As AI tools become increasingly prevalent, institutions must collaborate to balance innovation with ethical considerations, ensuring that academic integrity and professional standards are upheld while fostering equitable access to AI-driven advancements.
Artificial intelligence is at the forefront of academic conversations, and the Innovation and Pedagogy group at the Council of Deans is prioritising the development of high-level principles on the application and use of AI in healthcare education. AI has been described as driving the fourth industrial revolution, and the first stage of this work is to understand how HEIs delivering healthcare education will use the technology, and subsequently how it will inform the development of future and innovative practice in this sphere.
As educational institutions it is important that we work collaboratively with each other, with students and with people who access services to inform the direction of this revolution. A recent analysis of institutional policy guidance and guidelines suggested that AI may be an opportunity to rethink learning and teaching[i]. For example, AI can provide writing support when English is not a student’s first language, ‘tutor’ support for brainstorming or idea clarification, and support for students with additional learning needs. It is purported that AI is a tool that can level the educational playing field, deepen discussions around critical appraisal and improve teaching and learning overall. An excellent resource to support HEIs in developing support for AI is Harvard’s metaLAB[ii], which offers educational resources, activities and interventions to develop students’ knowledge of AI. This approach can also support students and staff with collaborative learning models and build confidence in terms of graduate outcomes.
That said, as leaders of programmes that are professionally regulated, our duty is to verify that students’ work is their own and that we are producing health professionals who are highly skilled in healthcare delivery, appraisal of research evidence and innovative practice. Alignment with codes of professional practice is essential, as are ethical debates around governance and a robust approach to AI in healthcare practice and education. Intellectual dishonesty such as plagiarism has become a growing concern in education and practice, and has led HEIs and FE colleges to rethink how they assess students, using more authentic methods such as oral examinations, in-class tests and group discussions. This is welcome, but we must also evaluate how effective students are at verifying information derived from AI tools, thus promoting transparency in AI use.
In a recent survey conducted by the Higher Education Policy Institute (HEPI), 88% of students confirmed using generative AI to help support their studies, assessments, knowledge and understanding[iii]. This is a 35 percentage point increase in use compared with the same survey conducted by HEPI last year. Providing appropriate support for students and staff is key to spreading the effective and balanced use of AI across education provision, linked with clear guidance and policies. Collaboration between HEIs is also encouraged, and this is a space where the Council of Deans of Health will continue to work with members.
We must ensure that access to AI tools is fair and equitable, and that we listen both to those who are keen to embrace and test the capabilities of AI and to those who are staunchly opposed. As a collective we should be cognisant of the positive and negative impacts of AI in healthcare education and support our members to navigate a path that is aligned with academic and professional integrity. Moreover, in accordance with UNESCO’s recommendation, we need to consider achievement of the sustainable development goals relating to equitable access to AI[iv]. It is vital to acknowledge all aspects of the debate, protecting ethical integrity, privacy and security in this emerging area. The role of regulatory and professional bodies is also an important consideration in terms of the standards of education and practice, providing clear guidance for universities, employers and registrants.
Across academic institutions there has been a plethora of guidance for students and teachers on how to use large language models such as ChatGPT, Copilot and DeepSeek. As a Council we will draw on this knowledge and propose high-level principles aligned with regulatory frameworks and academic institutions. It is also our intention to provide practical examples of how AI can support innovative approaches to healthcare education. This is intended to add to the evidence base and build understanding of how students, educationalists and healthcare practitioners are applying AI to their professional practice, thus advancing knowledge in this area.
The timeline for this work is short, and the Innovation and Pedagogy group will lead on the development of the principles. Drawing on UK and international literature and consulting with educational experts, it is our intention to publish draft guidance on the Principles of Generative AI in Healthcare Education, which will be circulated for consultation with our members and regulators by summer 2025.
[i] McDonald, N., Johri, A., Ali, A. and Collier, A.H. (2025). Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines. Computers in Human Behavior: Artificial Humans, Volume 3.
[ii] The AI Pedagogy Project, metaLAB at Harvard.
[iii] Higher Education Policy Institute. HEPI/Kortext survey shows explosive increase in the use of generative AI tools by students.
[iv] UNESCO (2023). Ethics of Artificial Intelligence. Retrieved from https://www.unesco.org/en/artificial-intelligence/recommendation-ethics