At Johns Hopkins University, Mathias Unberath and Catalina Gomez have developed an explainable AI tool designed to coach medical students in surgical techniques. The technology was trained on videos documenting the hand movements of expert surgeons and provides students with real-time, personalized advice while they practice suturing. Initial trials suggest the AI can accelerate skill development, offering a scalable complement to hands-on instruction as provider shortages increase the demand for training. The system not only rates a student's performance but also explains how their technique deviates from expert practice, enabling meaningful self-training.
AI Tool for Surgical Training
A new artificial intelligence tool developed at Johns Hopkins University aims to address the growing surgeon shortage by providing medical students with real-time feedback on surgical technique. Trained on videos of expert surgeons, the AI focuses on suturing practice, offering personalized advice as students perform the task. Rather than simply rating student performance, the technology explains how a student's technique deviates from expert practice, a key distinction from existing AI models.
The AI works by tracking hand movements as students close an incision, then immediately texting them a comparative analysis and suggestions for refining their technique. A study comparing AI coaching to video learning revealed faster improvement in students with existing surgical experience. Researchers were able to calculate performance changes before and after the AI intervention, demonstrating a measurable effect on technique.
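The article does not describe how the comparison against expert practice is computed, but the general idea of scoring a trainee's hand motion against an expert reference can be sketched in a few lines of code. The trajectory format, the use of dynamic time warping, and every name below are illustrative assumptions, not the Hopkins team's actual method.

```python
# Illustrative sketch only: trajectory format, DTW alignment, and all names
# here are assumptions for explanation, not the published system.
import numpy as np

def dtw_distance(student: np.ndarray, expert: np.ndarray) -> float:
    """Align two hand-movement trajectories (T x 3 arrays of x, y, z
    positions per video frame) and return their cumulative alignment cost."""
    n, m = len(student), len(expert)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(student[i - 1] - expert[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def deviation_score(student: np.ndarray, expert: np.ndarray) -> float:
    """Length-normalized deviation: lower means closer to the expert motion."""
    return dtw_distance(student, expert) / max(len(student), len(expert))

# Synthetic trajectories stand in for tracked hand positions.
rng = np.random.default_rng(0)
expert_traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
student_traj = expert_traj[::2] + rng.normal(scale=0.5, size=(100, 3))
print(f"deviation from expert: {deviation_score(student_traj, expert_traj):.2f}")
```

A score like this could then be turned into plain-language feedback (for example, flagging the segment of the suture where the alignment cost is highest), which is the kind of explanation, rather than a bare rating, that the article emphasizes.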
Future development focuses on making the tool more accessible; the team hopes to create a home-use version utilizing a smartphone and suturing kit. This aims to scale up medical training opportunities and address the increasing need for skilled surgeons. The work was supported by the Johns Hopkins DELTA Grant IO 80061108 and the Link Foundation Fellowship in Modeling, Simulation, and Training.
Explainable AI and Student Feedback
Developed at Johns Hopkins University, a new explainable AI tool offers medical students real-time, personalized feedback as they practice suturing. Unlike existing AI models that simply rate skill level, this technology identifies how a student's technique deviates from that of expert surgeons. The model was trained by tracking experts' hand movements; during practice, it texts students direct comparisons and refinement suggestions, aiming to provide objective assessment and accelerate learning.
This explainable AI shows particular promise for students with existing surgical experience. A study randomly assigned 12 students to either AI-guided practice or video comparison, finding that those with a “solid foundation” learned much faster with the AI’s guidance. Researchers can calculate performance changes before and after the AI intervention, objectively measuring progress toward expert-level practice.
The team intends to refine the model for broader accessibility, with a goal of allowing students to practice at home using a suturing kit and smartphone. This scalable approach addresses the increasing surgeon shortage and the challenges of providing adequate practice opportunities. The work is supported by grants including the Johns Hopkins DELTA Grant IO 80061108 and the Link Foundation Fellowship.
Study Results & Future Development
Initial trials of the new AI tool suggest it can significantly accelerate surgical training, particularly for students with existing suturing experience. A study randomly assigned 12 medical students either to train with real-time feedback from the AI or to compare their performance to a video of an expert. Results showed that, while beginner students still struggled, those with a solid foundation learned much faster with the AI's guidance, demonstrating its potential to improve skill development.
The AI functions as more than just a rating system: as an explainable model, it details how a student's technique deviates from an expert's. By tracking expert hand movements during incision closure, the model can text students immediate advice on refining their technique. Researchers can then calculate a student's performance before and after the AI intervention to measure progress and determine whether they are moving closer to expert practice.
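As a rough illustration of the before-and-after measurement described here, the sketch below computes a per-student improvement score and compares the two study arms. The scoring scale, group labels, and numbers are hypothetical placeholders; the article does not report how performance was quantified.

```python
# Illustrative sketch only: the scores and scale below are hypothetical,
# not results from the Hopkins study.
from statistics import mean

def improvement(pre: float, post: float) -> float:
    """Change in a technique score (higher = closer to expert practice)."""
    return post - pre

# Hypothetical (pre, post) scores for 12 participants split between the two
# arms described in the study: AI-guided practice vs. expert-video review.
ai_group    = [(52, 71), (60, 78), (48, 66), (65, 80), (55, 70), (58, 75)]
video_group = [(50, 58), (62, 68), (47, 52), (64, 71), (56, 61), (59, 66)]

ai_gain    = mean(improvement(pre, post) for pre, post in ai_group)
video_gain = mean(improvement(pre, post) for pre, post in video_group)

print(f"mean gain with AI coaching:  {ai_gain:.1f} points")
print(f"mean gain with video review: {video_gain:.1f} points")
```

Comparing mean gains across arms is the simplest way to express the study's qualitative finding that AI-guided students with a solid foundation improved faster; the actual analysis and effect sizes are not given in the article.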
Future development focuses on refining the model for ease of use and expanding access to surgical training. The team hopes to create a version that allows students to practice at home using a suturing kit and smartphone. This scalability, according to Unberath, is critical to addressing the increasing surgeon shortage and finding ways to provide more and better practice opportunities for medical students.
"We're at a pivotal time. The provider shortage is ever increasing and we need to find new ways to provide more and better opportunities for practice."
Mathias Unberath
