As artificial intelligence (AI) becomes increasingly prevalent in healthcare, ensuring its safe implementation and use is crucial. According to new guidance published in the Journal of the American Medical Association, organizations and clinicians must take steps to prevent potential patient harm. Dean Sittig, PhD, professor at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine, developed a pragmatic approach for healthcare organizations and clinicians to monitor and manage AI systems.
The guidance emphasizes the need for robust governance systems, rigorous testing, and transparency with patients when AI is used in care decisions. Key recommendations include establishing dedicated committees to oversee AI system deployment, formally training clinicians on AI usage and risk, and maintaining a detailed inventory of AI systems. By working together, healthcare providers, AI developers, and electronic health record vendors can build trust and promote the safe adoption of AI in healthcare.
Ensuring AI Safety in Clinical Care: A Pragmatic Approach
The increasing prevalence of artificial intelligence (AI) in healthcare has raised concerns about its safe implementation and use in real-world clinical settings. To address this issue, researchers from UTHealth Houston and Baylor College of Medicine have published guidance in the Journal of the American Medical Association on ensuring AI safety in clinical care.
According to Dean Sittig, PhD, professor at the McWilliams School of Biomedical Informatics at UTHealth Houston, “We often hear about the need for AI to be built safely, but not about how to use it safely in healthcare settings.” This underscores the need for a pragmatic approach to monitoring and managing AI systems in clinical care. Sittig and his co-author, Hardeep Singh, MD, MPH, professor at Baylor College of Medicine, drew on expert opinion, literature reviews, and their experience with health IT use and safety assessment to develop the approach.
The guidance emphasizes that healthcare organizations and clinicians must take a proactive role in ensuring AI safety, including robust local governance and testing processes. As Singh noted, “Healthcare delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of healthcare and patient outcomes.”
Recommended Actions for Healthcare Organizations
To ensure AI safety in clinical care, healthcare organizations should take several key steps. First, they should review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI’s safety and effectiveness. That means evaluating how AI systems perform in the organization’s own clinical environment to identify potential risks or limitations.
Second, healthcare organizations should establish dedicated committees of multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. These committees should meet regularly to review requests for new AI applications, assess their safety and effectiveness before implementation, and develop processes to monitor their ongoing performance. This ensures that AI systems are thoroughly vetted before being introduced into clinical care.
Third, healthcare organizations should formally train clinicians on AI usage and risk, and be transparent with patients when AI is part of their care decisions. Such transparency is key to building trust and confidence in AI’s role in healthcare. Clinicians should understand the capabilities and limitations of AI systems and how to use them safely and effectively.
Maintaining a Detailed Inventory of AI Systems
Healthcare organizations should also maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks. This involves keeping track of all AI systems in use, including their performance metrics and potential vulnerabilities. Regular evaluations can help identify areas for improvement and ensure that AI systems are functioning as intended.
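The guidance does not prescribe a particular format for such an inventory, but a minimal sketch can make “keeping track” more concrete. The Python structure below is purely illustrative and not drawn from the published guidance; all field names (vendor, clinical_use, performance_metrics, known_limitations) are assumptions about what an organization might choose to record.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of one entry in an AI system inventory.
# Field names are illustrative, not terms from the JAMA guidance.
@dataclass
class AIInventoryEntry:
    name: str                # e.g., a sepsis risk prediction model
    vendor: str              # developer or EHR vendor supplying the system
    clinical_use: str        # where and how the tool is used in care
    deployed_on: date        # go-live date
    last_evaluated: date     # most recent local performance review
    performance_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

# Example record an oversight committee might review periodically
entry = AIInventoryEntry(
    name="Sepsis risk prediction model",
    vendor="Example Vendor",
    clinical_use="Flags inpatients at elevated sepsis risk for nurse review",
    deployed_on=date(2024, 3, 1),
    last_evaluated=date(2025, 1, 15),
    performance_metrics={"sensitivity": 0.87, "positive_predictive_value": 0.31},
    known_limitations=["Not validated for pediatric patients"],
)
```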
In addition, healthcare organizations should develop procedures to turn off AI systems should they malfunction, with smooth transitions back to manual processes so that patient care is not disrupted if an AI system fails.
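Again purely as an illustration, and not something specified in the guidance, the sketch below shows one common way such a shut-off procedure can be wired into a workflow: a flag the organization can flip, plus a fallback to the existing manual process. All names (AI_ENABLED, score_with_ai, manual_review) are hypothetical.

```python
# Illustrative only: a hypothetical "turn it off" pathway with a manual fallback.
AI_ENABLED = True  # flag an organization could flip if the AI system malfunctions

def score_with_ai(patient_record: dict) -> str:
    # Placeholder for a call to a deployed AI model (hypothetical)
    return "ai-generated recommendation"

def manual_review(patient_record: dict) -> str:
    # Placeholder for the pre-existing, clinician-driven process (hypothetical)
    return "routed to manual clinical review"

def assess_patient(patient_record: dict) -> str:
    """Use the AI system when it is enabled; otherwise fall back to manual review."""
    if not AI_ENABLED:
        return manual_review(patient_record)
    try:
        return score_with_ai(patient_record)
    except Exception:
        # On malfunction, revert to the manual workflow rather than blocking care
        return manual_review(patient_record)

print(assess_patient({"id": "example-patient"}))
```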
Shared Responsibility for AI Safety
Implementing AI into clinical settings is a shared responsibility among healthcare providers, AI developers, and electronic health record vendors. As Sittig noted, “By working together, we can build trust and promote the safe adoption of AI in healthcare.” This requires collaboration and communication among all stakeholders to ensure that AI systems are designed and implemented with safety in mind.
Ultimately, ensuring AI safety in clinical care requires a proactive and collaborative approach. By following these guidelines, healthcare organizations and clinicians can help ensure that AI is used safely and effectively to improve patient outcomes.
