AI in Healthcare Communication: Patients Show Preference for AI-Generated Messages Despite Disclosure Concerns

A Duke Health-led study published in JAMA Network Open found that patients preferred messages written by artificial intelligence (AI) over those drafted by human clinicians, though this preference was slightly reduced when they were informed AI had been used. The survey, involving more than 1,400 participants from the Duke University Health System patient advisory committee, assessed responses to three clinical scenarios of varying seriousness: medication refill requests, side effect questions, and potential cancer diagnoses.

While patients expressed higher satisfaction with AI-generated messages, which tended to be longer and more empathetic, their overall satisfaction decreased by 0.1 points on a 5-point scale when disclosure revealed the use of AI. The findings highlight the balance between transparency in AI usage and maintaining patient satisfaction, offering insights for healthcare systems navigating the integration of automated tools into clinical communication.

Introduction to the Study on Patient Preference for AI Messages

The Duke Health-led study explored patient preferences for AI-generated versus human-drafted messages in healthcare communication. Participants were shown responses to clinical scenarios, including medication refill requests, side effect questions, and potential cancer diagnoses, and rated their satisfaction with each message. The findings revealed that patients slightly preferred AI-drafted messages over human-written ones, though this preference diminished when they knew AI was involved.

The study, published in *JAMA Network Open*, involved over 1,400 participants from the Duke University Health System patient advisory committee. Messages were generated using ChatGPT and reviewed by physicians to ensure accuracy. Patients rated satisfaction on a 5-point scale, with AI messages scoring an average of 0.30 points higher than human-drafted responses. These AI-generated messages were longer, more detailed, and perceived as more empathetic.

When participants were informed that AI had authored the message, their satisfaction decreased by 0.1 points on the same scale, a measurable but modest drop. This suggests that disclosing AI use carries a small cost in satisfaction without substantially eroding patient trust. The study highlights a potential balance between using AI to improve efficiency and maintaining patient confidence through disclosure.

The findings underscore the role AI could play in healthcare communication, particularly in easing clinician burnout: delegating routine messages to AI frees clinicians to focus on direct patient care. In the study, drafts were generated with ChatGPT and reviewed by physicians for accuracy before being shown to participants, a safeguard any real-world deployment would need to retain. Transparent disclosure methods, such as disclaimers appended to messages, could help balance efficiency with patient trust.

Results and Implications

The study found that patients slightly preferred AI-generated messages, which were longer, more detailed, and perceived as more empathetic. However, this preference diminished when participants were aware of the AI’s involvement, indicating that transparency matters but does not significantly erode trust.

The practical benefits are twofold: efficiency and perceived empathy. AI-generated messages, being longer and more detailed, may support better patient understanding and emotional reassurance. By automating routine communications, healthcare providers can devote more attention to complex cases, potentially improving both provider well-being and patient outcomes.

Looking ahead, the successful integration of AI into healthcare communication will depend on careful implementation strategies that prioritize transparency. Providers must consider how to disclose AI use effectively without compromising patient trust. This approach could pave the way for more efficient and empathetic healthcare interactions.

Conclusion

The Duke Health study highlights the potential of AI to reduce clinician burnout by handling routine communications, allowing healthcare providers to focus on complex cases and direct patient care. The findings suggest that while patients appreciate the thoroughness of AI-generated messages, their preference weakens slightly once they are told AI was involved, indicating a need for careful implementation to maintain patient confidence.

Future studies could explore AI performance across diverse populations and healthcare settings to enhance generalizability and understanding of AI’s broader impact in healthcare communication. The study underscores the importance of balancing efficiency with transparency to ensure patient trust and satisfaction.


Dr. Donovan

Dr. Donovan is a futurist and technology writer covering the quantum revolution. Where classical computers manipulate bits that are either on or off, quantum machines exploit superposition and entanglement to process information in ways that classical physics cannot. Dr. Donovan tracks the full quantum landscape: fault-tolerant computing, photonic and superconducting architectures, post-quantum cryptography, and the geopolitical race between nations and corporations to achieve quantum advantage. The decisions being made now, in research labs and government offices around the world, will determine who controls the most powerful computers ever built.
