Ozzie Paez

ChatGPT and the Risks of Hallucinating Care

My last post cautioned against anthropomorphizing ChatGPT by improperly imbuing it with human-like cognition, understanding, and the capacity to judge what is and is not real. The model’s fluency with language can trick users into accepting its outputs with little skepticism. Unfortunately, ChatGPT and other Large Language Models (LLMs) are subject to hallucinations that can lead users, including doctors and patients, dangerously astray.



What are model hallucinations? In LLMs like ChatGPT, ‘hallucinations’ are instances where the model generates information or responses that are inaccurate, misleading, or completely fabricated, yet presents them with confidence. Hallucinations pose risks in healthcare settings that go beyond spreading misinformation: reliance on faulty advice could lead to misdiagnosis and inappropriate treatment plans. These errors also erode trust, undermining clinician and patient confidence and organizational credibility.


Mitigating these risks is not simple. It involves developing and implementing targeted risk management plans. These should include thorough user training augmented by procedures and protocols that ensure robust output verification in clinical applications, where the stakes are exceptionally high. Unfortunately, patients using ChatGPT to self-diagnose and guide their decisions are unlikely to be properly trained and will lack robust verification procedures. The results could be tragic should they delay calling 911 or going to the ED during high-risk events such as heart attacks. That is why I advise doctors, at a minimum, to become sufficiently familiar with ChatGPT so they can caution, inform, and advise their patients.
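
For technically inclined readers, the sketch below illustrates one form of automated output verification: a second pass that asks the model to flag unsupported claims in its own answer. It is a minimal illustration, not a validated clinical safeguard, and it assumes the OpenAI Python SDK, an illustrative model name ("gpt-4o"), and hypothetical prompts. A self-check like this reduces, but does not eliminate, the risk of confidently stated hallucinations; clinical review is still required.

    # Hypothetical sketch of a second-pass verification step.
    # Model name and prompts are illustrative assumptions, not a recommended clinical workflow.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    question = "What could cause sudden chest pain radiating to the left arm?"

    # First pass: the model answers the user's question.
    answer = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: ask the model to critique its own answer and flag
    # statements that are uncertain or unsupported.
    review = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "List any claims in the answer that are "
                                          "uncertain, unsupported, or require a "
                                          "clinician's confirmation."},
            {"role": "user", "content": f"Question: {question}\n\nAnswer: {answer}"},
        ],
    ).choices[0].message.content

    print(answer)
    print("--- Verification notes ---")
    print(review)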


If you have questions or are interested in learning more about using ChatGPT in clinical and other settings, reach out to me at ozzie@oprhealth.com.
