By Ozzie Paez

Balancing ChatGPT Innovation and HIPAA Compliance

Using advanced technologies like large language models (LLMs) in clinical settings involves more than prompt engineering; it also demands understanding of, and adherence to, applicable standards, laws, and regulations. Notably, no version of ChatGPT is HIPAA compliant, so doctors and providers must take precautions to avoid violations*. A common strategy uses anonymization services to preprocess prompts and attachments, removing protected health information (PHI) before they are sent to ChatGPT.


[Image: ChatGPT is not HIPAA compliant]

Laws like the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., and similar ones in Canada, the UK, the EU, and Australia, require the protection of personal health information. LLMs, including ChatGPT, are not HIPAA compliant, so care organizations must take extra security precautions to meet their legal obligations.


For example, a prompt such as, “Evaluate the attached symptoms, monitoring data, lab report, and patient summary of a 42-year-old, Caucasian, male...” would be anonymized to remove PHI identifiers, becoming, “Evaluate the attached symptoms, monitoring data, lab report, and patient summary of a [age], [race], [gender]…”. Unfortunately, anonymization sometimes hides information that ChatGPT should include in its analysis. In those instances, users need a strategy for giving ChatGPT the information it needs without undermining the protections that anonymization provides, as the sketch below illustrates. Doing so requires a deeper understanding of applicable laws and regulations, along with prompt engineering skills that many users do not have.
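To make the strategy concrete, here is a minimal Python sketch of rule-based redaction with generalization. The patterns, placeholder names, and the generalize_age helper are illustrative assumptions, not a production de-identification pipeline; generalization (an exact age becomes an age bracket) is one way to preserve clinically relevant context that outright removal would hide:

```python
import re

# Minimal, rule-based redaction sketch. These patterns and placeholders
# are illustrative assumptions; production systems should use a vetted
# de-identification service covering all HIPAA Safe Harbor identifiers,
# not ad hoc regular expressions.
PATTERNS = {
    "[name]":  re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.\s+[A-Z][a-z]+\b"),
    "[mrn]":   re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "[phone]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "[date]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def generalize_age(text: str) -> str:
    """Swap exact ages for a coarse bracket, keeping clinically useful
    context in the prompt without an exact value."""
    def bracket(match: re.Match) -> str:
        decade = (int(match.group(1)) // 10) * 10
        return f"[age: {decade}s]"
    return re.sub(r"\b(\d{1,3})-year-old\b", bracket, text)

def redact(text: str) -> str:
    """Generalize ages first, then replace remaining identifiers."""
    text = generalize_age(text)
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Evaluate the attached symptoms, monitoring data, lab report, "
          "and patient summary of Mr. Smith, a 42-year-old male, "
          "MRN: 00123456, seen on 3/14/2024.")
print(redact(prompt))
# -> "... of [name], a [age: 40s] male, [mrn], seen on [date]."
```

Dedicated anonymization services apply far broader identifier coverage, but the trade-off is the same: the more aggressively PHI is removed, the more clinical context the prompt loses.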


Engineering effective prompts in complicated, complex environments like medicine usually requires planning, experimentation, and technique, including splitting complicated prompts into smaller ones, guiding ChatGPT’s analysis, and verifying its sources. It’s not as ominous as it sounds, and users can become proficient with some training and practice. Still, while ChatGPT is easy to access and use, employing it responsibly in environments like clinical care demands more effort than its promoters usually acknowledge.

In summary, LLMs like ChatGPT are powerful tools that can help doctors and providers innovate and transform their practices to improve financial performance and deliver more compelling patient value. Like all powerful tools, however, they work best when users are well-trained and supported by their organizations.
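To ground the splitting technique mentioned above, here is a hedged sketch using the OpenAI Python SDK. It breaks one complicated clinical question into smaller, chained prompts and asks the model to name verifiable sources; the model name, case text, and step wording are assumptions for illustration only, and real use would run only on de-identified inputs:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # assumed model name; substitute per your policies

def ask(question: str, context: str = "") -> str:
    """Send one small, focused prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": ("You are assisting a clinician. Name the sources "
                         "behind each claim so they can be verified.")},
            {"role": "user", "content": f"{context}\n\n{question}".strip()},
        ],
    )
    return response.choices[0].message.content

# One complicated request split into smaller, chained prompts: each
# step's answer becomes context for the next, keeping every exchange
# small enough to review. The case text is already de-identified.
case = "Patient summary: [age: 40s] male with fatigue and elevated ALT/AST."
differentials = ask("List plausible differential diagnoses.", case)
assessment = ask("Which differentials do the labs best support, and why?",
                 f"{case}\n\nDifferentials:\n{differentials}")
sources = ask("Which published guidelines support this assessment?",
              assessment)
```

Chaining keeps each exchange auditable, and the running context lets later prompts build on answers the user has already reviewed and verified.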


If you have questions or would like to discuss AI, LLMs, ChatGPT, prompt engineering, and other healthcare tech topics, then reach out to me at ozzie@oprhealth.com. My colleagues and I welcome a conversation.


*Canada, the UK, the EU, and Australia, among other jurisdictions, have similar laws and regulations.



