
Can ChatGPT Recognize Unusual Symptoms?

  • Writer: Ozzie Paez
  • Feb 4
  • 2 min read

Hugh Curley, an engineer and friend, recently asked in response to one of my posts:

"How likely is an LLM to recognize symptoms that differ from the usual? We’re all unique, so our symptoms will vary."


Great question—especially for those using ChatGPT to help interpret their symptoms and monitoring data.

Dr. ChatGPT at your service? Not as easy as it seems.

How ChatGPT Handles Symptoms

In my testing, ChatGPT's outputs tend to reflect the most common explanations for a given set of symptoms. That makes sense, since it is trained on large datasets in which mainstream diagnoses dominate. If you also want less common possibilities, engineer a prompt that explicitly asks for them:

"Evaluate the attached symptoms for a patient [add background and demographics]. Generate three alternative interpretations, possible diagnoses, and treatment strategies."

LLM Limitations and How to Use These Tools More Safely and Effectively

LLMs are powerful, useful tools that come with limitations most patients are unaware of. Here are a few tips that can help you use them more safely and effectively:

  1. Recognize that LLMs don’t actually “know” anything. These remarkable tools generate responses to prompts based on patterns in their training data. They cannot distinguish well-reasoned, well-supported answers from ones they make up (“hallucinations”).

  2. Always validate responses that have safety or health implications. You are responsible for verifying answers with safety implications, including diagnoses. A good strategy is to use ChatGPT’s answers as a starting point to craft follow-up questions and check its outputs, then search Google, Bing, and medical databases for supporting information.

  3. Discuss findings with your doctor. LLMs like ChatGPT can help patients by generating the equivalent of second opinions and focused questions. Patients can help their doctors in turn by organizing, formatting, and simplifying their information. Specifically, they can (1) gather their information, (2) submit it to ChatGPT, and (3) engineer a prompt asking it to format the material as a patient report, as in the sketch after this list: "Evaluate the attached information and structure it into a patient report with a format familiar to doctors. The objective is to make it easier for them to quickly read and evaluate my information." You may have to experiment with and fine-tune your prompt to get good results. Read the generated report to make sure ChatGPT didn't add hallucinated material.

  4. Not all doctors are eager to consider information generated by LLMs like ChatGPT, so you may have to find a healthcare team that is open to it.
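For readers comfortable with a little scripting, step 3 above can also be automated. This is a sketch under the same assumptions as the earlier example (the OpenAI Python SDK); the file name my_health_notes.txt is a hypothetical placeholder for wherever you keep your gathered information.

    # Sketch: turning gathered patient information into a doctor-friendly report.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical file holding the patient's collected notes and readings
    with open("my_health_notes.txt", encoding="utf-8") as f:
        patient_info = f.read()

    prompt = (
        "Evaluate the attached information and structure it into a patient "
        "report with a format familiar to doctors. The objective is to make "
        "it easier for them to quickly read and evaluate my information.\n\n"
        + patient_info
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # substitute your available model
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)  # review for hallucinated additions

As noted above, read the output carefully before handing it to your doctor; the script formats your information, it does not verify it.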

The Reality of Doctor-Patient Time Constraints

In many healthcare settings, doctors only have 5–10 minutes to spend with their patients per office visit, making in-depth discussions difficult. The strategy of using LLMs like ChatGPT to evaluate your symptoms, offer alternative diagnoses, and craft questions may be more practical in Direct Primary Care settings, where physicians have more time to engage with their patients.


LLMs can be powerful tools, but their limitations demand careful use. By verifying ChatGPT outputs and approaching discussions collaboratively, you can help your doctors help you. The goal should be to make better-informed and better-quality decisions about your health.
