
Take It From a Doctor: It’s OK if Your Medical Advice Comes From A.I.

17.02.2026

Dr. Rodman is the director of A.I. programs for the Carl J. Shapiro Center for Education and Research at Beth Israel Deaconess Medical Center in Boston.

As a practicing physician, I know that my patients are using artificial intelligence to get medical advice. Sometimes the signs are subtle, like when they bring me lists of suggested tests and potential diagnoses that would put Dr. House to shame. But mostly they just tell me they consulted “Dr. ChatGPT” before seeing Dr. Rodman. Data suggest that over a third of Americans use large language models for health advice.

As an A.I. researcher, I believe that when used appropriately, these large language models are the greatest tool for empowering patients since the invention of the internet. But they also carry new and barely understood risks, like degrading the relationship that patients have with doctors, or pulling people into spirals of anxiety as they pepper a chatbot with questions. As we take part in this exhilarating new phase of health care, here is what I want my patients to know about using A.I. for their health.

Use A.I. to enhance, but not replace, your medical appointments

One of the best ways I see my patients use A.I. is to better prepare for doctors’ visits. The average patient gets only 18 minutes of face time with their doctor every year. The 21st Century Cures Act ensures that patients have access to their medical notes, but the vast majority never look at them. Those who do may have trouble making sense of the jargon or figuring out what’s important. Worse, inaccurate information from a misdiagnosis or a ruled-out condition may still be in the notes, a phenomenon that doctors euphemistically call “chart lore.”

A.I. can help patients navigate this morass. Let’s say you are going to the doctor because of a bothersome cough. Here’s a tip: Pull up your medical notes and remove all identifiable information. Copy those notes into an A.I. tool and give the model a current update of your health and cough concerns. Then ask the chatbot to concisely summarize all this information. Finally, ask the chatbot: “Given this context about my health, please give me three questions I should ask my doctor about my cough during my upcoming visit.”

Figure out what’s important

A.I. tools are capable of giving expert-level medical advice, but their performance is almost entirely dependent on having the full picture of your health, like any health conditions, your medications and what your daily life is like. Doctors learn in medical school what symptoms and descriptions from patients to home in on. To figure out the most effective way to describe your symptoms, you can ask a chatbot to “interview me as if you’re a doctor”; the question-and-answer process can lead to clearer explanations and also help to exclude other conditions that might cause unnecessary alarm.

The tendency of language models to try to please their users is especially troublesome for people using A.I. to answer health questions. Cyberchondria is a phenomenon in which surfing the web for information about benign symptoms can rapidly lead a person into a rabbit hole of scary possibilities. Because large language models are so aligned to your unconscious desires, they can pick up on what information resonates with you most powerfully and expose you to more of it, mistakenly assuming that it’s what you want. They might, for example, nudge a chat about a stress headache toward a detailed discussion of brain cancer. It’s a bit like how social media algorithms can encourage doomscrolling.


© The New York Times