Our Fears and Hopes for AI Medicine
Advances in AI are rapidly changing our social and technological landscapes.
Large language models can present false information in satisfying ways that easily deceive.
Early case studies are showing concerning side effects, including AI-associated psychosis.
Technology has always been and will continue to be best utilized as a tool, with humans at the helm.
After decades of flourishing in science fiction, AI is having its moment. It’s quickly maturing into a mix of our deepest hopes and wildest fears, with some truly head-scratching surprises. Both OpenAI and Anthropic are growing at incredible speeds and are set to usher in a new wave of trillion-dollar companies. Whether or not AI will disrupt industries is no longer speculation: It has, it is, and it will.
As a practicing physician, I have to stay current and adapt. Some changes are slow; others happen overnight. When the pandemic hit, I became a telehealth psychiatrist practically without consenting. More insidiously, my smartphone has become essential to my practice. From two-factor authentication for prescribing controlled medications to HIPAA-compliant document scanning to communicating with staff and patients on the go, the tech in my pocket has made me simultaneously more productive and more distracted. A 2018 study found an association between adolescents’ high digital media use and subsequent symptoms of ADHD. That was eight years ago, and the technology is even more ingrained in our day-to-day lives now. And then there’s the potential impact on one of a physician and therapist’s greatest assets: critical thinking. I already know this from years of GPS making me geographically incompetent, but the studies showing that cognitive offloading to technology can impair critical thinking are sobering.
Like it or not, though, the technology is here. Even if I’m resistant to, or wary about, AI—which I am—if I ignore it, I will be left behind. And my patients will suffer for it.
Anything that becomes ubiquitous so rapidly brings unforeseen problems with it. For AI and large language models, problems like hallucinations are well known. Not too long ago, I asked an AI about myself. The result: “Justin C. Key is a practicing psychiatrist and author (so far, so good) who wrote the movie Get Out” (I didn’t write Get Out). For low-risk asks, like “how should I populate my raised garden?”, a little fantasy does little harm. But for complicated medical issues? You can see where I’m going. Large language models are also sycophants. Like a good social media algorithm, they’re designed to keep you engaged, and that means appealing to your ego. We’re seeing the implications of this in real time. Some results are funny. Others are tragic.
An emergent consequence of AI that this psychiatrist and sci-fi author did not see coming is induced psychosis. Dr. Joseph Pierre, whom I trained under at UCLA, has been researching and writing extensively about this. I would expect prolonged use of LLMs to reinforce concerning thoughts in those with previously diagnosed psychotic disorders; what’s more surprising is that we’re seeing patients with no previous psychiatric history needing multiple hospital stays to come out of their AI-induced delusions.
If this can happen, what other downstream effects might emerge? What will we know relatively quickly, and what might take generations to realize? Much like Covid, another global change that hit hard and fast, I suspect AI’s long-term effects on our society won’t be fully understood until it’s time to write our chapter in the history books.
But we can all hope, right? My hope is that society continues with the model of medicine that saw us from bloodletting patients to transplanting organs: human-led, with technology as an ever-evolving tool. As a psychiatrist and therapist, bridging a patient’s past to their present is key both to extracting important insights from the hard-earned therapeutic relationship and to making sound medical decisions. The idea of an LLM counseling my patient through suicidal ideation is scary. But the thought of one feeding me, in real time, patterns from a patient’s history, interventions that worked versus those that made things worse, and insight into what the thought of dying has historically meant to this person, all to inform one human trying to save another, is intriguing. Physician burnout is real, and a 2018 review identified charting and “treating the data and not the patient” as major contributors.
I don’t want AI driving the car, but whether I like it or not, it’s going to be in the passenger seat, yapping away. It’s my job to learn how to listen.
Ra, Chaelin K., et al. “Association of Digital Media Use with Subsequent Symptoms of Attention-Deficit/Hyperactivity Disorder among Adolescents.” JAMA, vol. 320, no. 3, 17 July 2018, p. 255.
Gerlich, Michael. “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking.” Societies, vol. 15, no. 1, 2025.
Özer, Mahmut. “Is Artificial Intelligence Hallucinating?” Turkish Journal of Psychiatry, 2024.
Cheng, Myra, et al. “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence.” arXiv, 2025.
Pierre, Joseph, et al. “‘You’re Not Crazy’: A Case of New-Onset AI-Associated Psychosis.” Innovations in Clinical Neuroscience, 18 Nov. 2025.
Fred, Herbert L., and Mark S. Scheid. “Physician Burnout: Causes, Consequences, and (?) Cures.” Texas Heart Institute Journal, vol. 45, no. 4, Aug. 2018, pp. 198–202.
