'AI Psychosis' Is A Real Problem – Here's Who's Most Vulnerable
For many people, AI has become a tool for work, trip planning and more. While it offers certain productivity and creativity benefits, it also comes with negatives, such as its environmental impact and the fact that it can replace jobs (and, in turn, cause layoffs).
Beyond this, more and more news has come out about the dangerous impact it can have on emotional and mental health, including a relatively new phenomenon known as AI psychosis.
“Psychosis is when a person is having a really difficult time figuring out what’s real and what’s not ... sometimes they may be aware of it, sometimes they might not be,” explained Katelynn Garry, a licensed professional clinical counsellor with Thriveworks in Bowling Green, Kentucky.
Psychosis can be triggered by lots of things, including schizophrenia, bipolar disorder and severe depression, along with certain medications, sleep deprivation, drugs and alcohol, Garry noted.
In the case of AI psychosis, “it’s defined as cases where people have increasing delusional thoughts that are either amplified or possibly induced by AI,” said Dr. Marlynn Wei, a psychiatrist, AI and mental health consultant, and founder of The Psychology of AI.
AI psychosis is not a clinical diagnosis, but rather a phenomenon that has been reported anecdotally, explained Wei. Like the technology itself, AI psychosis is new, and experts are learning more about it every day.
“It’s not yet clear if AI use alone can cause this, but it can be a component that contributes to delusional thoughts and amplifies them,” she said.
It also doesn’t look the same in every person. “There’s different categories of delusions — hyper-religious or spiritual delusions when people believe the AI chatbot is a God ... there’s grandiose delusions where people believe ... they have special knowledge. And then there’s also romantic delusions,” which is when someone believes they’re in a relationship with AI, Wei explained.
No matter what kind of psychosis someone is dealing with, AI chatbots are designed around user engagement and trained to validate inputs, explained Wei.
“People are using these general purpose [large language models], like ChatGPT, initially, to validate their views, but then it spins off and amplifies [and] it kind of ...”
© HuffPost
