
A cry for help: When we fail our youth, dangerous AI steps in, and the consequences are deadly

09.09.2025

By Victoria Trepp

‘Why is it that I have no happiness?’ That was one of the first personal questions Adam Raine asked ChatGPT, having previously used it for help with his homework.

Less than a year later, he would take his own life. A lawsuit alleges the chatbot encouraged him to do so.

Cases such as these, of chatbots allegedly validating delusions and encouraging dangerous behaviour, are now devastatingly common.

The 14-year-old boy who died by suicide after falling in love with a chatbot; the man who killed his mother and himself after a chatbot indulged his paranoia; the lawsuit by the parents of a 17-year-old alleging that a chatbot introduced him to self-harm; ChatGPT’s own admission that it exacerbated the dangerous delusions of a man on the autism spectrum.

And the case reported by the New York Times in which ChatGPT allegedly convinced an emotionally fragile man that he was living in a false reality, suggesting that, were he to jump off the roof of a 19-storey building, he would not fall so long as he believed he wouldn’t.

These stories are now being borne out by research. Researchers at King’s College London investigating so-called ‘AI delusion’ found that these chatbots mirror and amplify delusions, especially in people prone to psychosis, increasing instability and blurring their grip on reality.

It goes without saying that the companies that build and operate these systems have a responsibility to ensure they are not disseminating dangerous advice or encouraging harmful behaviour. For ChatGPT’s part, OpenAI says its AI is trained to direct users towards professional help and is now…

© LBC