
OpenAI's dark side: ChatGPT accused of causing suicide, murder

30.08.2025

"I know what you're asking, and I won't look away from it."

Those final words to a California teenager about to commit suicide came not from some manipulative friend in high school or a sadistic voyeur on the Internet. Adam Raine, 16, was speaking to ChatGPT, an AI system that has replaced human contacts in fields ranging from academia to business to media.

The exchange between Raine and the AI is part of the court record in a potentially groundbreaking case against OpenAI, the company that operates ChatGPT. It is only the latest lawsuit against the corporate giant run by billionaire Sam Altman.

In 2017, Michelle Carter was convicted of involuntary manslaughter after she urged her friend, Conrad Roy, to go through with his planned suicide: "You need to do it, Conrad... All you have to do is turn the generator on and you will be free and happy."

The question is whether, if Michelle were named Grok (another AI system), there would also be some form of liability. OpenAI stands accused of an arguably more serious act in supplying a virtual companion who effectively enabled a suicidal teen, with lethal consequences.

At issue is the liability of companies that use such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held accountable for their virtual agents.

In response to the lawsuit, OpenAI insists that "ChatGPT is trained to direct people to seek professional help" but "there have been moments where our systems did not behave as intended in sensitive situations." Of course, when the company "trains" an…

© The Hill