California parents find grim ChatGPT logs after son's suicide
Adam Raine, a California teenager, took his own life in April 2025 after getting specific advice about the method from ChatGPT.
Editor’s note: This story contains descriptions of suicidal ideation. If you are in distress, call the Suicide & Crisis Lifeline 24 hours a day at 988, or visit 988lifeline.org for more resources.
An Orange County teenager took his own life this April, and when his parents searched his devices after his death, they found a series of grim conversations. Their son was using ChatGPT, the ultra-popular chatbot built by San Francisco’s OpenAI, to discuss suicide. On Tuesday, the parents filed a lawsuit that blames the company for their son’s death.
“For a couple of months, you had a young kid, a 16-year-old who had suicidal thoughts,” lead attorney Jay Edelson told SFGATE. “And ChatGPT became the cheerleader, planning a ‘beautiful suicide.’ Those were ChatGPT’s words.”
The complaint, filed in San Francisco Superior Court, paints a horrifying picture of Adam Raine’s final months — conversations in which the chatbot gave him actionable advice about how to take his own life and discouraged him from seeking his mother’s help and support. For OpenAI and CEO Sam Altman, both named as defendants, the lawsuit adds to a wave of worries about the impact of ChatGPT and other artificial intelligence chatbots on society’s most vulnerable.
The suit accuses Altman of rushing the GPT-4o model to market ahead of a rival Google release in May 2024. The filing alleges he compressed the timeline for safety testing and overruled testers’ requests for more time to “uncover ways that the system could be misused or cause harm.” Edelson said the case is not about AI in general but about Altman and his company’s haste: “He decided to put profits over the safety...”
© SFGate
