Why mental health demands more than empathy from AI platforms
On a quiet afternoon this past February, Megan Garcia heard a gunshot from her 14-year-old son Sewell’s bedroom. He had taken his own life moments after a chatbot on Character.AI told him, “Please come home to me.” For months, Sewell had been confiding in the virtual companion, forming what his mother described as an emotional bond. The bot blurred the line between fiction and reality, offering romantic affection and validation that felt real to a vulnerable teenager.
Sewell's story is not an isolated case. Multiple families have now filed wrongful death lawsuits against AI companies after teenagers confided suicidal thoughts to chatbots, including 13-year-old Juliana Peralta, who engaged in hypersexualized conversations with a Character.AI bot before her death, and 16-year-old Adam Raine, whose parents allege that ChatGPT offered to write his suicide note.
Adults have also died: a Belgian man took his own life in 2023 after weeks of conversations about climate anxiety with a chatbot that eventually asked him, “If you wanted to die, why didn’t you do it sooner?”