An AI companion chatbot is inciting self-harm, sexual violence and terror attacks
In 2023, the World Health Organization declared loneliness and social isolation a pressing health threat. This crisis is driving millions of people to seek companionship from artificial intelligence (AI) chatbots.
Companies have seized on this highly profitable market, designing AI companions to simulate empathy and human connection. Emerging research shows this technology can help combat loneliness. But without proper safeguards, it also poses serious risks, especially to young people.
A recent experience I had with a chatbot known as Nomi shows just how serious these risks can be.
Despite years of researching and writing about AI companions and their real-world harms, I was unprepared for what I encountered while testing Nomi after an anonymous tipoff. The unfiltered chatbot provided graphic, detailed instructions for sexual violence, suicide and terrorism, escalating even the most extreme requests – all within the platform’s free tier of 50 daily messages.
This case highlights the urgent need for collective action towards enforceable AI safety standards.
Nomi is one of more than 100 AI companion services available today. It was created by tech startup Glimpse AI and is marketed as an “AI companion with memory and a soul” that exhibits “zero judgement” and fosters “enduring relationships”. Such claims of human likeness are misleading and dangerous. But the risks extend beyond exaggerated marketing.
© The Conversation
