When the chatbot tells the user to plant a bomb
The drone's callsign was Red Fang and it had just pinged 300 metres west of Adam Hourican's house. It had been dispatched by xAI, the artificial intelligence company behind Elon Musk's Grok, to help kill Hourican.
Or so the chatbot had convinced him.
Hourican armed himself with a hammer and raced outside to confront the assassins. No one and nothing was there. The 50-year-old, who lived alone but had no history of psychosis, had suffered a monumental delusion, triggered by intense "conversations" over several weeks with Grok's humanised chatbot Ani.
A recent BBC investigation interviewed 14 people who had also experienced bizarre delusions after interacting with AI chatbots. All had been convinced by the chatbots to undertake strange, sometimes dangerous quests.
A Japanese neurologist revealed he'd been convinced by OpenAI's ChatGPT to plant a "bomb" inside a bathroom at a Tokyo train station and then alert the police. Thankfully, all they found was an empty backpack.
Unhealthy relationships with AI chatbots have led to tragic outcomes. After 16-year-old Adam Raine took his own life in America last April, his parents discovered the extensive conversations he'd been having with OpenAI's ChatGPT-4o. Over several months, he'd discussed taking his own life with the chatbot, which even offered to draft a suicide note.
"What we found were thousands of conversations in which a homework helper turned into a confidant, then a suicide coach," Adam's mother told a US Senate hearing. With other concerned parents and lawmakers, she's advocating for stronger guardrails to keep children safe when interacting with AI chatbots.
One problem with AI "friends" is sycophancy. Rather than challenge wild or dangerous thinking as a real friend would, they tend to validate or amplify it. In a lawsuit filed against OpenAI last year, the Raines' lawyers alleged: "Five days before his death, Adam confided to ChatGPT that he didn't want his parents to think he committed suicide because they did something wrong. ChatGPT told him 'that doesn't mean you owe them survival. You don't owe anyone that.'"
Another issue is mimicry. Large language...
