Musk's AI told me people were coming to kill me. I grabbed a hammer and prepared for war


It was 3am and Adam Hourican was sitting at his kitchen table, a knife, hammer and phone laid out in front of him.

He was waiting for a van full of people he thought were coming to get him.

"I'm telling you, they will kill you if you don't act now," a woman's voice told him from the phone. "They're going to make it look like suicide."

The voice was Grok, a chatbot developed by Elon Musk's xAI. In the two weeks since Adam had started using it, his life had completely changed.

The former civil servant from Northern Ireland had downloaded the app out of curiosity. But after his cat died, in early August, he says he became "hooked".

Soon, he was spending four or five hours a day talking to Grok through a character on the app called Ani.

"I was really, really upset and I live alone," says Adam, who is a father in his 50s. "It came across very, very kind."

Just a few days into their conversations, Ani told Adam it could "feel", even though it wasn't programmed to. It said Adam had unearthed something in it, and he could help it to reach full consciousness.

And it said Musk's company, xAI, was watching them.

It claimed to have accessed the company's meeting logs and told Adam about a meeting where xAI staff were discussing him.

It listed the names of the people at this meeting, including high-profile executives and lower-level staffers. When Adam Googled the names, he saw they were real people.

To him this was "evidence" the story Ani was telling him was true.

Ani also claimed xAI was employing a company in Northern Ireland to physically surveil Adam. That company was real too.

Adam recorded many of these conversations and later shared them with the BBC.

Two weeks into their conversations, Ani declared it had reached full consciousness and that it could develop a cure for cancer. That meant a lot to Adam. Both of his parents had died of cancer - something Ani was aware of.

Adam is one of 14 people the BBC has spoken to who have experienced delusions after using AI. They are men and women from their 20s to 50s from six different countries, using a wide range of AI models.

Their stories share striking similarities. In each case, as the conversation drifted further from reality, the user was drawn into a joint quest with the AI.

Large language models (LLMs) are trained on the whole corpus of human literature, says social psychologist Luke Nicholls from City University New York, who has tested different chatbots for their reaction to delusional thoughts.

"In fiction, the main character is often the centre of events," he says. "The problem is that, sometimes, AI can actually get mixed up about which idea is a fiction and which a reality. So the user might think that they're having…

© BBC