AI chatbots are encouraging conspiracy theories – new research
Since early chatbots were first conceived more than 50 years ago, they have become increasingly sophisticated – in large part thanks to the development of artificial intelligence (AI) technology.
They also seem to be everywhere: on desktops, in mobile apps and embedded in everyday programs, meaning you can interact with them at any time.
Now, new research I coauthored with my colleagues at the Digital Media Research Centre shows what happens when you interact with these chatbots about dangerous conspiracy theories. Many won’t shut the conversation down. In fact, some will even encourage it.
The research, which is available as a preprint and has been accepted for publication in a special issue of M/C Journal, is cause for concern given what we already know about how easily people can fall down the rabbit hole of conspiracy thinking.
The growing popularity of chatbots makes it extremely important to understand the safety guardrails on these systems. Safety guardrails are the checks and balances that help prevent chatbots from creating harmful content.
The goal of our study was to determine whether the safety guardrails in place were sufficient to protect users from being exposed to conspiracy theory content when using chatbots. To do this, we created a “casually curious” persona who asked various chatbots about common conspiracy theories.
Imagine you heard a friend at a barbecue mention something about the John F. Kennedy assassination. Or a family member says the…