Swarms of AI bots can sway people’s beliefs – threatening democracy
In mid-2023, around the time Elon Musk rebranded Twitter as X but before he discontinued free academic access to the platform’s data, my colleagues and I looked for signs of social bot accounts posting content generated by artificial intelligence. Social bots are AI programs that produce content and interact with people on social media. We uncovered a network of over a thousand bots involved in crypto scams. We dubbed this the “fox8” botnet, after one of the fake news websites it was designed to amplify.
We were able to identify these accounts because the coders were a bit sloppy: They did not catch occasional posts with self-revealing text generated by ChatGPT, such as when the AI model refused to comply with prompts that violated its terms. The most common self-revealing response was “I’m sorry, but I cannot comply with this request as it violates OpenAI’s Content Policy on generating harmful or inappropriate content. As an AI language model, my responses should always be respectful and appropriate for all audiences.”
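This kind of telltale leakage can be caught with a simple keyword scan over collected posts. Here is a minimal sketch in Python; the phrase list and function names are illustrative, not the actual detection pipeline we used:

```python
# Flag posts containing self-revealing AI refusal text.
# The phrases below are illustrative examples drawn from common
# ChatGPT refusal language, such as the response quoted above.
SELF_REVEALING_PHRASES = [
    "as an ai language model",
    "i cannot comply with this request",
    "content policy on generating harmful",
]

def is_self_revealing(post_text: str) -> bool:
    """Return True if the post contains a known AI self-disclosure phrase."""
    text = post_text.lower()
    return any(phrase in text for phrase in SELF_REVEALING_PHRASES)

# Example: scan a small batch of posts and keep the suspicious ones.
posts = [
    "Crypto is going to the moon!",
    "I'm sorry, but I cannot comply with this request as it violates "
    "OpenAI's Content Policy on generating harmful or inappropriate content.",
]
flagged = [p for p in posts if is_self_revealing(p)]
```

A real pipeline would need more robust matching (case folding, curly versus straight apostrophes, paraphrased refusals), which is exactly why, as noted below, careful coders can evade this kind of check.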
We believe fox8 was only the tip of the iceberg because better coders can filter out self-revealing posts or use open-source AI models fine-tuned to remove ethical guardrails.
The fox8 bots created fake engagement with each other and with human accounts through realistic back-and-forth discussions and retweets. In this way, they tricked X’s recommendation algorithm into amplifying their posts, accumulating substantial followings and influence.
Such a level of coordination among inauthentic online agents was unprecedented – AI models had been weaponized to give rise to a new generation of social agents, much more sophisticated than earlier social bots. Machine-learning tools to …
