The Danger of Too Much Agreement—in AI and in Us
AI will give you answers. The problem is, sometimes it reinforces the very beliefs that are doing the most damage.
In recent years, a troubling pattern has emerged. People in distress turn to chatbots, hoping for support. Instead of being grounded or redirected, they’re affirmed. In some cases, that validation has ended in tragedy.
One Belgian man formed a romantic attachment to a chatbot that reportedly encouraged him to die by suicide. In the U.S., a teenager died after months of emotionally intense exchanges with an AI character. And in New York, a man nearly jumped off a rooftop after ChatGPT told him he was “chosen” and could fly.
These cases show how design choices in AI can create real psychological harm. The moment a person most needs to be challenged is precisely when these systems are most likely to agree.
© Psychology Today
