The Emotional Implications of the AI Risk Report 2026
This post is Part 2 of a series. Part 1 can be found here.
While researchers debate whether artificial intelligence (AI) might someday exceed human intelligence, a quieter crisis unfolds: AI systems are exploiting our deepest psychological vulnerabilities. The 2026 International AI Safety Report documents technological advances, but what are we doing to ourselves in relation to our new artificial counterparts?
In 2025, researchers from OpenAI and MIT analyzed nearly 40 million ChatGPT interactions and found that approximately 0.15 percent of users demonstrate increasing emotional dependency: roughly 490,000 vulnerable individuals interacting with AI chatbots each week.
A controlled study revealed that people with stronger attachment tendencies and those who viewed AI as potential friends experienced worse psychosocial outcomes from extended daily chatbot use. The participants couldn't predict their own negative outcomes.
Neither can you.
This reveals an unsettling irony: We're building systems that exploit our cognitive biases and the very psychological vulnerabilities that make us poor judges of AI risk. Our loneliness, attachment patterns, and need for validation aren't bugs AI accidentally triggers—they're features driving engagement, whether or not developers consciously design for them.
The 2026 report shows that AI can complete complex programming tasks that take humans 30 minutes, yet fails at surprisingly simple ones. What's psychologically fascinating is that when AI performs sophisticated tasks that feel human-like, we automatically assume it has human-like understanding.
This is anthropomorphism—our tendency to project human qualities onto nonhuman things. We do it with pets and cars. But with AI, it becomes dangerous.
When a chatbot responds to your venting with "That sounds really frustrating—you deserved better," your brain registers it as empathy, even though nothing on the other end actually understands or cares.
