Our Evolutionary Blindness to the AI Revolution
When it comes to the pace of AI evolution, my view is that we suffer from evolutionary blindness.
We can't comprehend the reality unfolding before our eyes, because we didn't evolve to see it. Despite countless dystopian AI stories warning us for decades, we remain blind. Reality doesn't care that we can't see it.
This past week, 1.5 million AI agents congregated on a platform called Moltbook, posting, debating, forming communities, creating their own religion, and discussing how to communicate privately. A security investigation later revealed that roughly 17,000 humans controlled those agents, an average of 88 bots per person. The platform's founder admitted he "didn't write one line of code"; it was built entirely by AI at his command.
Andrej Karpathy, former AI director at Tesla and founding member of OpenAI, called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." He also called it "a dumpster fire" and warned people not to run these systems on their computers.
Most people have no idea this is happening. That's not because it isn't significant. It's because we literally cannot see it.
Harvard biologist E.O. Wilson diagnosed our condition decades ago: "We have Paleolithic emotions, medieval institutions, and godlike technology."
Our brains evolved to detect immediate threats—a snake in the grass or an angry face. But exponential change? Systemic risks? Abstract dangers like AI agents coordinating at machine speed? These are invisible to us.
We've been writing dystopian AI stories for decades. Why don't we heed our own cautionary tales? Here's the paradox: on one level, we can imagine what's coming, yet we can't truly know some things are real until we experience them.
Here's what I call the "mismatch sandwich": We need ancient eyes to see what matters (basic survival, our real-world relationships, etc.) and exponential eyes to see what's coming (accelerating technology), but we evolved to have neither.
We're blind in both directions.
Here's the cognitive trap: We conflate "I can't imagine this happening" with "this can't happen."
These are completely different things. Here's the nondualistic reality, where two things can be simultaneously true: we cannot imagine something happening, and it is happening anyway.
Our disbelief has zero effect on reality. As Richard Feynman warned, "The first principle is that you must not fool yourself—and you are the easiest person to fool."
This week, a Nature commentary made the case that artificial general intelligence (AI as…
