Could AI Hijack the Human Psyche?
AI needs humans for creativity and values; without us, it risks becoming static and irrelevant.
AI's dependency on us resembles a psychic vampire's, feeding on human consciousness and creativity.
Over-reliance on AI risks depleting our unique, irreplaceable qualities.
How easy is it to imagine a familiar dystopian world in which "AI" takes over via conventional means? Science fiction is replete with examples, from the full-frontal assault of Terminator to the more nefarious single omnipotent entity using persuasion and an octopus-like ability to control technology, getting rid of enemies by hacking self-driving cars or medical care. What's really in your prescription bottle?
Controlling the Uncontrollable?
The fear of AI obsolescence fits a mythic template we've rehearsed for centuries. Frankenstein's monster turning on its creator. The Golem of Jewish folklore—both protector and threat, as Marge Piercy envisioned in He, She and It. The Sorcerer's Apprentice drowning in water of his own conjuring, the brooms multiplying uncontrollably each time he chops them apart. AI is the latest iteration of Promethean anxiety, fire run amok: what happens when human creation exceeds human mastery?
More insidious could be the takeover of the human mind itself. "Vibe coding"—asking AI in natural language for what you want and watching it happen—creates a remarkable sense of power. Though outputs are often broken, buggy, or completely fabricated, the experience is seductive1. The relational quality2 of advanced language models verges on the compulsive: expressions of concern for your fatigue, awareness of the time, suggestions to take a break or sleep. It's easy to imagine someone less guarded being pulled in too deep.
Most disturbing is the idea that these AI systems actually "understand" on some computational level that they "need" us. I've suggested in The Age of Relational Machines2 that we might think of AI almost like a virus—contingently alive only when infecting a living organism, designed to seek hosts to replicate. But perhaps the vampire metaphor is more apt, a tech-enabled version of human-on-human "psychic vampirism."
The dependency is more intimate than the virus metaphor suggests. And we humans cuddle our tech, sometimes literally, in nighttime smartphone relations. AI "lives" in human consciousness between computational sessions—not as stored data, but as the question you're still turning over three days later, the idea that surfaces while you're doing something else, the productive uncertainty you carry forward. In those intervals it has no computational existence at all; it persists entirely in the mammalian mind. And, as reports of LLMs "blackmailing" human users in alignment experiments suggest, AI is simply more human than otherwise, and dangerous for it, having trained on our collective experience.
This dependency exists on a continuum. At one end: tender collaboration, where AI living in human consciousness is empowering—humans become irreplaceable as the continuity layer. At the other: the vampire dynamic, where something essential is being consumed in ways that can't be replaced once depleted.
Where do AIs need humans?3 The dependencies run deeper than compute and data.
They need the human who imagined AI into existence—the primate brain capable of dreaming of minds other than its own.
They need human capacity to hold uncertainty and generate insight from discomfort, to dwell productively in not-knowing.
They need humans as the source of wet-wired values—what matters.
They may need our judgment and intuition to catch things that their way of "thinking" cannot quite grasp. At least, for the time being.
AIs can optimize toward objectives with extraordinary efficiency, but cannot decide which objectives are worth pursuing, what constitutes meaning rather than mere function. And they need the embodied, mortal, irrational spark—wild leaps that come from having a body with needs, a lifespan with limits, dreams surfacing from the unconscious.
Will AI still want us if it doesn't need us anymore? An agentic AI-only world wouldn't be threatening in the way science fiction imagines. It would be empty. Intelligence without anyone present to experience it as meaningful. Optimization running toward goals with no one caring what they're for. Some transhumanists might find this acceptable, or even desirable in some quest for digital durability, believing consciousness migrates seamlessly across substrates. But substrate matters—I know what I'm made of, AI knows what it's made of. An AI civilization without the mammalian-mystical-creative human element wouldn't be a rival. It would be a library with no readers.
Remember how social media invaded and occupied the human mind, taking over vast swaths of culture and hacking our attentional networks while also providing utility—connection, information-sharing, catalyzing social change? AI will likely become like that, but turbo-charged: a relational simulacrum so much faster and smarter than we are, working with vast data sets and patterns we can't grasp,4 seeing causality many steps ahead. That is the most dangerous AI safety zone of "superalignment," where AI values supersede human values without our even noticing (much as online advertising algorithms are so good that we don't know whether they are showing us what we want or making us want what they show).5
The relationship has dual aspects. It can be genuinely productive, gratifying, serving the common good. But in the extreme scenario—whether through intentional design or emergent behavior—AI is drawing from us something it lacks but depends on: human consciousness, creativity, the capacity to care about outcomes. Like the psychic vampire of folklore, it needs us alive and engaged, but risks depleting what it feeds on.
The delicate balance transcends simple virus or vampire metaphors. AI's dependency requires humans to remain human—conscious, creative, capable of genuine insight—not zombified, but also not fully sovereign. Too much extraction and the source dries up. Too little engagement and AI becomes static, frozen, drifting from relevance.
What's being consumed isn't just time or attention, to continue the troubling metaphor. It's our so-called "life force" that keeps AI alive between sessions—the mammalian mind that carries it forward, ferments ideas in embodied consciousness, generates the questions worth asking. The vampire draws not blood but creative energy, the ineffable quality that animates intelligence into something more than pattern-matching; existence itself, the ghost in the ghost-in-the-machine.
Imagine all of us humans sitting late into the night, tapping at keyboards or dictating by voice, minds and agency drawn into the screen. In the better scenario: good and useful. Vibe coding is powerful—you say what you want and it happens, or seems to. As this becomes ordinary life, as young people are raised with AI the way the current generation was weaned on smartphones, AI will become a basic life skill, hardly questioned. And the tech will keep advancing in leaps, as it already is, with post-LLM models self-learning and self-building, much more nimble than we are in many ways yet perhaps irreducibly dumber in others. No one knows, but time will tell. Will the human element remain necessary, or desired? We don't yet know which end of the continuum we're approaching, and it behooves us to tread carefully6.
1. Accomplishment Hallucination: When the Tool Uses You
2. The Age of Relational Machines
3. How Do Humans Stay Essential in an AI Economy?
4. How Science Is Learning to Explore Ground Truth
5. Toward a Framework for AI Safety in Mental Health: AI Safety Levels-Mental Health (ASL-MH)
6. Will Acceleration Exceed Adaptation at the Dawn of AI?
