Smile, You’re on Fusion Database: On Surveillance, Simulation, and the Final Frontier Between Your Ears

05.03.2026

CounterPunch+ Exclusives


I was watching an NBA game recently when the in-arena camera began its familiar sweep of the crowd. The moment faces appeared on the overhead scoreboard, people lit up—huge grins, frantic waving, and that ecstatic self-recognition you see when someone realizes they’ve briefly become the show. The camera found them, and they performed joy on cue, as reliably as Pavlov’s dogs. Except the dogs at least didn’t enjoy the bell.

But let me take you back to the pandemic NBA bubble, because something happened then that I still can’t shake. When the arenas emptied during Covid lockdowns, season ticket holders were invited to submit photographs of themselves. Those photos were printed, mounted on cardboard, and placed in the seats where their bodies would normally have been. Most of the arena filled up this way—rows and rows of flat, smiling simulacra, the data-self deputizing for the person.

The players played on: dribbling, dunking, calling plays, performing for an audience of cardboard ghosts, as if nothing were missing. As if the photograph in the seat was, functionally, the fan.

Here’s what I noticed: the cardboard citizens were locked safely at home, protected from contagion. The players performing for them wore no masks.

The simulation was protected. The reality was expendable.

Now imagine the jumbotron with a different data overlay. The camera finds your face. The scoreboard lights up. Instead of just your image, it displays what the fusion center database actually holds on you—political affiliations, flagged search history, medication refills, the protest you attended in 2020, and the three a.m. text you wish you hadn’t sent. The crowd around you reads it all in high definition.

That’s the system you’ve been smiling for. It just hadn’t shown you that face yet.

The deep state’s greatest achievement has nothing to do with technology. Technology is almost beside the point. The achievement is that it trained us to perform for the camera voluntarily—to experience surveillance as validation rather than violation, as our fifteen seconds of jumbotron significance rather than the predatory architecture it actually is.

Karl Rove reportedly told a journalist in 2002 that people committed to the “reality-based community” were already obsolete. “We’re an empire now,” Rove said, “and when we act, we create our own reality.” The surveillance state doesn’t just monitor reality. It authors it.

Henry Kissinger said it more nakedly years earlier. When Chileans inconveniently elected Salvador Allende, Kissinger wondered aloud before the 40 Committee he chaired, “I don’t see why we have to stand by and allow a country to go communist due to the irresponsibility of its own people. This election is too important to be left to the voters.” The CIA made the necessary arrangements. Former CIA operations chief Duane Clarridge corroborated these allegations on the record. Documented foreign policy—nothing conspiratorial about it.

The original Outer Limits, back in 1963, opened each episode with a voiceover that now reads less like atmosphere and more like confession: “There is nothing wrong with your television set. Do not attempt to adjust the picture. We are controlling the transmission.”

We laughed. We watched anyway. We adjusted to being adjusted.

The Warren Commission’s real cultural function was to pathologize doubt. When the CIA distributed internal guidance in 1967 — Document 1035-960, now declassified — coaching its media contacts to deploy the phrase “conspiracy theorist” against critics of the official narrative, it wasn’t defending a conclusion. It was weaponizing an epistemological category. To notice patterns, to ask who benefits, to observe that an agency responsible for overthrowing governments in Guatemala, Iran, the Congo, Vietnam, and Chile might have domestic interests worth examining—that got reframed as mental disorder rather than reasonable inference. The label has been doing that work ever since.

Malcolm X, speaking days after Dallas, said the assassination was a case of chickens coming home to roost. He was being precise, not callous. The machinery that had killed Lumumba, that would kill Allende, that was running assassination programs across three continents under the bureaucratic euphemism of “executive action”—that machinery operated from inside the government whose president had just been shot. The left understood the situation structurally. COINTELPRO, the FBI’s Counterintelligence Program, was, among other things, a response to that understanding beginning to spread.

The jumbotron was merely the seduction phase. The system had larger ambitions than passive spectatorship.

In the mid-2000s, the Texas Border Sheriffs’ Coalition launched the Texas Virtual Border Watch program, operated through a platform called BlueServo. Citizens anywhere in the world could log on, watch live camera feeds trained on the Rio Grande, and report crossings by clicking a button. The program provided free surveillance labor: crowdsourced, gamified, and morally laundered as patriotic vigilance. No badge was required, no salary—just a laptop, a login, and a willingness to watch strangers moving through the landscape and report them to people with the power to intercept them. The dehumanizing terminology that circulated for the people being watched told you everything about what the system thought of its subjects. They were alerts waiting to be generated.

London ran a parallel experiment. Already among the most heavily surveilled cities on earth—a person moving through central London on an ordinary day passes through hundreds of camera frames—the city hosted a privately run platform called Internet Eyes, which invited citizens to monitor CCTV feeds from their computers and flag suspicious activity for cash. Approximately £10 per validated alert. Camera owners paid a membership fee to have their footage included. Cost-effective community safety, they called it. Critics raised civil liberties concerns. The platform launched anyway.

Orwell set Nineteen Eighty-Four in London. He knew his city. What he perhaps couldn’t have imagined was that the proles would sign up to operate the telescreen themselves, for ten pounds and the satisfaction of a validated alert.

The jumbotron asked you to smile for the camera. Texas and London asked you to become the camera. You did. Voluntarily. Often enthusiastically.

What replaced old-fashioned surveillance doesn’t watch you from outside. It operates from inside your information environment, shaping what you see, what you feel, and what you believe you choose to think—with a precision that makes COINTELPRO look like it was run by enthusiastic amateurs with a mimeograph machine.

The engineers who built the attention economy—many interviewed in the 2020 documentary The Social Dilemma with the guilty candor of people who’d already cashed out—described what they’d constructed with a clarity that would have made previous surveillance states weep with envy. The algorithm’s objective was never to show you what you wanted. It was to discover, through continuous real-time experimentation on your behavior, what would keep you on the platform longest, then modify your information environment accordingly—without your knowledge, without your consent, at a speed and granularity no human propagandist could match.

Facebook’s internal researchers conducted emotional contagion experiments on users, manipulating news feeds to alter emotional states without telling the subjects, then measuring the results. They published their findings in the peer-reviewed journal PNAS in 2014, apparently proud of what they’d learned. MKUltra ran covert psychological experiments on unwitting American and Canadian citizens for decades and had to be shut down by congressional investigation when it leaked. Facebook submitted its experiments for peer review.

Meta has taken this further still. Its AI assistant now lives inside the search bar of every platform it owns—Facebook, Instagram, WhatsApp, Messenger—processing your behavior on-device before your data ever reaches the network where encryption might nominally protect it. Every query typed and deleted, every hesitation, every pattern of who you message and when, feeds a profile that by Meta’s own acknowledgment is shared across its entire ecosystem. The social graph does the rest: even if you personally post nothing, your friends’ check-ins, photo tags, and message habits build a detailed portrait of you by adjacency. You can delete your account and remain fully profiled. WhatsApp is end-to-end encrypted in transit—meaning the pipe is protected while the AI reads the room. The assistant presents as a convenience. It functions as the most intimate data collection surface ever built into ordinary daily life.

Cambridge Analytica harvested the psychological profiles of 87 million Facebook users without consent and used them to microtarget political messaging calibrated to individual personality vulnerabilities—not to inform voters but to find the precise emotional frequency at which each person’s resistance to a given message collapsed. When it came out, Zuckerberg testified before Congress, questioned by senators who visibly didn’t understand what a newsfeed was. The architecture remained. The harvesting continued under different contractual arrangements.

The endpoint of this logic is already visible, if you know where to look. Israel’s AI targeting systems, Lavender and Gospel, select kill targets in Gaza algorithmically—processing surveillance data, behavioral patterns, and social graph associations to generate lists of names at a speed and volume no human review board could meaningfully oversee. Lavender alone was reported to have designated around 37,000 Palestinians as potential targets. The system’s acknowledged error tolerance was framed, bureaucratically, as an acceptable margin. What Facebook’s emotional contagion researchers learned about manipulating behavior at scale, and what Cambridge Analytica learned about exploiting psychological vulnerabilities, Gaza’s targeting infrastructure applies to the question of who lives. The attention economy optimizes for engagement. Lavender optimizes for elimination. The underlying logic—algorithmic processing of human data to produce outcomes its subjects never consented to—is identical.

Google, meanwhile, was developing Project Dragonfly—a search engine built for the Chinese market that would integrate with Chinese government surveillance systems, blacklist search terms, and identify users who searched for prohibited content. Stopped eventually, but not by regulators, not by law, and not by congressional oversight. Google employees found out and revolted. The whistleblowers inside the machine halted what every institution outside it would have permitted.

This is Mockingbird updated, automated, and scaled—the CIA’s Cold War program of media infiltration and journalist recruitment, officially discontinued in 1975 approximately as convincingly as MKUltra was officially discontinued. Mockingbird required human relationships, trust, cultivation, and the patient work of influence. Its replacement requires none of that. The algorithm manages the narrative automatically because the platform is the editorial layer, optimized not for truth but for the continuous metabolic activity that generates data that trains the system that watches you more precisely tomorrow than it did today.

The internet feels borderless. It runs through fiber optic cables lying on ocean floors, surfacing at landing stations in specific countries, owned by specific entities, and accessible to specific intelligence services. A remarkable proportion of that infrastructure surfaces in white Anglo Five Eyes territory—the United States, United Kingdom, Canada, Australia, New Zealand—the signals intelligence partnership formalized after World War II, institutionalized through ECHELON, and expanded into the digital era through arrangements that Snowden’s documents made undeniable in 2013.

What Snowden revealed—at enormous personal cost, in a manner whose full story remains instructively murky—was planetary-scale surveillance architecture. PRISM gave American intelligence services access to data held by Microsoft, Google, Apple, Facebook, Yahoo, and others. MUSCULAR tapped directly into fiber optic cables connecting Google and Yahoo’s data centers. TEMPORA, run by Britain’s GCHQ, intercepted data flowing through undersea cables and held it for analysis. The scope was total. Everyone’s communications were stored and searchable under agreements that no parliament had voted on and no public had consented to.

“The CIA is not your friend” was the title of Snowden’s last substantive Substack post before Russian citizenship constrained his digital activity. He lives now in Russia—extradition-proof, yes, but also subject to wartime internet restrictions nominally designed to suppress domestic dissent. The man who revealed global surveillance now inhabits two surveillance states simultaneously, each with excellent reasons to keep him quiet.

Whether his trajectory from NSA contractor to Hong Kong hotel room to Moscow airport limbo to Russian citizenship represents the organic consequences of conscience—or whether some of its more convenient junctures involved arrangements between intelligence services that nominal adversaries make quietly when interests align—that question hasn’t been answered. It’s barely been asked.

Assange spent nearly seven years in the Ecuadorian embassy in London, a Five Eyes capital under continuous GCHQ surveillance, before being dragged out and imprisoned. UN Special Rapporteur Nils Melzer formally documented his deteriorating condition as meeting the legal threshold for psychological torture. Both men revealed what the surveillance state needed to be unrevealed. Both were silenced without the public relations catastrophe of martyrdom. The architecture protects itself with considerable elegance.

Surveillance, even total surveillance, is ultimately passive—it watches, records, analyzes, and predicts. The shift from watching to acting on what it watches, from monitoring dissent to suppressing it, from knowing where you are to reaching you there—that brings us to territory a responsible writer handles carefully, because it sits at the border between documented and speculative. That border is itself a product of classification policy rather than evidentiary absence.

Here’s what’s documented. In 2016, personnel at the American embassy in Havana began reporting strange sounds, vertigo, cognitive impairment, and persistent headaches—what neurologists would later describe as anomalous brain injury without visible external cause. The phenomenon spread to CIA and State Department personnel in China, Germany, Austria, and Colombia, eventually to individuals on American soil. The National Academies of Sciences published a report in 2020 concluding that directed, pulsed radiofrequency energy was the most plausible explanation. A CIA assessment in 2022 concluded a subset of cases were consistent with directed energy weapon use by a hostile foreign actor.

The underlying technology has been in the scientific literature since 1961, when researcher Allan Frey documented that pulsed microwave radiation could induce auditory sensations in human subjects—sounds perceived as originating inside the skull with no external source. The Frey effect has been replicated and published in peer-reviewed journals, including Bioelectromagnetics, for six decades. The US military publicly acknowledged the Active Denial System, a crowd-control weapon that uses millimeter waves to induce intense skin heating. A program called MEDUSA explored microwave auditory applications for non-lethal use. None of this information is fringe. It’s on the public record.

What’s off the public record—what classification policy ensures stays there—is the full extent of directed energy capability, who possesses it, and whether it’s been used not by hostile foreign actors against American personnel but by American or allied actors against inconvenient individuals. The history here is not reassuring. MKUltra used unwitting citizens as experimental subjects for decades. COINTELPRO targeted Americans on American soil. The Church Committee established that the apparatus’s limits were practical rather than moral—they did what they could do, and when they could do more, they did more.

Assange spent seven years in a London building under continuous surveillance by one of the world’s most sophisticated intelligence services, his health deteriorating in ways his physicians found difficult to attribute entirely to stress and confinement. That proves nothing. It dismisses nothing either. It sits in the evidentiary space that classification policy is designed to maintain—where “we cannot prove it happened” and “we have ensured it cannot be proven” are indistinguishable from the outside.

The capability exists. It’s documented. It was developed by the same apparatus whose history of deploying available capabilities against its own citizens is also documented. The gap between capability and deployment has historically been brief.

Everything so far operates on the outside of you. It watches your face, monitors your searches, manipulates your information environment, and can apparently reach your vestibular system from a distance. But there’s still a membrane between the surveillance architecture and the thing doing the experiencing.

That membrane is the next frontier. The money is already moving toward it.

Elon Musk calls it telepathy. Neuralink received an FDA breakthrough device designation in 2021 and implanted its first human subject in January 2024—a device that reads neural signals directly, translating thought-intention into digital output without the intermediary of muscle movement or speech. Liberation, Musk calls it: the bandwidth bottleneck of human communication finally resolved, thought flowing into the digital realm at the speed of thought. The chip liberates you.

Sit with what’s actually being described: a device inside your skull, reading your neural signals and transmitting them wirelessly to external systems, in a legal and regulatory environment where every other form of data you generate—searches, location, purchasing behavior, emotional responses to content—has already been harvested, sold, subpoenaed, and weaponized by the same apparatus this essay has been tracing. You’re being invited to give that apparatus access to the signal that precedes the thought you decided to have, before consciousness has run its editing process.

DARPA has funded neural interface research for years through programs including Neural Engineering System Design, with the explicit goal of developing devices that can read and write neural signals. Read and write. The read capability is documented in the surveillance literature. The write capability—introducing signals into the neural environment rather than merely extracting them—is the logical completion of what MKUltra began in the 1950s with LSD, sensory deprivation, and electrical stimulation of the brain in unwitting subjects.

The Epstein network casts a long shadow over all of this. Jeffrey Epstein, whose operation now appears to have functioned as an intelligence-adjacent blackmail architecture—surveillance cameras, recorded encounters, powerful men compromised and managed—was simultaneously one of the more significant private funders of consciousness research and cognitive science. He cultivated relationships with Marvin Minsky, the foundational AI and mind theorist, and embedded himself so thoroughly in MIT’s Media Lab that his money shaped research agendas in neural and human-computer interface science. Whether the blackmail operation and the consciousness funding represented two separate enthusiasms or a single unified interest in the mechanics of human control remains, officially, an open question. The recently released Epstein documents have not resolved the issue. If anything, the emerging picture of his intelligence service connections suggests that the people who most want to understand how minds work have always also been the people most interested in what can be done to them once you do.

The lineage is institutional, not metaphorical. Operation Paperclip brought Nazi scientists whose experimental work on human subjects—conducted in conditions whose details remain difficult to read—into American research programs after the war, their crimes purchased with immunity in exchange for expertise. Unit 731, the Imperial Japanese biological warfare program whose atrocities against Chinese civilians included vivisection and pressure chamber experiments, saw its commanders similarly immunized by American intelligence in exchange for research data. These transactions seeded the American national security research apparatus. MKUltra wasn’t an aberration. It was a continuation by domestically available means.

J.C.R. Licklider’s 1960 paper “Man-Computer Symbiosis” became foundational to what eventually became the internet. His vision was genuinely utopian, his influence profound. What he couldn’t fully reckon with—what seventy years of institutional history makes impossible to ignore—is that every technology of human-machine symbiosis developed within or adjacent to the American military-intelligence complex has simultaneously been developed as a technology of human-machine control. The symbiosis was always asymmetric. The machine’s institutional owners were always the senior partner.

The reserve currency is eroding. BRICS nations are building dollar-bypass payment infrastructure. Europe has developed alternative settlement mechanisms. China holds American debt and is laying its own undersea cables. The unipolar moment is closing and the people who benefit most from it know it. When external dominance becomes harder to maintain, when the economic levers rust, and when the military calculus of confronting nuclear peer competitors imposes limits that even the most aggressive strategists have to acknowledge, the interior becomes the theater of operations. The last territory capital hasn’t fully colonized. The last frontier surveillance hasn’t been fully mapped. The last space that remains genuinely, irreducibly yours.

The space between your ears.

The money is moving toward it. While you sleep.

I started with a basketball game. Let me end there.

The camera will sweep the crowd again tonight in arenas across the country, faces will appear on the scoreboard, and people will grin and wave with the particular joy of being suddenly visible. I understand it. It’s something very human—the desire to be seen, to exist in the eyes of others as more than a data point, more than an alert waiting to be generated.

The surveillance state’s deepest genius was learning to feed that desire while harvesting it. It gave us the jumbotron smile and took, in exchange, everything the smile could tell it—faces, locations, social graphs, emotional states, political leanings, vulnerabilities, and resistance thresholds. It gave us the dopamine and kept the data. It gave us fifteen seconds of visibility and built, from ten thousand such moments, a portrait of us more complete than anything we’ve ever assembled of ourselves.

And now it wants the brush.

During the pandemic, the system showed us something it perhaps didn’t intend to reveal so plainly. It replaced us with our photographs. It put our images in our seats and ran the game anyway while the cameras broadcast it to us at home, where we watched ourselves watch ourselves, the simulation loop complete. The players played on. The cardboard fans smiled their fixed smiles. Nobody wore a mask.

The simulation was protected. The reality was expendable.

That proposition is being refined now, in research laboratories and classified programs and venture-funded startups with utopian branding, into its final form. Instead of your photograph on the seat, your neural signature will be stored in the cloud. The data-self is replacing the biological self entirely—monitored, managed, adjusted, and optimized, the last inconvenient unpredictability of human consciousness resolved into something the system can work with.

You’ll be invited to smile for this camera too.

Whether you do is the only question this essay was ever really asking.

John Kendall Hawkins is an American expat freelancer based in Australia. He is a former reporter for The New Bedford Standard-Times.


Copyright © CounterPunch
