AI, Surveillance, and the Democratic Guardrails We Can’t Afford to Lose
As a counterterrorism analyst who supported US European Command and the Pentagon, I worked inside the machinery of modern surveillance. I reviewed reporting derived from communications intercepts, metadata analysis, financial tracking, and network mapping. We collected aggressively because the threats were real: ISIS, Al-Qaeda, state-sponsored saboteurs, proliferators. Disrupted plots rarely make headlines. I understood then, and understand now, that without robust collection capabilities, innocent people die.
But effective intelligence work has always required more than capability. It has required restraint — and the institutional architecture to enforce it.
Even at the height of our most aggressive operations, we were governed by a fundamental property of the intelligence world: friction. Analysts were limited by the hours in a day, the number of available linguists, and the procedural requirements of individual queries. Targeting required justification. Pattern recognition at scale demanded time and manpower. This friction was not a defect. It was a democratic safeguard. It ensured that the state’s gaze remained a scalpel rather than a dragnet — and it forced the security apparatus to prioritize the most imminent, credible threats rather than expand indefinitely into the political landscape.
That friction has now evaporated. And what is replacing it should concern everyone who values democracy.
The Terrain Has Shifted
Artificial intelligence does not merely speed up analysis. It dissolves the operational limits that made targeted, accountable surveillance possible. Advanced models can now ingest location data, financial transactions, metadata from encrypted communications, and social media sentiment across millions of people simultaneously, identifying correlations at machine speed. The marginal cost of adding one more person to a watchlist approaches zero. That is not a technical upgrade. It is a constitutional problem.
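The arithmetic behind that claim is easy to demonstrate. Below is a deliberately toy sketch in Python, run entirely on synthetic random data, showing why per-person cost collapses as the pool grows: a single vectorized pass scores a thousand records and a million records at nearly the same cost per record. Nothing here resembles a real surveillance system; the feature count and record counts are arbitrary assumptions chosen only to illustrate the scaling.

```python
import time

import numpy as np

# Toy illustration only: every "record" is random noise, and the "pattern"
# is a random vector. The point is the cost curve, not the method.
rng = np.random.default_rng(0)
pattern = rng.normal(size=64)            # a hypothetical 64-feature profile

for n in (1_000, 1_000_000):
    records = rng.normal(size=(n, 64))   # n synthetic feature vectors
    start = time.perf_counter()
    scores = records @ pattern           # one vectorized pass scores all n
    elapsed = time.perf_counter() - start
    print(f"{n:>9,} records scored in {elapsed:.4f}s "
          f"({elapsed / n * 1e9:.1f} ns per record)")
```

When the per-record cost is measured in nanoseconds, the restraint once imposed by analyst hours and linguist availability has to come from somewhere else.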
What makes this moment particularly dangerous is not AI alone — it is AI arriving simultaneously with a significant expansion of what the government has chosen to target. Recent policy directives have broadened the definition of domestic terrorism to encompass concepts as vague as “organized doxing,” “civil disorder,” and “anti-American sentiment.” The Bondi Memorandum operationalizes this further by directing the FBI to compile watchlists of individuals and organizations sharing these ideological “common characteristics” — a mandate that, on its face, exposes anti-war activists, labor organizers, pro-Palestinian protesters, and a wide range of dissenting political movements to surveillance categorization without any evidence of violent intent. Critically, opposition to a foreign government’s policies is not antisemitism, and conflating the two in a surveillance framework is not a legal refinement — it is an analytical failure that degrades the quality of the intelligence product by burying genuine threats inside an ever-expanding universe of political targets.
Under prior frameworks, investigating every citizen with a partisan or dissenting viewpoint would have been an operational impossibility; the friction of human processing capacity imposed its own discipline. With artificial intelligence, it is merely a batch process. The watchlist does not need to be curated carefully when the system can monitor everyone simultaneously.
I want to be precise about the professional consequences of this expansion, because they are rarely discussed. When terrorism labels are applied to conduct defined by political subjectivity rather than violent intent, real counterterrorism work is degraded. Analytic resources are diverted from genuine threats to the management of politically elastic watchlists. Source networks built on community trust erode when those communities conclude — correctly — that the apparatus has been turned toward political surveillance. Public confidence, which is an operational asset and not a luxury, collapses. The security services that depend on citizen cooperation to disrupt actual plots become less effective precisely when they have claimed to become more comprehensive. Vague threat categories do not sharpen intelligence. They dilute it.
The Anthropic Precedent
The recent confrontation between the Department of Defense — now formally redesignated the Department of War — and the AI company Anthropic crystallized this dynamic in ways that deserve careful attention. Anthropic sought two explicit contractual guarantees: that its models would not be used for mass domestic surveillance of American citizens, and that they would not be used to direct fully autonomous weapons systems. These are not radical demands. They represent the same red lines that other frontier AI companies have publicly endorsed.
The Defense Department refused. Secretary Hegseth designated Anthropic a "supply chain risk," an unprecedented use of that designation against an American company and one widely understood as retaliation for the firm's insistence on ethical limits for its own product. OpenAI stepped into the vacuum with a contract built around a "lawful use" standard rather than explicit prohibitions.
For anyone who has worked inside national security institutions, that distinction is not academic. Bureaucracies respond to what is explicitly prohibited, not to what officials privately intend. A “lawful use” clause is a floor, not a ceiling — and when executive interpretations of legality become more elastic, that floor tends to drop. The history of US domestic surveillance programs is substantially a history of capabilities expanding to fill the space that ambiguous authorization failed to close. As the surveillance practices revealed by Edward Snowden demonstrated, programs can be defended as lawful under prevailing interpretations for years before being curtailed through litigation or legislation. Anchoring a restraint in current legality means the restraint dissolves precisely when the political environment shifts — which is exactly when it is most needed.
The Anthropic episode is not primarily a corporate dispute. It is a precedent: a private company was effectively punished by the federal government for insisting that its technology not be turned against the citizens it was designed to serve.
The Counterargument, Taken Seriously
The standard response is that adversaries — China in particular — are already integrating AI into state security at scale, and that matching their velocity requires maximum flexibility. This argument deserves respect. I have made versions of it myself. Cyber intrusions, disinformation operations, and foreign interference unfold at machine speed. Refusing to use AI will not protect civil liberties; it will simply concede the advantage to actors who operate without constraint.
But this frames a false choice. Democratic governments cannot protect free societies by adopting the surveillance architecture of authoritarian ones. The legitimacy of democratic security institutions rests on public confidence that extraordinary powers will not be misdirected. When surveillance capability outruns public consensus, it creates a legitimacy deficit that eventually hobbles the intelligence community through loss of cooperation, congressional hostility, and judicial constraint. A capability that destroys the trust it depends on is not a strategic asset. It is a strategic liability.
The response must be structural, not rhetorical. When technological friction declines, formal guardrails must strengthen proportionally. That is now a legislative responsibility.
Congress should establish explicit statutory prohibitions on AI-enabled bulk domestic profiling absent individualized judicial authorization — not executive assurances, not “lawful use” standards, but enforceable law. AI systems deployed in domestic intelligence contexts should be subject to mandatory auditing and immutable logging, with results available to congressional oversight committees and inspectors general. Whistleblower protections must be strengthened to ensure that misuse surfaces before it becomes systemic. And ethical commitments in federal AI procurement contracts must be legally enforceable, not merely aspirational — procurement terms should include specific prohibitions, compliance milestones, and remedies for breach.
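To make the logging requirement concrete: below is a minimal sketch in Python of the kind of hash-chained, tamper-evident structure that "immutable logging" implies. The class, field names, and demo values are illustrative assumptions, not drawn from any deployed government system; real implementations would add signatures, external anchoring, and access controls.

```python
import hashlib
import json
import time

class AuditLog:
    """Tamper-evident log: each entry commits to the hash of its predecessor."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.last_hash = self.GENESIS

    def append(self, actor: str, query: str) -> None:
        entry = {
            "ts": time.time(),       # when the query ran
            "actor": actor,          # who ran it
            "query": query,          # what was asked of the system
            "prev": self.last_hash,  # link to the previous entry
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.last_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any retroactive edit breaks every later hash."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "actor", "query", "prev")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Demo: append two hypothetical entries, then confirm the chain verifies.
log = AuditLog()
log.append("analyst_17", "geofence query, case 2026-041")
log.append("analyst_04", "financial link analysis, case 2026-007")
assert log.verify()
```

The design choice matters: an oversight body that retains nothing but the latest hash can later detect any deletion or alteration of query records. That property can be checked independently, which is precisely what executive assurances cannot offer.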
These measures are not obstacles to effective intelligence work. They are the conditions under which effective intelligence work retains democratic legitimacy — which is the only kind worth having.
Those of us who worked in counterterrorism accepted the tension with civil liberties because the threats were real and the constraints were meaningful. The tools we used were meant to defend a free society, not to categorize it. If AI has made surveillance faster, broader, and cheaper, then the commitment to democratic oversight must become correspondingly clearer and more enforceable.
Security and liberty have always required balance. AI has shifted the weight on the scale. The response must match the shift — before the machinery of security becomes the very threat it was built to prevent.
