
Eurasia Review Interviews: AI, Agentic Influence, And Cognitive Security In The Age Of Synthetic Consensus

12.02.2026

An interview with Dr Benjamin Delhomme, AI policy and cognitive warfare expert, former Senior Expert on AI at the NATO Strategic Communications Centre of Excellence.

Artificial intelligence (AI) is frequently discussed in terms of deepfakes, synthetic media, viral deception, and broader misinformation, disinformation and malinformation (MDM) campaigns. Yet emerging research suggests that the more consequential shift may lie elsewhere: not in individual pieces of misleading content, but in the engineering of social environments themselves.

In this conversation with EurAsia Review, Dr Benjamin Delhomme examines how large language models (LLMs) and multi-agent systems are reshaping the operational logic of influence. He argues that detection-led approaches are structurally inadequate in adversarial environments, and that the core challenge is not distinguishing true from false content, but preserving the integrity of public discourse under conditions of synthetic amplification.

Drawing on his experience within the NATO ecosystem and his analysis of LLM governance, Dr Delhomme explores the limits of human oversight, the risks of identity-based lockdowns, the emerging practice of large language model grooming, and the broader implications for democratic accountability.

Rather than focusing on isolated incidents, this interview approaches influence as an environmental phenomenon in which engineered perceptions of consensus, narrative anchoring, and data governance increasingly shape the boundaries of cognitive security.

Q. Public debate around AI and democracy still tends to focus on fake content — deepfakes, misleading posts, or deceptive media. Recent research on malicious multi-agent “swarms” instead frames the threat as environmental, rooted in synthetic social dynamics that fabricate consensus and distort perceived norms. From your perspective, what fundamentally changes when the unit of manipulation is no longer a message, but the social environment itself?

A: This question hits the mark. Most people are still fixating on the obvious: the message that everyone can see, often amplified by bot networks.

Since LLMs entered the public space, I have focused on the impact at the bottom of the social network, where human attention is highest. People face a growing threat of manipulation by AI agents within their trusted inner circles. Because this happens in a trusted environment, these agents can fabricate a consensus that looks organic.

When I was working at the NATO StratCom COE, I referred to this as the ‘AI Latent Threat’. Whether we call it that or ‘Malicious AI Swarms’, as I read recently in a paper, the fundamental change remains the same: we are moving from a battle over the truth of a message to a battle over the perception of reality. You can fact-check a message, but you cannot fact-check a social environment that has been engineered to look like everyone around you has changed their minds. This results in a constant loss of control over our narratives. This is part of what we now call Cognitive Warfare.

Q. You have consistently emphasised the importance of keeping human oversight central to how AI systems are designed and governed. In practice, what does meaningful human oversight look like when influence operations can be conducted by autonomous, coordinating agents that adapt in real time across platforms? Where do current institutional assumptions about “human-in-the-loop” models begin to break down?

A: First, we have to be careful with the word ‘AI.’ It has become a marketing term, effectively meaning nothing, as there is still no generally accepted definition of intelligence.

If we specifically talk about LLMs, most people still struggle with the fact that these models do not reason; they predict outcomes based on training on reasoning paths. So, where does the ‘human-in-the-loop’…

© Eurasia Review