AI cyberwarfare has outgrown deterrence: Why US national defense doctrine must change
The modern architecture of national cyberdefense rests on a flawed assumption: that deterrence, the same strategic logic that governed nuclear standoffs and conventional conflicts in the 20th century, can be effectively applied to an adversary that does not think, feel, or negotiate. This assumption is not merely outdated; it is dangerously inadequate for the realities of AI-driven conflict.
When the White House unveiled its latest cybersecurity strategy, it signaled awareness of escalating digital threats. Shortly afterward, the US State Department announced the creation of the Bureau of Emerging Threats, a move that, on its face, suggests institutional adaptation. Yet beneath these developments lies a deeper problem: the doctrine guiding these efforts remains rooted in a paradigm that no longer applies.
The central flaw is conceptual. Traditional deterrence operates on the premise that adversaries can be influenced through fear, cost imposition, or negotiation. That logic collapses when the adversary is not a human actor but a self-propagating system. Malware does not fear retaliation. Autonomous code does not respond to sanctions. AI agents, once deployed, do not reconsider their objectives because a diplomatic channel has opened.
This is the defining asymmetry of modern cyber conflict. A human adversary can be targeted, disrupted, or eliminated. But the systems they unleash, being adaptive, replicating, and increasingly autonomous, continue to operate independently of their creators. Neutralizing the origin point does not neutralize the threat. In some scenarios, it may even accelerate it.
A glimpse of this dynamic emerged during the so-called “12 Day War of 2025,” in which strategic decisions were driven not only by present capabilities but by projected future intelligence. The targeting of an Iranian AI researcher was reportedly justified not by what he had already achieved, but by what he was expected to develop within months. This reflects a shift toward anticipatory conflict: preemptive action taken against potential capability rather than immediate threat.
However, such tactics are inherently limited. In a landscape where AI systems can be…
