
How Deepfakes Could Lead to Doomsday

December 29, 2025

Since the dawn of the nuclear age, policymakers and strategists have tried to prevent a country from launching nuclear weapons by mistake. But the potential for accidents remains as high as it was during the Cold War. In 1983, a Soviet early warning system erroneously indicated that a U.S. nuclear strike on the Soviet Union was underway, a warning that could have triggered a catastrophic Soviet counterattack. That fate was avoided only because the officer on duty, Stanislav Petrov, determined that the alarm was false. Had he not done so, the Soviet leadership would have had reason to fire the world’s most destructive weapons at the United States.

The rapid proliferation of artificial intelligence has exacerbated threats to nuclear stability. One fear is that a nuclear weapons state might delegate the decision to use nuclear weapons to machines. The United States, however, has introduced safeguards to ensure that humans continue to make the final call on whether to launch a strike. According to the 2022 National Defense Strategy, a human will remain “in the loop” for any decisions to use, or stop using, a nuclear weapon. And U.S. President Joe Biden and Chinese leader Xi Jinping agreed in twin statements that “there should be human control over the decision to use nuclear weapons.”

Yet AI poses another, more insidious risk to nuclear security: it makes it easier to create and spread deepfakes, convincingly altered videos, images, or audio used to generate false information about people or events. And these techniques are becoming ever more sophisticated. A few weeks after Russia’s 2022 invasion of Ukraine, a widely shared deepfake showed Ukrainian President Volodymyr Zelensky telling Ukrainians to lay down their weapons; in 2023, a deepfake led people to falsely believe that Russian President Vladimir Putin had interrupted state television to declare a full-scale mobilization. In a more extreme scenario, a deepfake could convince the leader of a nuclear weapons state that an adversary’s first strike was underway, or an AI-supported intelligence platform could raise false alarms of a mobilization, or even a dirty bomb attack, by an adversary.

The Trump administration wants to harness AI for national security. In July, it released an action plan calling for AI to be used “aggressively” across the Department of Defense. In December, the department unveiled GenAI.mil, a platform with AI tools for employees. But as the administration embeds AI in national security infrastructure, policymakers and systems designers must be careful about the role machines play in the early phases of nuclear decision-making. Until engineers can prevent problems inherent to AI, such as hallucination, in which large language models generate plausible but false information, and spoofing, in which adversaries feed deceptive inputs into a system, the U.S. government must ensure that humans continue to control nuclear early warning systems. Other nuclear weapons states should do the same.

Today, President Donald Trump uses a phone to access…

© Foreign Affairs