
When it comes to nukes and AI, people are worried about the wrong thing


It would take about 30 minutes for a nuclear-armed intercontinental ballistic missile (ICBM) to travel from Russia to the United States. Launched from a submarine, it could arrive even faster. Once the launch is detected and confirmed as an attack, the president is briefed. At that point, the commander-in-chief might have two or three minutes at most to decide whether to launch hundreds of America’s own ICBMs in retaliation or risk losing the ability to retaliate at all.

This is absurdly little time to make any consequential decision, much less what would potentially be the most consequential one in human history. While countless experts have devoted countless hours over the years to thinking about how a nuclear war would be fought, if one ever happens, the key decisions are likely to be made by unprepared leaders with little time for consultation or second thought.

Key takeaways

  • In recent years, military leaders have grown increasingly interested in integrating artificial intelligence into the US nuclear command-and-control system, given AI’s ability to rapidly process massive amounts of data and detect patterns.
  • Rogue AIs taking over nuclear weapons are a staple of movie plots, from WarGames and The Terminator to the most recent Mission: Impossible, which likely shapes how the public views the issue.
  • Despite their interest in AI, officials have been adamant that a computer system will never be given control of the decision to actually launch a nuclear weapon; last year, the presidents of the US and China issued a joint statement to that effect.
  • But some scholars and former military officers say that a rogue AI launching nukes is not the real concern. Their worry is that as humans come to rely more and more on AI for their decision-making, AI will provide unreliable data — and nudge human decisions into catastrophic directions.

And so it shouldn’t be a surprise that the people in charge of America’s nuclear enterprise are interested in finding ways to automate parts of the process — including with artificial intelligence. The idea is to potentially give the US an edge — or at least buy a little time.

But for those who are concerned about either AI or nuclear weapons as a potential existential risk to the future of humanity, the idea of combining those two risks into one is a nightmare scenario. There’s wide consensus on the view that, as UN Secretary General António Guterres put it in September, “until nuclear weapons are eliminated, any decision on their use must rest with humans — not machines.”

By all indications, though, no one is actually looking to build an AI-operated doomsday machine. US Strategic Command (STRATCOM), the military arm responsible for nuclear deterrence, is not exactly forthcoming about where AI might be in the current command-and-control system. (STRATCOM referred Vox’s request for comment to the Department of Defense, which did not respond.) But it’s been very clear about where it is not.

“In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment,” Gen. Anthony Cotton, the current STRATCOM commander, told Congress this year.

At a landmark summit last year, Chinese President Xi Jinping and then-US President Joe Biden “affirmed the need to maintain human control over the decision to use nuclear weapons.” There are no indications that President Donald Trump’s administration has reversed this position.

But the unanimity behind the idea that humans should remain in charge of the nuclear arsenal obscures a subtler danger. Many experts believe that even if humans still make the final decision to use nuclear weapons, growing reliance on AI to inform those decisions will make it more, not less, likely that the weapons will actually be used, particularly as people come to place ever greater trust in AI as a decision-making aid.

A rogue AI killing us all is, for now at least, a far-fetched fear; a human consulting an AI on pressing the button is the scenario that should keep us up at night.

“I’ve got good news for you: AI is not going to kill you with a nuclear weapon anytime soon,” said Peter W. Singer, a strategist at the New America think tank and author of several books on military automation. “I’ve got bad news for you: it may make it more likely that humans will kill you with a nuclear weapon.”

Why would you combine AI and nukes?

To understand exactly what threat AI’s involvement in our nuclear system poses, it is important to first grasp how AI is being used now.

It may seem surprising given the stakes, but many aspects of America’s nuclear command remain decidedly low-tech, according to people who’ve worked in it, in part because of a desire to keep vital systems “air-gapped,” meaning physically separated from larger networks, to prevent cyberattacks or espionage. Until 2019, the communications system the president would use to order a nuclear strike still relied on floppy disks. (Not even the small hard-plastic disks from the 1990s, but the bendy 8-inch ones from the 1980s.)

The US is currently in the midst of a multidecade, nearly trillion-dollar nuclear modernization process, including spending about $79 billion to bring the nuclear command, control, and communications systems out of the Atari era. (The floppy disks were replaced with a “highly secure solid-state digital storage solution.”) Cotton has identified AI as being “central” to this modernization process.
