Could a Computer Stop the President from Launching Nuclear War?

AI is here. 

When Vladimir Putin recently announced the development of Status-6, a nuclear-armed, city-busting underwater drone, he was doing more than brandishing Russia's might.

He was also demonstrating how Artificial Intelligence (AI) is changing the nuclear arms environment, according to a report by RAND. "The drone would be launched from submarines in the Russian Arctic, traverse the ocean at perhaps 100 kilometers (62 miles) per hour, while autonomously circumventing antisubmarine defenses, and deliver its deadly payload to the U.S. coastline, presumably after a villainous American first-strike attack destroying the Kremlin. The difficulty of communicating underwater would require a degree of autonomous capability on the part of the drone that has become possible only recently as a result of progress in AI."

AI has long been a staple of nuclear warfare fiction: an advanced computer nearly triggered a nuclear war in the movie WarGames, and an AI named Skynet nearly wiped out humanity in the Terminator series. Today, there is considerable concern about autonomous drones and the AI behind them, and whether they will be able to kill without human authorization.

In a series of workshops last year, RAND examined several scenarios involving AI and nuclear warfare. Illustrating the complexity of the problem, "participants appeared to agree that advanced AI could severely compromise nuclear strategic stability and thereby increase the risk of nuclear war," yet they could not reach a consensus on exactly how this would happen.

On the one hand, some experts worry that improved AI could enable more accurate targeting and tracking of an adversary's nuclear forces. That, in turn, would stoke fears of a successful first strike, making nuclear trigger fingers all the itchier.

On the other hand, others paradoxically argue that intelligent machines would be better able to ensure compliance with arms control treaties. One group of experts "suggested provocatively that a future AI system could essentially be the arms control regime, monitoring compliance and adjudicating violations without human input," RAND said.

Most interestingly, the study raises the possibility that AI could advise leaders during a nuclear crisis. Rather than relying on emotionally or ideologically driven human advisers, a leader could turn to an AI for calm, data-driven options regarding the use of nuclear weapons.

Then again, advice from a machine is only as good as the machine itself. "Alarmists might be concerned that such a capability could be incorporated before it is sufficiently robust or without fully understanding its limitations," RAND concludes. "If an AI adviser were proven effective, however, it could increase stability by reducing the likelihood of human error and by providing radical transparency, which could reduce the risk of miscalculation. But many experts were concerned by the potential for an adversary to subvert even a very capable AI by hacking, poisoning its training data, or manipulating its inputs."

Michael Peck is a contributing writer for the National Interest. He can be found on Twitter and Facebook.

Image: Wikimedia Commons