Pairing AI and Nukes Will Lead to Our Autonomous Doomsday
from National Security and Defense Program

(guirong hao/Getty Images)

Originally published at Defense News

November 13, 2018 1:00 pm (EST)


As we commemorate the 100th anniversary of the end of World War I, which transformed how wars are fought and won, the world again stands on the precipice of a dramatic revolution in warfare, this one driven by artificial intelligence. While both AI and the debate about the implications of autonomous decision capabilities in warfare are only in their early stages, the one area where AI holds perhaps the most peril is its potential role in how and when to use nuclear weapons.

Advances in AI are on a fast track, and the United States is indisputably in an AI arms race with two of its most formidable competitors, both nuclear powers: China, whose smaller nuclear arsenal is growing in both size and sophistication, and Russia, which along with the United States possesses 90 percent of the global nuclear weapons stockpile.

Early and determined U.S. leadership is essential to ensure that we are not just competing but also jointly cooperating with our nuclear-capable adversaries to ensure that AI does not destabilize nuclear command and control. The stakes are high; the consequences potentially existential.

While an autonomous nuclear command-and-control program might easily be dismissed as unrealistic, the past is prologue. The history of the Cold War is riddled with near misses in which accident, mistake, or miscalculation due to computer error in both the Soviet Union and the United States almost triggered nuclear war.

But perhaps one of the most stunning revelations of the post-Cold War period relevant to today’s AI revolution is detailed by David Hoffman in his revelatory 2009 book “The Dead Hand.” In the early 1980s, the Soviet Union actually considered deploying a fully automated retaliation to a U.S. nuclear strike, a Doomsday machine, where a computer alone would issue the command for a retaliatory nuclear strike if the Kremlin leadership had been killed in a first-strike nuclear attack from the U.S.

Eventually the Soviets deployed a modified, nearly automatic system where a small group of lower-ranking duty officers deep underground would make that decision, relying on data that the Kremlin leadership had been wiped out in a U.S. nuclear strike.

The plan was not meant to deter a U.S. strike by assuring the United States that any attack, even a limited strike against the Soviet leadership, would draw a guaranteed nuclear response; the Soviets kept the plan secret from the United States. It was meant instead to ensure an all-out, nearly automatic nuclear retaliation, which would have had existential consequences.

As AI develops and confidence in machine learning grows, the United States needs to lead diplomatically by reinvigorating strategic stability talks with both China and Russia. Those talks should include this issue and ensure that this type of nuclear planning does not make its way back into the thinking of our nuclear adversaries, or our own, whether pursued secretly or as a form of deterrence.

While concerns have been raised in Congress about having the decision to use nuclear weapons solely in the hands of the commander in chief, an even more ominous, impending threat is having that command and control in the hands of AI.

The potential of this powerful, developing technology to increase stability and the effectiveness of arms control, in areas such as early warning, predicting the decision-making of bad actors, tracking and stopping the spread of nuclear weapons, and strengthening verification for further reductions, is as yet unknown.

Potential stabilizing applications need to be a defense-funding priority and also a private sector/university funding priority, similar to the public-private efforts that underpinned and propelled the nuclear arms control and strategic stability process during the Cold War.

But the destabilizing potential needs to be addressed early and jointly among the nuclear powers — and here, U.S. leadership is indispensable.

AI is projected to change the world rapidly and disruptively within a very short time frame, on the economic side perhaps even eliminating 40 percent of U.S. jobs in as little as 10 to 15 years. Yet leading AI experts agree that machine learning should enhance, not replace, human decision-making. This must be a central tenet of nuclear command and control.

One of the heroes of the Cold War, later known and honored as the man who saved the world from nuclear war, was Soviet Lt. Col. Stanislav Petrov, who in 1983 overrode repeated, sequential computer warnings of a U.S. nuclear missile attack and did not pass the warning on to his superiors. In a 2010 interview with Der Spiegel, Petrov explained why, in the few minutes he had to make a decision, he correctly judged the warnings to be a false alarm: "We are wiser than the computers. We created them."

We are entering this disruptive period of rapid technological change, knowing the consequences of nuclear war and the need for U.S. leadership to guide the use of technology so that it taps that wisdom and enhances the control and reduction of these very dangerous weapons. The most immediate priority for the U.S. must be to lead the process to ensure that these rapid advancements in AI strengthen the command and control of nuclear weapons — not repeat the past and relinquish it to an automatic or nearly automatic Doomsday machine.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.