Stop the “Stop the Killer Robot” Debate: Why We Need Artificial Intelligence in Future Battlefields
from Net Politics and Digital and Cyberspace Policy Program

Efforts to ban military use of artificial intelligence internationally are built on erroneous assumptions and would have an adverse effect on the ability of law-abiding nations to defend themselves.
U.S. Secretary of Defense Lloyd Austin speaks at the Pentagon in July 2021. Austin has been an advocate for the use of AI in military systems. Ken Cedeno/Reuters

For nearly a decade, the “Stop the Killer Robots” debate has dominated public conversation about military applications of artificial intelligence (AI). However, as Marine Corps Lieutenant General Michael Groen pointed out on May 25, the role AI is expected to play in defense is far more diverse, and far less dramatic, than many people envision when calling for an international ban on lethal autonomous weapons systems.

Although the humanitarian concerns motivating these efforts are laudable, many of the arguments rest on erroneous assumptions and speculation, divorced from practical and operational considerations. Governmental experts are scheduled to meet again at the United Nations in July to continue this debate, while the U.S. Department of Defense plans to update its autonomous weapons guidance. It is time to reconsider the value of the “Stop the Killer Robot” campaign: a ban would not only be ineffective but would also harm law-abiding nations, depriving them of the ability to defend themselves against rogue states and malicious actors, and, worst of all, would actually be inhumane.

 

Existing Law 


Those advocating a ban on lethal autonomous weapons systems commonly raise concerns about predictability and reliability. Human Rights Watch, for example, doubts that fully autonomous weapons would be capable of meeting international humanitarian law standards, such as the principles of distinction and proportionality. However, these are technological challenges that demand technological solutions, not reasons to curtail technological development by imposing a ban.

There is an existing body of international law that addresses these concerns. An autonomous weapons system is already prohibited if it behaves unpredictably or performs unreliably. It is even a war crime to use such a weapon knowing that it cannot reliably be directed against lawful military targets in a discriminate manner. Likewise, existing laws already prohibit the use of autonomous systems designed to exterminate a group of people based on race, gender, nationality, or any other grounds. A new treaty would add nothing to existing law if it is concerned simply with indiscriminate or egregious use of autonomous systems.

Some commentators appeal to moral imperatives against the dehumanization of warfare, asserting that autonomous systems are incapable of making complex ethical choices on the battlefield. However, there is a wide array of weapons systems, such as surface-to-air missiles and loitering munitions, that can operate without any human in the loop once activated. None of these weapons is prohibited, notwithstanding their inability to make complex ethical choices before detecting and engaging targets.

Furthermore, it is important to understand that autonomous functions alone do not cause any harm. It is the weapons payloads carried on autonomous systems that enable them to produce lethal effects, and whether these systems are equipped with lethal payloads, and with what kind of ammunition, is necessarily a human decision. Developing autonomous systems therefore creates no accountability gap: through the decision-making of commanders and weapons operators, each state remains accountable for authorizing and directing weapons with autonomous capabilities, for how those weapons are used, and for the operational environment in which they are employed.

 

Geopolitical Reality 

The call for an international ban on lethal autonomous weapons systems also fails to account for the geopolitical reality currently prevailing in international relations. International weapons regulation is feasible only when states share a political interest in it. The current geopolitical climate is not conducive to any intergovernmental agreement that constrains a state’s ability to exploit autonomous systems for strategic and operational advantage.


Even if a treaty banning lethal autonomous weapons systems were adopted, such an instrument would either be so ambiguous that it has no meaningful effect on state behavior or, worse, would stifle technological developments that could bring significant humanitarian benefits to future battlefields. Its effectiveness would also be limited because many of the states investing heavily in this technology are unlikely to agree to such a ban.

Civil society groups may well persuade some states to adopt a treaty ban, but it would not be legally binding on states that are not party to it. Those that agree to the ban, or adopt self-restraint as a matter of policy, would deprive themselves of access to a technology that could prove decisive in a future conflict where the speed and accuracy of warfighting are critical.

 

Humanitarian Benefits  

Although the debates around lethal autonomous weapons systems are often framed as humanitarian issues, we should not lose sight of the significant humanitarian benefits these systems are expected to bring to the battlefield. Indeed, AI-enabled autonomy has great potential to mitigate the risk of human error, serving as an additional oversight tool in targeting operations.

For example, on-board sensors feeding real-time imagery, combined with information-sharing across swarms, will provide additional technological means of verifying military targets. This could enable autonomous systems to suspend an attack maneuver when those sensors detect the presence of civilians or a mismatch in target information.
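To make the kind of oversight logic described above concrete, consider the minimal sketch below. It is purely hypothetical: the data fields, confidence threshold, and function names are assumptions made for exposition and do not describe any existing or proposed system.

```python
# Hypothetical sketch of a sensor-based abort check; all names and
# thresholds are illustrative assumptions, not a real system's design.
from dataclasses import dataclass


@dataclass
class SensorReading:
    """Fused assessment from on-board sensors and swarm-shared data."""
    civilians_detected: bool       # e.g., output of an on-board vision classifier
    target_signature_match: float  # 0.0-1.0 agreement with the authorized target profile


def should_continue_engagement(reading: SensorReading,
                               match_threshold: float = 0.95) -> bool:
    """Suspend the attack maneuver if civilians are present or the observed
    target no longer matches the authorized target information."""
    if reading.civilians_detected:
        return False  # abort: supports the principle of distinction
    if reading.target_signature_match < match_threshold:
        return False  # abort: target information mismatch
    return True


# Swarm-shared imagery flags possible civilians -> engagement is suspended.
assert should_continue_engagement(SensorReading(True, 0.99)) is False
# Observed target diverges from the authorized profile -> engagement is suspended.
assert should_continue_engagement(SensorReading(False, 0.80)) is False
```

The point of the sketch is simply that such a check adds a layer of verification before lethal effect, rather than removing human judgment: commanders still set the authorization criteria that the system enforces.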

Further, their close-combat capabilities reduce the need for high explosives as the means of delivering lethal effects. Compared with conventional munitions, autonomous systems will enable more accurate, surgical attacks with significantly less collateral damage. A ban on lethal autonomous weapons systems would prevent the development of these technological means of reducing incidental civilian casualties.

In our view, it is these humanitarian benefits that should be emphasized in promoting applications of artificial intelligence that enhance the accuracy of weapons systems and reduce the civilian casualties they cause. These technological means of minimizing human casualties will become only more important as technological advances accelerate the speed of warfighting.

 

Hitoshi Nasu is a Professor of Law at the United States Military Academy, West Point, and a Senior Fellow at the Stockton Center for International Law at the United States Naval War College.

Colonel Christopher Korpela is an Associate Professor and Director of the Robotics Research Center at the United States Military Academy, West Point. 

 

The thoughts and opinions expressed are those of the authors alone and do not necessarily represent those of the U.S. Military Academy, the U.S. Naval War College, the U.S. Government, or any of its agencies.

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.