The following is a guest post by Taylor Sullivan, intern for the International Institutions and Global Governance program at the Council on Foreign Relations. This post is the second entry in the blog series Transformative Technology, Transformative Governance, which examines the global implications of emerging technologies, as well as measures to mitigate their risks and maximize their benefits.
Leading nations are gearing up for what many view as the next revolution in warfare. As the United States and other countries prioritize the development of emerging weapons technologies, including sophisticated drones, lethal autonomous weapons systems (LAWS), and swarm technologies, the world’s powers are poised to transform the way they fight wars.
Among these technologies, LAWS are perhaps the most game-changing. Often dubbed “killer robots” by critics, LAWS resist easy categorization, since they incorporate many different technologies and varying elements of human control. While some critics understand LAWS to be “fully autonomous,” meaning that the systems select and fire upon targets without direct oversight, the United States adopts a wider definition that includes the possibility of human intervention.
Regardless, the technologies behind LAWS appear to be advancing quickly. The U.S., Russian, and Chinese militaries are among the many reportedly developing LAWS and incorporating them into their forces. As the world’s powers venture into this uncharted territory, debates on the legality, utility, and morality of LAWS are picking up steam. Some experts anticipate that LAWS will help advance international security; others predict that their development and use will pose destabilizing challenges for states. Debates over when and how to regulate LAWS—and even whether to ban these emerging technologies altogether—are occurring in both formal and informal international fora, as nations wrestle with the quandaries before them. Whether these discussions will produce new international law remains uncertain. Such talks are nevertheless indispensable exercises as a multiplicity of actors seeks to build consensus and set norms around these fast-emerging technologies.
Showdown in Geneva
One of the most important—and speculative—debates is whether LAWS comport with the fundamental requirements of international humanitarian law (IHL). Advocates of LAWS argue that these systems can make warfare more efficient and even more humane, because they can become less mistake-prone than human soldiers over time. Given their potential for greater accuracy and better discrimination between combatants and civilians, LAWS promise in the abstract to limit “collateral damage”—the unintended killing of non-combatants. The United States has even claimed that LAWS could improve compliance with IHL by substituting for fallible human decision-making in tense situations. Critics, in contrast, focus on the serious challenges that LAWS pose to international law and security. Such weapons systems are arguably more vulnerable to cyber threats than non-autonomous systems. They also create potential accountability gaps by removing or reducing human control. And they push the boundaries of IHL by potentially violating the core principles of distinction (being able to distinguish between civilians and military targets) and proportionality (refraining from attacks that would cause excessive civilian harm relative to anticipated military gains).
Motivated by these concerns, a transnational coalition of civil society groups has placed on the multilateral agenda a proposal to ban LAWS entirely. In 2016, in response to sustained civil society advocacy, the states parties to the UN Convention on Certain Conventional Weapons (CCW), a framework agreement to prohibit or restrict inhumane weapons, established a group of governmental experts (GGE) to consider the possibility of a new binding protocol on this category of weapons. After several years of debate in Geneva, many outside observers and a number of states parties are growing impatient for results. Some have ramped up calls for new international law, while others have advocated the more politically expedient tack of releasing a political declaration. Problematically, national delegations remain divided over whether and how to define LAWS, a situation that favors discussion over action and the maintenance of the status quo.
Logic of a Ban
Ahead of the most recent GGE meeting, UN Secretary-General António Guterres urged the group to pursue restrictions on the development of LAWS, proclaiming that these “morally repugnant” weapons should be “prohibited by international law.” His call for a ban echoes the Campaign to Stop Killer Robots, a transnational civil society coalition established in 2012 to coordinate global opposition to LAWS. The campaign calls for a complete ban on the “development, production, and use of fully autonomous weapons” and demands that artificial intelligence (AI) researchers and developers “pledge to never contribute to the development of fully autonomous weapons.” With a membership encompassing more than 100 nongovernmental organizations from 54 countries, the group transcends national boundaries. Indeed, the campaign notes that twenty-eight national governments have expressed support for a ban. Likewise, a July 2018 pledge not to build autonomous weapons has garnered signatures from 3,253 individuals and 247 organizations, including Google DeepMind, a world leader in AI research.
Conspicuously absent from the coalition against LAWS, however, are the United States and Russia, which have voiced strong opposition to regulation of autonomous weapons. They are joined by other major military powers, such as the United Kingdom and Israel. Given the reservations of these crucial actors, the GGE’s failure to reach consensus on a binding legal instrument—or even agreement on the definition of LAWS—is wholly unsurprising. In short, a comprehensive ban on LAWS is unlikely in the foreseeable future.
Despite these obstacles, working toward a ban can still be a productive endeavor. Even if a legal instrument proves elusive, the anti-LAWS campaign can help build norms on the global level. Past transnational efforts directed at multilateral prohibitions, such as that of the International Campaign to Abolish Nuclear Weapons (ICAN), provide opponents of LAWS with a model of how civil society groups can effect change. ICAN, a transnational coalition of nongovernmental organizations, helped persuade the UN General Assembly to adopt the Treaty on the Prohibition of Nuclear Weapons in 2017. A nuclear weapons-free world, of course, is unlikely to materialize any time soon, given lack of accession to the ban from nuclear weapons states, signs that the nuclear taboo is eroding, and the weakening of existing nuclear agreements such as the Intermediate-Range Nuclear Forces Treaty.
Still, ICAN’s efforts have not been fruitless. Beyond winning a Nobel Peace Prize in 2017, the group has drawn attention to the humanitarian costs of using nuclear weapons, put pressure on governments to consider disarmament, and helped change the normative climate surrounding nuclear weapons, which could one day lead to their elimination. Similar results could emerge from efforts to ban LAWS.
The Road Ahead
In its coming meetings, the GGE should prioritize defining LAWS. Though the United States has stated that such efforts by the GGE would be “counterproductive” because time would be diverted from “understanding the issues to negotiating what would be covered,” this logic is questionable. Working toward a definition would lead to clearer thinking within the GGE: only when states agree on what they are actually discussing can the risks and benefits of LAWS be understood more thoroughly. The “general understanding of the characteristics of LAWS” that the United States claims to want from the GGE is too vague to illuminate fully the potential issues LAWS pose; what it would do, notably, is leave the United States freer to develop autonomous weapons down the road.
A promising starting point, then, might be to define LAWS on the basis of the level of human control (or lack thereof) over lethal force. This could mean narrowing the GGE’s focus to conversations on either semiautonomous weapon systems—where, as defined by Paul Scharre in Army of None: Autonomous Weapons and the Future of War, a human makes the decision to engage targets identified by automated systems—or supervised autonomous weapon systems, where the weapon detects and engages a target by itself while a human with the ability to intervene observes. Narrowing the GGE’s scope to LAWS that fall within either of these categories could be a more efficient way to reach broad agreement on the steps forward, because all parties would be on the same page.
If agreeing upon a working definition of LAWS proves too challenging, proponents of restrictions on the weapons could switch to a piecemeal rather than comprehensive approach. That is, they could establish specific instruments for specific uses of pertinent technologies, through the CCW or other processes (such as restricting interactions between autonomous technologies and, say, nuclear weapons). Once the conversation is narrowed, either through establishing a working definition or laying the groundwork for a piecemeal approach, states and other actors seeking to restrict LAWS could begin to make real progress in governing autonomous weapons.