- Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.
The following is a guest post by Kyle Evanoff, research associate, international economics and U.S. foreign policy, and Megan Roberts, associate director of the International Institutions and Global Governance program at the Council on Foreign Relations.
Whoever assumes leadership in artificial intelligence (AI) will rule the world. At least, that was Vladimir Putin’s message to Russian students returning to school last week. Putin mused that drone battles might one day determine the outcome of wars, and that the losing side might surrender upon the destruction of its final autonomous combatant. No single entity, he warned, should be permitted to gain a monopoly on AI.
The Russian president joins a swelling global chorus worried about AI’s geopolitical implications. Last month, SpaceX and Tesla head Elon Musk, along with 115 other leaders in AI and robotics, warned in an open letter to the United Nations that lethal autonomous weapons systems could “permit armed conflict to be fought at a greater scale than ever, and at timescales faster than humans can comprehend.” The letter implored the high contracting parties to the UN Convention on Certain Conventional Weapons (CCW), a set of protocols restricting or banning the use of inhumane weapons, “to find a way to protect us all from these dangers.”
Musk and his cosigners were responding to the cancellation of the first meeting of the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS). Last December, eighty-nine CCW states parties agreed to establish the expert group to begin formal talks on a LAWS protocol. They scheduled two sessions for 2017—one in August and one in November. The first fell through after numerous CCW members failed to meet their financial obligations. Although the November meeting is still scheduled, experts worry that further delay could undermine the prospects for international agreement, as advances in the relevant technologies outpace global regulations.
The world confronts a tipping point in its efforts to ensure that AI breakthroughs enhance rather than threaten humanity’s well-being. To date, international efforts to address AI’s potential implications have been limited and reactive, despite the technologies’ immense transformative potential. If the lackadaisical response continues, the yawning gap between the frontiers of technology and the mechanisms of global governance will only widen.
In the few instances in which multilateral institutions have focused on AI, they have done so at the urging of private or civil society actors. The first formal discussions on autonomous weapons, for instance, followed sustained activism by groups like the International Committee for Robot Arms Control and the Campaign to Stop Killer Robots. The X-Prize Foundation, a Silicon Valley philanthropy, likewise partnered with the UN’s International Telecommunication Union to organize June’s AI for Good Global Summit, which examined how emerging technologies can advance the Sustainable Development Goals. (Far less common have been independent multilateral initiatives like the Centre for Artificial Intelligence and Robotics, launched by the UN Interregional Crime and Justice Research Institute.)
To ensure that AI works for good, governments must cooperate in its development and deployment. Two areas ripe for deeper collaboration are the global environment and global health. Already, researchers are using machine learning to predict deforestation in the Democratic Republic of the Congo. AI has also yielded more efficient ways to diagnose and treat malaria. Greater analytic capacity would likely lead to similar results for other diseases, and might reduce harms from epidemics like Ebola and Zika.
At the same time, the dual-use nature of AI means that technological advances present risks as well as benefits—and governments need to work together to tamp down the attendant dynamics of insecurity and vulnerability. Autonomous weapons are a prime example. In recent years, militaries—led by China and the United States—have spent billions of dollars developing LAWS, in hopes of gaining tactical and strategic advantages or denying them to rivals. This raises the specter of new arms races, particularly since AI can amplify cyberwarfare and disinformation operations.
As computing costs decrease and algorithmic literacy rises, more actors—state and nonstate alike—will gain access to AI and its attendant capabilities. Adoption and innovation will alter power dynamics among nation-states—and between states and individuals, to say nothing of the role of other actors such as large corporations. AI will blur distinctions between cyber and physical space and intensify divisions between the digital haves and have-nots. Beyond increasing the potential for conflict, trends in AI raise tricky questions of fairness, accountability, and transparency. Ensuring that machines serve the interests of humanity as a whole will require a multistakeholder approach that considers (even if it cannot always accommodate) the diverse ethical, cultural, and spiritual values found within any cosmopolitan society.
Given the enormous transnational opportunities and risks that AI presents, countries need multilateral rules of the road. Negotiating these is urgent, as states already possess different expectations about international limits on the uses of AI. China, for instance, recently released a national plan to integrate AI into all aspects of society, drawing on vast reams of citizen data to power national AI advances. Regulators and the private sector in Europe, meanwhile, are poised for a showdown over AI and privacy rights.
To date, the Trump administration has paid little attention to how AI is likely to affect Americans—or the world writ large. Treasury Secretary Mnuchin has cavalierly dismissed concerns that automation will displace U.S. workers; the Office of Science and Technology Policy lies in shambles; and the State Department’s science envoy recently resigned while calling for the president’s impeachment. Given his distrust of multilateral entanglements—and the natural temptation for the United States to lock in its early lead in autonomous weapons—President Trump is unlikely to champion global governance of AI anytime soon.
Given the detachment of the Trump administration, the U.S. Congress has a critical role to play in setting a national AI agenda and advocating multilateral regulation of new technologies. Legislators should adopt a balanced approach, recognizing the potential of these new technologies to contribute to the global good and the unprecedented security challenges they pose. Congress’s goal should be to adopt flexible national regulations and promote a global regime that can adapt to innovations as they occur.
Putin’s remarks should serve as a Sputnik moment. AI must now factor into the geopolitical calculus. Inaction is no longer an option.