The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks
The following is a guest post by Kyle Evanoff, research associate for International Economics and U.S. Foreign Policy.
Today through Friday, artificial intelligence (AI) experts are meeting with international leaders in Geneva, Switzerland, for the inaugural AI for Good Global Summit. Organized by the International Telecommunication Union (ITU), a UN agency that specializes in information and communication technologies, and the XPRIZE Foundation, a Silicon Valley nonprofit that awards competitive prizes for solutions to some of the world’s most difficult problems, the gathering will address AI-related issues and promote international dialogue and cooperation on AI innovation.
The summit comes at a critical time and should help increase policymakers’ awareness of the possibilities and challenges associated with AI. The downside is that it may encourage undue optimism by giving short shrift to the significant risks that AI poses to international security.
Although many policymakers and citizens are unaware of it, narrow forms of AI are already here. Software programs have long been able to defeat the world’s best chess players, and newer ones are succeeding at less well-defined tasks, such as composing music, writing news articles, and diagnosing medical conditions. The rate of progress has surprised even tech leaders, and future developments could bring massive increases in economic growth and human well-being but also widespread socioeconomic upheaval.
This week’s forum provides a much-needed opportunity to discuss how AI should be governed at the global level, a topic that has garnered little attention from multilateral institutions such as the United Nations. The draft program promises to educate policymakers on a range of AI issues, with sessions on topics including “moonshots,” ethics, sustainable living, and poverty reduction. Participants will include prominent individuals drawn from multilateral institutions, nongovernmental organizations (NGOs), the private sector, and academia.
This inclusivity is typical of the complex governance models that increasingly shape global policymaking, with internet governance being a case in point. NGOs, public-private partnerships, industry codes of conduct, and other flexible arrangements have assumed many of the global governance functions once reserved for intergovernmental organizations. The new partnership between the ITU and the XPRIZE Foundation suggests that global governance of AI, although in its infancy, is poised to follow the same model.
For all its strengths, however, this “multistakeholder” approach could afford private sector organizers excessive agenda-setting power. The XPRIZE Foundation, founded by outspoken techno-optimist Peter Diamandis, promotes technological innovation as a means of creating a more abundant future. The summit’s mission and agenda hew to this attitude, placing disproportionate emphasis on how AI technologies can overcome problems and paying too little attention to mitigating the risks those same technologies pose.
This is worrisome, since the risks of AI are numerous and non-trivial. Unrestrained AI innovation could threaten international stability, global security, and possibly even humanity’s survival. And, because many of the pertinent technologies have yet to reach maturity, the risks associated with them have received scant attention on the international stage.
One area in which the risk of AI is obvious is electioneering. Since the epochal June 2016 Brexit referendum, state and nonstate actors with varying motivations have used AI to create and distribute propaganda via the internet. An Oxford study found that during the recent French presidential election, the proportion of traffic originating from highly automated Twitter accounts doubled between the first and second rounds of voting. Some even attribute Donald J. Trump’s victory over Hillary Clinton in the 2016 U.S. presidential election to weaponized artificial intelligence spreading misinformation. Automated propaganda may well call the integrity of future elections into question.
Another major AI risk lies in the development and use of lethal autonomous weapons systems (LAWS). After the release of a 2012 Human Rights Watch report, Losing Humanity: The Case Against Killer Robots, the United Nations began considering whether to restrict LAWS under the Convention on Certain Conventional Weapons (CCW). Meanwhile, both China and the United States have made significant headway with their autonomous weapons programs, in what is quickly escalating into an international arms race. Because autonomous weapons might lower the political cost of conflict, they could make war more commonplace and increase death tolls.
A more distant but possibly greater risk is that of artificial general intelligence (AGI). While current AI programs are designed for specific, narrow purposes, future programs may be able to apply their intelligence to a far broader range of tasks, much as humans do. An AGI, through recursive self-improvement, could give rise to a superintelligence more capable than any human, one that might prove impossible to control and could pose an existential threat to humanity, regardless of the intent of its initial programming. Although the AI doomsday scenario is a common science fiction trope, experts consider it a legitimate concern.
Given rapid recent advances in AI and the magnitude of the potential risks, the time to begin multilateral discussions on international rules is now. AGI may seem far off, but many experts believe it could become a reality by 2050, a timeline comparable to that of climate change. The stakes, though, could be even higher. Waiting until a crisis has occurred could preclude the possibility of action altogether.
Rather than allocating their limited resources to summits promoting AI innovation (a task for which national governments and the private sector are better suited), multilateral institutions should recognize AI’s risks and work to mitigate them. Finalizing the inclusion of LAWS in the CCW would constitute an important milestone in this regard. So too would the formal adoption of AI safety principles such as those established at the Beneficial AI 2017 conference, one of the many artificial intelligence summits occurring outside of traditional global governance channels.
Multilateral institutions should also continue working with nontraditional actors to ensure that AI’s benefits outweigh its costs. Complex governance arrangements can provide much-needed resources and serve as stopgaps when necessary. But intergovernmental organizations, as well as the national governments that direct them, should be careful not to cede too much agenda-setting power to private organizations. The primary danger of the AI for Good Global Summit is not that it distorts perceptions of AI risk; it is that Silicon Valley will wield greater influence over AI governance with each successive summit. Because technologists often prioritize innovation over risk mitigation, this influence could undermine global security.
More important still, policymakers should recognize AI’s unprecedented transformative power and take a more proactive approach to addressing new technologies. The greatest risk of all is inaction.