Don’t Ban Robot Weapon Systems

Authors: Matthew C. Waxman, Adjunct Senior Fellow for Law and Foreign Policy, and Kenneth Anderson
October 17, 2013
New Republic Online


What if armed drones were not just piloted remotely by humans in far-away bunkers, but were programmed under certain circumstances to select and fire at some targets entirely on their own? This may sound like science fiction, and deployment of such systems is, indeed, far off. But research programs, policy decisions, and legal debates are taking place now that could radically affect the future development and use of autonomous weapon systems.

To many human rights NGOs, joined this week by a new international coalition of computer scientists, the solution is to preemptively ban the development and use of autonomous weapon systems (which a recent U.S. Defense Department directive on the topic defines as one "that, once activated, can select and engage targets without further intervention by a human operator"). While a preemptive ban may seem like the safest path, it is unnecessary and dangerous.

No country has yet publicly evinced plans to use fully autonomous weapons specifically designed to target humans. Some countries—including the United States—have long used near-autonomous weapon systems targeting other machines, such as defensive systems aboard some naval vessels, without which, for example, a ship would be helpless against a swarm of small missiles coming at speeds far faster than human reaction times.

