Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can

Authors: Matthew C. Waxman, Adjunct Senior Fellow for Law and Foreign Policy, and Kenneth Anderson
April 13, 2013

Public debate is heating up over the future development of autonomous weapon systems. Some concerned critics portray that future, often invoking science-fiction imagery, as a plain choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield. Yet an outright ban on autonomous weapon systems, even if it could be made effective, trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it. Grounded in a more realistic assessment of technology—acknowledging what is known and what is yet unknown—as well as the interests of the many international and domestic actors involved, this paper outlines a practical alternative: the gradual evolution of codes of conduct based on traditional legal and ethical principles governing weapons and warfare.
A November 2012 U.S. Department of Defense policy directive on the topic defines an "autonomous weapon system" as one "that, once activated, can select and engage targets without further intervention by a human operator." Some such systems already exist in limited defensive contexts, in which human operators activate the system and can override its operation, such as the U.S. Patriot and Phalanx anti-missile systems and Israel's Iron Dome anti-missile system. Others are reportedly close at hand, such as a lethal sentry robot designed in South Korea that might be used against hostile intruders near its border. And many more lie ahead in a future that is less and less distant.
