
Law and Ethics for Robot Soldiers

Authors: Kenneth Anderson and Matthew C. Waxman, Adjunct Senior Fellow for Law and Foreign Policy
December 1, 2012
Policy Review

A lethal sentry robot designed for perimeter protection, able to detect shapes and motions, and combined with computational technologies to analyze and differentiate enemy threats from friendly or innocuous objects — and shoot at the hostiles. A drone aircraft, not only unmanned but programmed to independently rove and hunt prey, perhaps even tracking enemy fighters who have been previously "painted and marked" by military forces on the ground. Robots individually too small and mobile to be easily stopped, but capable of swarming and assembling themselves at the final moment of attack into a much larger weapon. These (and many more) are among the ripening fruits of automation in weapons design. Some are here or close at hand, such as the lethal sentry robot designed in South Korea. Others lie ahead in a future less and less distant.

Lethal autonomous machines will inevitably enter the future battlefield — but they will do so incrementally, one small step at a time. The combination of "inevitable" and "incremental" development raises not only complex strategic and operational questions but also profound legal and ethical ones. Inevitability comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply "smarter" machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one's own personnel and civilian persons and property will demand continuing research, development, and deployment. The process will be incremental because nonlethal robotic systems (already proliferating on the battlefield, after all) can be fitted in their successive generations with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe — but as both the decision-making power of machines and the tempo of operations potentially increase, that human role will likely slowly diminish.

View full text of article.
