Killer Robots and Autonomous Weapons With Paul Scharre

An Airspace Systems Interceptor autonomous aerial drone flies during a product demonstration in California. Stephen Lam/Reuters
from The President's Inbox


Paul Scharre, senior fellow and director of the technology and national security program at the Center for a New American Security (CNAS), discusses autonomous weapons and the changing nature of warfare with CFR's James M. Lindsay. 

LINDSAY: Hi. This is Jim Lindsay. This episode of The President’s Inbox is sponsored by Foreign Affairs magazine. It’s the best in the business, and an essential resource for anyone who wants a deeper understanding of world affairs and American foreign policy. To get a taste of what Foreign Affairs offers, you can have its newsletter sent straight to your inbox. Just go to ForeignAffairs.com/PresidentsInbox. That’s ForeignAffairs.com/PresidentsInbox.

Now, onto the show.

(Music plays.)

Welcome to The President’s Inbox, a CFR podcast about the foreign policy challenges facing the United States. I’m Jim Lindsay, director of studies at the Council on Foreign Relations.

This week’s topic is “Killer Robots.” With me this week to discuss what are euphemistically called autonomous weapons or emerging weapons technologies is Paul Scharre. Paul is senior fellow and director of the Technology and National Security Program at the Center for a New American Security, otherwise known here in Washington as CNAS. Paul previously worked in the Office of the Secretary of Defense, where he played a leading role in establishing policies on unmanned and autonomous systems and on emerging weapons technologies. Before joining OSD, Paul served as a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion and completed multiple tours to Iraq and Afghanistan. He is the author of a terrific book, Army of None: Autonomous Weapons and the Future of War.

Paul, thanks for joining me here today.

SCHARRE: Thanks for having me.

LINDSAY: Let’s begin with the question: What exactly are autonomous weapons?

SCHARRE: Conceptually, the idea is very simple: it’s a weapon that makes its own decisions about who to kill. And so, you know, we can envision a world going forward where we see weapons that are making their own decisions on the battlefield. That’s not the case today. Most drones and military robotic systems still have humans in charge. But as we see the technology advancing, there is no question that it will make this possible, and militaries will have to face the question of whether to cross this line and delegate lethal decisions to machines.

LINDSAY: When we talk about this, you say we have unmanned drones, for example, that operate, but there is—there’s always a human in the loop. That is, a human being, maybe it’s somewhere at an airbase in Nevada, who’s looking at incoming video feed and deciding what to do. So you’re talking about coming to a period in which there is no human who’s saying now I’ll press the button.

SCHARRE: Right. So there are at least 90 countries that have drones today, and 16 countries and counting that have armed drones, including many non-state groups. Many of these are, from a—from a robotics standpoint, not very sophisticated. A lot of them are remotely controlled or teleoperated. We’re seeing more autonomy incrementally creeping into these vehicles. Just like we’re seeing in cars features like self-parking, automated braking, intelligent cruise control, we’re seeing more autonomous features come into military vehicles that will allow them to do things like navigate on their own. But for now—

LINDSAY: So a self-driving tank?

SCHARRE: Well, there’s a robotic vehicle that Israel has deployed to the Gaza border called the Guardium which reportedly does drive on its own, but Israel has said that humans will be in charge of the weapons onboard because it is armed.

LINDSAY: OK. So we have these lethal autonomous weapons. I think it’s sometimes abbreviated LAW. You say we don’t have them right now, at least fully autonomous. How close are we to the day in which we will have them?

SCHARRE: Certainly, technologically we’re not that far away at all. We don’t see autonomous weapons in widespread use by militaries, but there are a couple exceptions that I should note. One is automated defensive systems that operate under human supervision. There are at least 30 countries that have weapons of this type, things like the U.S. Navy’s Aegis Combat System or the Patriot air-and-missile defense system.

LINDSAY: Could you—I’ll just stop you right there. Could you explain what the Aegis missile defense system is for the Navy?

SCHARRE: So what this is, basically, is an automated weapon system. It’s sort of an intelligent brain of the ship on U.S. destroyers that nets together the radars and the weapon systems, sort of a command-and-control computer, if you will. In fact, the Navy calls the computer component of it Command and Decision. And it’s highly programmable. It’s certainly not thinking on its own or making its own decisions. You wouldn’t really think of it that way. Basically, the Navy programs in rules so that, under certain conditions, operators can activate different modes that allow automated defenses to kick in. So if there might be a flood of missiles coming in right off the surface of the water at very high speeds, the Navy can activate a mode that will allow the ship to defend itself automatically.

LINDSAY: Because it’s happening too fast, you can’t actually have a human being hitting the fire mechanism.

SCHARRE: That’s right. So you could imagine situations where the speed of engagement and the saturation level is so much that humans cannot cognitively comprehend what’s happening and respond quickly enough. And if you’re talking about something like the safety of the ship, you don’t want any missiles to get through at all.

LINDSAY: OK, so this is different than, let’s say, if you watch film of naval battles in World War II, where you would see a sailor manning the antiaircraft guns or firing off the guns against other targets.

SCHARRE: Yeah. Certainly, things like the tracking of enemy targets and aiming are all automated today.

Now, in most cases, humans are still in the loop for making these firing decisions. But like I said, there are at least 30 countries that have these sorts of automatic modes, kind of wartime modes, that they can turn on on these systems, whether land-based air-and-missile defense systems, ground vehicles, or ships, that allow this automatic protection bubble to kick in.

LINDSAY: OK. So we have certain defensive systems where there is a—again, a mode. You could hit the button, it sort of goes automatic. But that’s not how these things would operate 24/7.

SCHARRE: No, no, no. Typically, I mean, in peacetime, they would have humans in charge. It’s sort of a wartime mode that you might need in a really intense combat environment.

LINDSAY: So when we look at lethal autonomous weapons, which countries are in the lead in producing them? I assume the United States has invested heavily in this type of technology. Who else is pretty good at it?

SCHARRE: You know, the real leading military robotics developers in the world are—in no particular order—the U.S., China, Russia, Israel, the U.K., France, and South Korea; all doing really pretty interesting and sophisticated things, and all approaching this question of delegating lethal force a little bit differently. Pentagon leaders have been very clear that their intention is to keep humans in charge. But they’ve acknowledged that if other countries cross that line, they might have to shift as a result.

We’ve seen, for example, very different statements from Russia. Russian generals have talked about building fully roboticized units in the future that are capable of conducting independent operations. And, in fact, Russia just recently—a few weeks ago—deployed an armed ground robot to Syria, the Uran-9, which is a very sizeable ground robotic system armed with a heavy machine gun and rockets.

LINDSAY: So do you see it as inevitable that we are going to end up with robot killers replacing soldiers or supplementing soldiers? I say this because, in reading your book, it seems like the trend is increasing use of technology to assist/supplement what it is that human beings do. We’ve moved into the situation now where systems can actually do it and it’s just a human in the loop saying, OK, now you can do it. Are we going to move toward I guess I’ll call it the Skynet future for “Terminator” fans?

SCHARRE: I mean, that is really the question. It’s the question that the book grapples with. And there’s no question that the technology is bringing us up to this point, where militaries will face that decision whether to delegate those lethal decisions to machines.

Now, right now no country has come right out and said we’re going to build autonomous weapons. Many countries have said they don’t want a ban on autonomous weapons, but many of them have been quite murky about what their intentions are going forward.

I don’t—I don’t know. I don’t think it’s clear. You certainly see a lot of alarm from advocates who are calling for a preemptive ban on autonomous weapons, who are concerned that we might get there. One of the arguments against a ban, actually, is: look, this is inevitable, it’s going to happen; let’s not fight against the inevitable, let’s find ways to maybe regulate the technology. I don’t know.

The history of attempts to control weapons is really mixed. And it’s something I try to grapple with in the book, looking back at examples dating back to ancient India. You see many examples of failures, but also some successes. And so one of the things I struggle with at the end of the book is whether you could envision some successful restraint on autonomous weapons, and I lay out a couple of different options.

I don’t know what the future holds. I will certainly say that it would be hard—it would be very difficult to restrain this technology because it is so widely available.

LINDSAY: I want to talk about efforts to potentially constrain the development of autonomous weapons in a moment. But I just want to come back before that and really explore the question of whether developing these technologies would be good or bad. The presumption, obviously, if you want to control them, is that they’re bad. And maybe it’s worthwhile to ask: is it, in fact, bad? Obviously, if you’re a “Terminator” fan, it’s a very bad idea. You don’t want Skynet. But, I mean, what are the arguments people make for autonomous weapons, robotic killers? Or is it simply that it’s going to happen and so we just have to live with it?

SCHARRE: All of the above. I mean, you do certainly see some arguments in favor of autonomous weapons. I think the arguments against them are easier to intuitively understand. We have all seen that movie, whether it’s “Terminator” or “Westworld” or something else. We’ve seen these science fiction stories where people—you know, people’s dangerous creations slip out of control, and that is one of the concerns about autonomous weapons.

There are certainly many arguments in favor of adding more intelligent features into weapons. Could we build intelligent sensors that allow the weapon to detect whether civilians are present and maybe avoid them? I think the answer is clearly yes. The same technology that would be used in self-driving cars to, say, avoid hitting pedestrians could be used by militaries to avoid civilians in war.

One of the arguments in favor of taking humans fully out of the loop is that humans are not perfect. Humans make mistakes. Humans commit war crimes. And maybe we could build machines that could do better. I think cars are really a good example here: certainly it will be possible—we’re not there yet, but probably not too far off—to build cars that drive better than humans, in part because humans are actually terrible drivers. And—

LINDSAY: That’s clear every day here in the Washington, D.C. area.

SCHARRE: Yeah, particularly the D.C. area, it would seem. And getting worse because of things like cellphones, really. So maybe we could do the same in war, and we could—we could do better than humans.

But there are a whole host of other issues that the technology raises, like what would it mean to take away human moral responsibility for killing in war, and what are the second- and third-order effects of that, that I think are much—are much harder to grapple with and are not about the specific tactical decisions, but are about some broader, longer-term effects about how we think about the role of humans in war.

LINDSAY: Have people begun to think about this issue of autonomous weapons being hacked? I mean, it would seem that, obviously, it runs on software. I haven’t seen any software that hasn’t been hacked yet. There’s always somebody who can figure out some sort of—

SCHARRE: (Laughs.) Right.

LINDSAY: —back door to go in and make it operate. I mean, is it—as we’re sort of planning these things, how do you avoid the potential that your lethal autonomous system is compromised?

SCHARRE: It’s a huge concern. And, you know, it may be that at the end of the day it is this concern about adversaries hacking into your systems or manipulating them or sending spoofing attacks with false data to trick them that actually drives militaries to keep humans involved, because humans are more resilient against these kind of attacks. Humans can be tricked, but it’s harder—you know, if you send an order over the radio to say attack all friendly forces, humans are going to go, well, I guess they’ve hacked our radio net, right? They’re not going to follow the order. Machines aren’t going to know any better.

LINDSAY: Machines are very literal in that sense.

SCHARRE: Right. They’re going to do what they’re programmed to do, whether it turns out to be catastrophically stupid or not. And so that is a major problem.

Now, look, any digital system, as you pointed out, is vulnerable to hacking. Just putting a human, you know, on the vehicle doesn’t mean you can’t hack it. But what automation does is it concentrates power in the machine. So, for example, we have seen people hack automobiles today. We’ve seen people do that remotely and disable brakes and steering. You know, having manual control of the vehicle doesn’t change that vulnerability. What happens is if you have an autonomous car, it changes the scale at which you could do this—you could hack many at the same time—and then what you could do with them. You could, you know, redirect an entire fleet of self-driving cars to drive to some new area. And so the consequences if an adversary were to get in could be much, much more severe, and that’s probably a good reason for militaries to think twice about this. If you’re not confident about your cybersecurity measures—and, frankly, no one should be; if you have people like the NSA being hacked, we all should be—should be worried here—then you might want to think twice about building autonomous weapons.

LINDSAY: So you have the potential that you could not only fail, but fail hard. I mean, it would be a widespread failure.

SCHARRE: That’s right. And sometimes what you’ll hear people say in this space is, well, look, humans aren’t perfect; if we can build machines that do better than humans, then we should automate this. The problem is that humans and machines fail differently. They make different kinds of mistakes. So, sure, you have humans do horrible things and commit war crimes. You have people make mistakes and commit fratricide. But these tend to be idiosyncratic events, and the same person in the same situation may not commit fratricide again. Autonomous weapons, by contrast, open up the potential for mass failures at a catastrophic level. They might continue failing and not know that they’re making a catastrophic mistake. And so the scale of accidents could be much, much larger when you have more automation.

LINDSAY: Just one related question on this. Given where technology is today, how hard is it for non-state actors—terrorist groups—to develop these kind of weapons? You earlier on referenced non-state actors when you were talking about state activity in this area. I know at least some terrorist groups have tried to develop drone technology, which again is not quite at the level of talking about lethal autonomous weapons. But sort of how do we think about the potential for non-states to use this technology?

SCHARRE: Yeah, I mean, the reality is that non-state groups are actually at the forefront of innovation in drone technology. They’re building off-the-shelf drones in Iraq and Syria, using them for attacks against Iraqi forces, against Syrian and Russian forces, and against U.S. allies. We’ve seen this from all kinds of sides. There was an attack earlier this year by Syrian rebel groups using 13 drones against a Russian airbase. There’s no evidence that they were using what we think of as swarming behavior or cooperative behavior, where the drones are talking to one another to communicate their actions, but that’s certainly coming.

One of the things I wrestled with in the book is trying to understand how hard it would be for someone to build one of these in their garage, and the answer is that it’s terrifyingly possible. You know, you could buy a drone online for a couple hundred bucks that can be used as a platform. You can put a weapon on a drone. Certainly, guns are widely available in the United States. People have already done this. There was a teenager up in Connecticut who put a flamethrower on a drone and—

LINDSAY: A flamethrower on a drone?

SCHARRE: A flamethrower. He basted a turkey for Thanksgiving and put a video on YouTube. Fascinatingly, in the United States it is not illegal to put a weapon on a drone. It’s not illegal to put a gun on a drone.

LINDSAY: So you’re not breaking any laws doing that?

SCHARRE: You’re not, at least not any federal laws. People have done this, put videos up online, and the FAA and the FBI have investigated. As long as it stays below, you know, 300 feet and is on private property, totally legal in the U.S.

Now, the real trick in building an autonomous weapon is the brains of the weapon, putting in the pieces that allow it to make its own decisions. It took me about three minutes online to find a trained neural network that includes humans as an object class, which you can download for free.

LINDSAY: Explain that, a trained neural network.

SCHARRE: So in the past couple years we’ve seen really amazing gains in machine learning, using large datasets to train machines to do various tasks, including, for example, image recognition. We can now build machines using neural networks that can identify objects better than humans; they can beat humans at benchmark tests. Now, you need lots of data, maybe tens of thousands of images, to train one of these neural networks. But once it’s been trained, you don’t need that data anymore, and you don’t need as much computer processing power.

And so online you can download, for free, trained neural networks that have been built and programmed by someone else to do a variety of different functions. In this case, you can find ones that identify objects, including people, and you could use that for a whole variety of purposes. Maybe you put it on a quadcopter and use it for hobby photography, which is great. It’s not so great if you’re a terrorist and you want to cause harm with it, because this technology is widely available.

LINDSAY: So you could theoretically code it or use it to identify, track, and potentially kill specific individuals?

SCHARRE: Yeah. Unfortunately, the technology is available to do that today, and it doesn’t take a lot of programming knowledge or sophistication. It’s probably within the realm of a reasonably capable, you know, computer science undergraduate.

I went—in researching the book I went and talked to teachers down at Thomas Jefferson High School, the science and math magnet—

LINDSAY: One of the best in the country.

SCHARRE: Yeah, here in the D.C. area. And, you know, these are not typical high school students. They’re really, really bright students. But they’re programming with neural networks. Like, using neural networks to do vision recognition is a class they have at T.J. They’re still high school students. So, you know, this is—this is not necessarily something that requires advanced degrees to do.

LINDSAY: This is troubling that you can have this technology.

SCHARRE: It is troubling.

LINDSAY: So we talked a little bit earlier about state-level regulation of this, but it seems you could potentially also do this internally, through national law. I mean, the United States could pass a law saying that if you built a drone with a gun you go to jail, that sort of thing. But how are we going to deal with this march of technology, particularly since so much of this is dual-use? I mean, the neural networks may be designed for some really nice things that we would applaud people for doing, and we watch their videos on YouTube and we say really cool, but that same technology can be used to create great harm.

SCHARRE: Yeah. I mean, I think that when you think about different attempts at control or regulation, something like a nonproliferation regime that tries to restrict access to the underlying technology is just not going to work in this case, because the technology is so diffuse and these tools are available for free online.

In fairness, that is not actually what those who are calling for a ban on autonomous weapons are calling for. And one of the arguments I’ll sometimes hear against, you know, a ban is people say, well, you can’t stop this technology. That’s actually not what ban proponents want. They’re not asking for something like the Nuclear Non-Proliferation Treaty that takes the access to the underlying technology out of people’s hands. They’re looking at models more like bans on landmines and cluster munitions, where countries are technically capable of building landmines and cluster munitions; it’s that they opt not to do so, and in a comprehensive fashion that countries, you know, pledge to not research, not develop, not procure, not stockpile these weapons.

Now, that’s possible in this case to envision. That’s technologically doable. Would countries sign up to it politically? That’s a whole other matter. Certainly, we don’t see that kind of momentum towards a ban right now. There are a handful of countries that are signed up to a ban, but none of them are leading military powers or robotics developers.

LINDSAY: Well, it would seem that the sort of security dilemma operates here, which is that countries worry that other countries are developing this technology and so they want to have it because they don’t want to end up in a situation where their enemies have better technology than they do.

SCHARRE: Yeah, it’s—and this technology is no different than many others that people have tried to ban throughout history, dating back to poison and barbed arrows, or the crossbow, submarines, aircraft, poison gas.

LINDSAY: On the crossbow, in your book—

SCHARRE: Yes.

LINDSAY: —you talk about I forget which pope it was—I probably should know this—

SCHARRE: Two popes.

LINDSAY: Two popes, OK, basically saying it was impermissible to use crossbows, and armies still used crossbows because they worked and were effective.

SCHARRE: Yeah. There were two papal decrees in the Middle Ages banning crossbows, and the crossbow was widely seen throughout the Middle Ages as diabolical. We see in paintings things like images of devils holding crossbows, and it was seen as an inhumane weapon, an unchivalrous weapon. As far as we can tell from historical accounts, these decrees had zero effect on the spread of the crossbow across medieval Europe. And so it is often an example that is held up by opponents of a ban—you know, we tried to ban the crossbow and it didn’t work.

The answer is just that it’s not a clear-cut case when you look at all of these historical examples together. You see a really mixed bag of some successes and some failures. I think the more important question is: Why do some bans succeed and some fail? Why do we see, you know, pretty good success in attempts to ban chemical and biological weapons? It’s not universal; obviously, we see Bashar al-Assad using them in Syria today. But, you know—

LINDSAY: But he’s the exception to the rule.

SCHARRE: It is an exception, right? And so we’ve seen a pretty good case there. And yet, you see the crossbow fail so miserably.

LINDSAY: So how do we proceed, given sort of where you see technology going, the limitations you see on certain types of bans, and the sort of different ethical and moral considerations? Are we going to eventually get there no matter what?

SCHARRE: You know, one of the things that I am really encouraged by is that there have been discussions going on internationally for five years now at the United Nations. These discussions include members of civil society and NGOs that come and participate. They include experts from the academic community. There’s been a large expression of concern from AI researchers and scientists, signing a variety of open letters calling either for bans on autonomous weapons or for the U.N. to get involved. I think we could see more involvement from the scientific community; it would actually be helpful in educating countries. But I think, you know, to some extent it is really positive that there is this conversation going on preemptively, before we build these weapons, and not after the fact. It’s just not clear where this will go.

One of the things that I like in these conversations is that there’s been more of a focus on the role of the human. I think that is really an important dimension to this. We need to understand the technology—what it can do, what the benefits and risks are. But there’s also a deeper, more fundamental question, which is: If we had all the technology in the world, what role do we want humans to play in warfare, and why? If we can answer that question, that will help guide us in thinking about how we use this technology going forward.

LINDSAY: How would you answer your own question?

SCHARRE: I think that there is tremendous advantage in humans to understand the broader context for what’s going on in war and for humans to be morally responsible for what’s happening. General Paul Selva, the vice chairman of the Joint Chiefs of Staff, has actually expressed a desire to keep humans responsible morally and accountable for lethal decision-making in war, and I quote him in the book on this. He’s spoken about this publicly a number of times, including before the Senate. And I think that as a guiding principle we want to keep humans morally responsible for what’s going on on the battlefield and keep humans involved as a—as a failsafe; that if we use autonomy, that autonomy should be bounded in such a way that if it does fail we can accept the consequences, that they are not catastrophic.

LINDSAY: How do you keep human beings morally involved when at least some of the direction of technology separates the warrior from the consequences of his or her actions? I mean, you go back to the crossbow—at one point war was very personal, and still is personal in some settings—to today, where someone at Nellis Air Force Base is pressing a button and somebody is dying 8,000 miles away.

SCHARRE: Yeah, and in some ways the crossbow was a major paradigm shift in warfare, moving from an era where people fought hand-to-hand, up close and personal, to now killing at a distance, which is certainly one of the reasons why it was reviled at the time.

I am less concerned about the physical distance. I see the evolution of robots and drones as merely one more step in a long arc of technological innovations increasing physical distance. I’m much more concerned about the psychological distance that we—

LINDSAY: But doesn’t geographical distance at some point create and make it easier to have that moral distance? Because you’re not really witnessing it. It seems like it is something outside of you.

SCHARRE: Well, I think that in a pre-digital era maybe, right? So certainly, if you’re looking down the scope of a rifle, you know, the further you are away the more that the human just becomes a silhouette, right? And if you’re looking down a bomb sight in World War II, you’re not seeing humans at all.

I think that digital technology and drones actually compress that psychological distance, and so you see things like post-traumatic stress from drone operators because they actually can see up close and personal the effects of what they’re doing. They can see that there are people on the other end. And, in fact, they in some ways may have much more intimate knowledge of what’s happening on the battlefield than someone in a fighter jet overhead. They’re watching the after effects. They’re watching the family members come and pick up the bodies, and that certainly takes a psychological toll on people.

I think when we start thinking about adding in automation, we should think very hard about how we’re framing these choices to people, what the human-machine interfaces look like. I give some examples in the book of interfaces that are probably not good models. There’s one of a test algorithm that had one of these little paper-clip assistants, like we saw in Microsoft Office years ago—

LINDSAY: Right, yep.

SCHARRE: —being used to help cue up decisions for targeting to people. Now, this wasn’t actually used; it was sort of in a mockup.

LINDSAY: Right.

SCHARRE: But that’s, like, totally the opposite of what you’d want to do. We don’t want people to morally offload the burden of killing to the machine. Part of that’s the technology, but a lot of it is how we use it, and then the doctrine and training that goes into educating the human operators.

LINDSAY: You were an Army Ranger, so you’ve jumped out of planes and things like that. You were deployed in Iraq and Afghanistan. How did you as a soldier feel about your technology? Because on one level it’s critical to your mission, saves your lives, saves the lives of your colleagues; but also, there is this issue of sort of distancing yourself from what it is that you’re doing. Your ability to create harm increases.

SCHARRE: Yeah. I mean, certainly, in the types of wars that we were fighting at the time and the role that I had as an infantryman in a Ranger regiment, you know, there was—there was a very intimate relationship between you and the enemy. And I talk about in the book one instance where I was on a mountaintop in Afghanistan, and looking down my rifle scope, and watching another person, and I was unable to determine at the time whether he was a combatant or not. I was trying to determine whether he had been stalking us—he was coming up to the mountain—whether he had seen our reconnaissance team and was trying to sort of sneak up on us, or he was just a goatherder in the wrong place at the wrong time. And I talk in the book about sort of the moment of realizing that I hear him singing to his goats and he’s just a civilian. And so I think, you know, at least for people in that kind of role, you have this very clear relationship between the gravity—the understanding of the gravity of what you’re doing, the decisions you’re making.

The irony, in some respects, of the role I play now looking at emerging technologies is that I actually bring to this job a lot of the grunt’s skepticism of new technologies. I mean, the last people who really want newfangled technologies are the grunts down in the muck and the mud who’ve got to have stuff that’s robust and that works. And they don’t want something that brings a lot of battle rattle, that’s got a lot of new stuff that’s going to break and wires that aren’t going to work.

I don’t mention this in the book, but I was in one instance where we were in a near-ambush. We were in a firefight, and the radio didn’t work. And the reason why it didn’t work was there was a thin wire that was used to snake from the radio on our radio operator’s backpack to a control unit on the arm, and just the wire had gotten kinked and it didn’t work. I had used it in the past. I kind of knew it was a little finicky. I had actually told the radio operator I wasn’t a fan of it and he used it anyways. And sort of—like, I sort of appreciate that things really need to work well in combat. But at the same time, we don’t want to be Luddites, and we want to make sure that the troops on the frontlines do have access to some of the latest and greatest technology, and we’re giving them these opportunities to better protect our own servicemembers.

LINDSAY: Just sort of thinking about that, sort of the stress of combat on human beings, obviously, you mentioned post-traumatic stress disorder among drone operators, but it’s also clearly there for soldiers on the front line. Is there an argument for these lethal autonomous weapons in the sense that you could potentially have fewer people on the front lines experiencing the stress that you felt in firefights and other combat?

SCHARRE: Yeah, I mean, there’s certainly an argument for physical distance, and I think that’s really quite, quite clear, right? I think the argument for moral separation, for psychological distance, is interesting because it is there. You don’t hear it as prominently in conversations about autonomous weapons.

But when you start talking about these moral issues of reducing human moral accountability, I think it’s very important to acknowledge—I’m glad you brought it up—that what you’re basically doing is making an argument for post-traumatic stress and moral injury because that is the actual consequence of humans bearing the moral burden of these wars. And I don’t want to be flippant about that at all. It’s a—it’s a terrible tragedy. These wars linger on in many servicemembers’ minds for decades. I have personally lost friends to PTSD through suicide. We have a just horrible problem of suicide among veterans in the United States, and it’s—certainly, we need to do more as a society to help servicemembers who are struggling with these issues. So I don’t want to suggest that it’s clear-cut or black and white.

There is, I guess, an argument there to say, well, maybe we’d be better off if no one felt these moral consequences. I think it’s just worth us asking the question of what would that do to war. As a consequence, would we see less restraint in war? Would we see more killing? And what would that mean for us as a society if no one slept uneasy at night? What would that say about our virtues and how we think about our own humanity?

LINDSAY: On that note I’ll close up The President’s Inbox for this week. Paul, thank you for a terrific conversation.

SCHARRE: Thank you. Thank you. I mean, I think there are no easy answers here. I really thank you for the conversation and the discussion.

LINDSAY: And I also want to thank you for your service.

SCHARRE: Thanks so much.

LINDSAY: Paul’s book is Army of None: Autonomous Weapons and the Future of War, out from W.W. Norton. I highly recommend it.

Please subscribe to The President’s Inbox on iTunes and leave us a review. It really helps. Opinions expressed on The President’s Inbox are solely those of the hosts or our guests, not of CFR, which takes no institutional positions. Today’s episode was produced by Senior Producer Jeremy Sherlick. Dan Mudd was our recording engineer. Special thanks go out to Audrey Bowler, Corey Cooper, and Gabrielle Sierra for their assistance.

This is Jim Lindsay. Thanks for listening.

(Music plays.)

(END)