National Security and Silicon Valley

Tuesday, January 15, 2019
Speakers
Christopher Kirchhoff

Founding Partner, Defense Innovation Unit X

Eleonore Pauwels

Director, The AI Lab, Woodrow Wilson International Center for Scholars; Research Fellow on AI and Emerging Cybertechnologies, United Nations University 

Mary Wareham

Arms Advocacy Director, Human Rights Watch

Presider

David Sanger, The New York Times

Panelists discuss the widening schism in Silicon Valley between employees and their host companies over the national security applications of the frontier technologies they create, and the ethical concerns surrounding the debate. 

SANGER: Well, good afternoon. I’m David Sanger from The New York Times. And Happy New Year. And welcome back to the Council. Good to see such a big crowd on a snowy day. And also good that we’ve got such a crowd for such a really fascinating topic—one that doesn’t get discussed in foreign policy circles enough. So I’m delighted to see the Council increasingly focusing on this.

Our topic today is the strange relationship between national security and Silicon Valley, which has been in the news increasingly, and is a subject of significant tension that is likely to become significantly more tense in the next year. And we have the perfect panel for discussing this.

Let me start at the far end. My old friend Chris Kirchhoff, who was one of the founding partners of the Defense Innovation Unit X. The X was for experimental. And in the Trump administration, they have dropped the X. (Laughter.) So whatever you experimented with, they’re doing, Chris.

KIRCHHOFF: It’s over. It’s over.

SANGER: That’s it. Chris has had a number of fascinating jobs at the National Security Council, at the Pentagon. But he seemed in his element when I would go visit him on—I was working on a book called The Perfect Weapon about cyber. And they would—his offices were in—is that an old Air National Guard unit?

KIRCHHOFF: Yeah, Moffett Field.

SANGER: That is just outside Moffett Air Force Base. And the real reason that they were located there was that the Pentagon couldn’t afford the rents in Silicon Valley. (Laughter.) So it wasn’t the most luxurious start-up operation you ever saw, but it was really interesting. And Google’s—one of Google’s buildings sort of loomed over it, from the side.

Eleonore Pauwels is the director of the AI lab at the Woodrow Wilson Center. But she’s also working at United Nations University these days as a research fellow on AI and emerging cyber technologies. And has done some of the most interesting early writing about what the U.N.’s role could be, and should be, in dealing with regulating emerging technologies.

And my friend Mary Wareham is the arms advocacy director for Human Rights Watch. But she also runs something called—make sure I have the title of this right—the Campaign to Stop Killer Robots. Is that right?

WAREHAM: Mmm hmm.

SANGER: Which has got to be the most grabbing NGO title I can think of. (Laughter.) So we will—the topic will turn to killer robots at some point soon. The Council is not yet buying its own, but I’m sure they’re going to budget for it in the future.

So, Chris, let me start with you. So this has been a big period of tumult in recent times, for reasons that we will get into—from the decisions made at Google, and so forth, and so on. But DIUx began really after it became clear, following the Snowden disclosures, that Silicon Valley and the national security world were going in completely divergent directions—particularly after it was revealed that the NSA had gotten into Google’s—or, between Google servers, I should say—abroad and so forth. But let me start you with the sort of most basic question. You’ve often made the argument that there has been no time when the private sector’s inventions have been more critical to national security than today. So take us in that direction for a bit. Tell us why and what you were trying to do at DIUx.

KIRCHHOFF: For sure. Well, first of all, it’s a privilege to be on a panel with David. As a young staffer at the Pentagon, the worst thing that could happen was David Sanger calling you. (Laughter.) And you know, the most sensitive national security programs that we run are called special access programs. And there’s a sort of by-name list as to who gets to know. And the joke at the office was that David was an honorary member of all the lists. (Laughter.) If he was calling you, you know, it can only go south. And it’s also, actually, a privilege to be on a panel here with Mary. The Pentagon, for its employees, runs a program where you can donate some of your salary to charity. And I was delighted each year to donate to Human Rights Watch, which is a wonderful organization. And it was sort of a deliciously subversive thing to do as a Defense Department employee.

The story of technology is—over the last two generations—is really a dramatic one, in terms of the shift from the traditional institutions of technology development in the government—the national labs, DARPA—to the West Coast, to the commercial technology sector. So it used to be the case that you could presume that the Department of Defense had a monopoly on advanced technology. And for sure you can no longer presume that. And in fact, the pendulum has swung so far that Google and Apple, if you look at their market capitalization even now, are more than twice the size of the entire U.S. defense sector combined. So what that means is that the locus of innovation has shifted from east to west, if you will. And if the Pentagon isn’t focused on what technologies are being developed in Silicon Valley, then it is simply not any longer on the cutting edge.

And just another stat to make this real, you know, among our most advanced systems in the military are some of our advanced electronic warfare systems, particularly those that are on the Aegis cruisers. And if you look at the technology that actually is a part of those systems, 96 percent of that technology is available today on the global commercial technology market. Only 4 percent is sort of produced for defense applications. So what that means is not only is the locus of technology shifting from military-specific into the commercial world, but it also means that our adversaries have access to technology that really levels the playing field in a way that is historically unprecedented for the United States and for our allies.

So for that reason, Secretary of Defense Ash Carter thought it was time that the Pentagon shifted its view and sought to engage with the technology world, and the startups that are a part of it, that by and large have no interest whatsoever—for very good reason—for engaging with us. And that’s the story of how I got sent out to the West Coast to help start Defense Innovation Unit Experimental.

SANGER: And just a word or two about your view of the resistance that you encountered in Silicon Valley. Some of it was simply financial. A lot of startups are told: Whatever you do, don’t deal with the government. You know, they make you work forever. They pay small amounts of money, and they pay it late. (Laughter.) Or, if you’re a furloughed employee, they just don’t pay it at all, right? But there was also an increasing resistance that you were running into.

KIRCHHOFF: Yeah. So, I mean, you know, first of all, you know, imagine explaining to a startup that you just signed a contract with what a government shutdown is, or what a continuing resolution is. That means we can’t pay you what we just agreed we’d pay you. So you can imagine—you know, that’s actually, in a way, the most difficult divide, I found certainly, is just making the government machinery speed along at the pace that Silicon Valley moves at. You know, Defense Innovation Unit Experimental, it’s not even three years old at this point. And they’ve already worked with about 100 companies who had never before worked with the Department of Defense, and have contracts now that are approaching, you know, almost a billion dollars of value.

And so what we found, certainly, was a large number of startups, actually, that were very interested in working with the military on a variety of missions—everything from humanitarian disaster relief to one project actually on cancer detection. Not all of our projects were necessarily about kinetic warfighting. So certainly there is a debate going on in our nation around technology, and technology firms in Silicon Valley. But I would say that DIUx was largely unaffected, actually, by those debates, except in a couple of specific instances.

SANGER: OK. Mary, you spent some time looking at what happened at Google in the past year as the debate took off about their involvement in something called Project Maven, and then the continuing debate there about whether they should bid on providing the Pentagon with cloud services and so forth. Tell us a little bit about what you learned as you worked with the 4,000-or-so Google employees who seem to object here.

WAREHAM: Thanks, David. And thanks to the Council on Foreign Relations for convening this discussion, and to my panelists. I think a perfect panel actually on this topic would be one with representatives from the companies—both from the managements as well as the tech workers themselves. And I don’t want to speak on behalf of either of those groups, but I think the story of Project Maven and what happened with the Google employees last year is a—is an interesting one from our perspective at Human Rights Watch. Mainly with respect to the debate over autonomous weapons and the Campaign to Stop Killer Robots, which I’m coordinating.

I think a lot of people probably here knew that the Pentagon had embarked on this project with Google, but it was out of the public eye until early last year when a reporter at Gizmodo called Kate Conger, who’s now over at The New York Times, started to publish a series of pieces about this project that Google was undertaking to, from what we understand, identify objects from video footage that had been shot by the surveillance drones owned by the U.S. Department of Defense. And the idea, from what I understand, was to use the machine learning and AI to sift through all of that data.

I wrote a letter to the heads of Google after reading that article just to seek more information on Project Maven and to find out how they were going to avoid a slippery slope whereby identifying objects could then turn into identifying targets for lethal attacks. At the same time, we also invited Google to make a public commitment not to contribute to the development of what we call fully autonomous weapons, and what the governments internationally call lethal autonomous weapon systems. And that, for me, sparked a kind of a series of conference calls with some of the human rights staff and others at Google. But at the same time, I was also starting to receive email messages from Google employees themselves saying that they were very concerned about this particular project, and that they wanted to support the Campaign to Stop Killer Robots.

What really came to light in April last year through The New York Times reporting was this open letter that more than 4,000 Google employees signed. And it was addressed to their bosses. And it demanded that the company commit to never build, quote, “warfare technology.” A few weeks later, at the beginning of June, Google issued the first set of ethical principles for the company. And these are quite detailed principles. And I looked through them to see if they had heeded the call from the Campaign to Stop Killer Robots. And the principles commit the company not to design or deploy artificial intelligence for use in weapons or in technology that causes overall harm to humans.

That was more than we had called for. That was a pretty big commitment. The trick will be in how it is operationalized, but we welcome that commitment. And a few days before issuing these ethical principles, Google also indicated that it would no longer be participating in Project Maven once it expires—once that contract expired this year. And then a few months later in October, Google announced that it would not be bidding on this much bigger Pentagon cloud computing contract, which I understand is worth billions of dollars, and other major tech giants are still bidding on that.

So that’s the story in a nutshell of Project Maven, and the Google employees’ actions, which just last week were acknowledged with the Arms Control Award by the Arms Control Association. It will be—it was given to these anonymous Google employees. So they’re going to hand it over in April. I’m not too sure how to hand it over. And in the meantime, we’ve been continuing from the campaign our engagement with interested tech workers. And these are tech workers not only in Silicon Valley—the Google ones included—but in Europe, and other parts of the world. People were contacting me from Google offices in Zurich, and Dublin, and Japan, and elsewhere.

It’s—I think it attracted a lot of attention because it was perhaps one of the first visible examples of tech workers really starting to organize. And now we see thousands of employees at these big tech companies pushing back against projects and personnel decisions that they considered unethical. I could go more into the campaign and how we’re working with the tech workers, but maybe just to close with another development from last year which concerns the United Nations. We meet at the United Nations for the diplomatic talks. But last November, on the 100-year anniversary of the end of World War I in Paris, the secretary-general of the United Nations called for a prohibition on fully autonomous weapons.

And I want to just read to you what he said: “Imagine the consequences of an autonomous weapon that could, by itself, target and attack human beings. I call upon states to ban these weapons, which are politically unacceptable and morally repugnant.” So he’s taken the very pure stand that machines that have the power and the discretion to take human lives are politically unacceptable and morally repugnant and should be prohibited by international law. So those are a couple of examples of some of the things that happened last year with respect to the debate over autonomy in weapons systems. And I think they show the momentum is building towards the creation of new international law.

SANGER: Well, we’ll come back to that in a minute. But I think you perfectly teed up Eleonore on the question of what role might the U.N. have here? Because when you think about the early and major arms control agreements of the world, most of them were not done within the confines of the U.N. Some were, but most were not. And yet it is interesting that the secretary-general sort of laid out this marker. So what are you seeing at the U.N.? Is this an area for U.N. commissions? Is it an area for U.N.-sponsored treaties? What’s the landscape?

PAUWELS: Thank you so much, David. Good afternoon. It’s a pleasure to be here in such good company. I wanted to start by sharing a few words of warning and wisdom from the U.N. secretary-general that are part of a report called Securing Our Common Future. It was published this summer. And he’s saying, we are living in dangerous times. New combinations of weapon technologies are increasing risks, including from the ability of nonstate actors to wage attacks across international boundaries. And this quote often makes me think of another sentence from George Orwell, which says: It was an exceptional life that we were living, an unusual way to be at war, if you could call it war.

And so, indeed, at the U.N. governance level there is an acute understanding that we are facing a new form of tech convergence which is simply too powerful for humankind not to use, but so inherently dual use, with drastic and long-term implications for global security in cyberspace. And so there is a conviction that we need foresight—we need foresight and prevention for the cyber era. Think of deep-learning systems that are able to drastically intensify the nature and scope of cyberattacks, or design deep fake simulations based on biometrics that could initiate and fuel conflicts.

The same algorithm can be used to optimize genome editing for biodefense applications. And—(inaudible)—relying on cyber networks of intelligence, are already changing the nature of conflict, but could also in the future help perform precision attacks on populations and biotech infrastructures. We are also building, slowly but surely, an Internet of genomes, bodies and minds, where surveillance and data manipulation may become pervasive. And so increasingly there is an acute understanding at the U.N. of this new permeability between civil and military technologies which is disrupting prevention, denying attribution, and atomizing responsibility, and therefore creating new risks for civilian security.

The diagnosis doesn’t stop here. As we witness complex and ambiguous tech disruptions, we are also facing inside the U.N. a significant trust deficit disorder. And that’s mentioned in almost every speech of the U.N. secretary-general. There are major tensions and dissenting interests between superpowers, such as the U.S. and China, that are engaged in a fierce competition over who controls AI, cyber tech, and biotech. And then other states are now testing new ways of waging cognitive-emotional conflicts remotely. Inequality is also widening for those countries that are tech takers instead of being tech leaders. And some of those are on the verge of being cyber-colonized for their biological data. And others are quickly becoming weak security links in cyberspace. So that’s a diagnosis made at the U.N.

Finally, we are also facing a new corporate order, which is not made of diplomats and lawyers like we’re used to but made of CEOs with global reach. And these large private tech platforms, which have often applied a model of permissionless innovation—we break and we innovate—these platforms are now becoming our last line of defense as dual-use technologies are converging in cyberspace. And in turn, some of these platforms, such as Microsoft, are increasingly interested in defining cyber norms and policies for cyberspace. So in the face of this divide, what role can the U.N. play in bridging national security and private sector interests in the global governance of dual-use technologies?

Within the U.N., there is an increasing commitment to better engage with the private sector from different regions, including Asia and the Indo-Pacific, in the governance of AI and emerging technologies. And you see this through the high-level panel on digital cooperation that was launched by the U.N. secretary-general this year, and his agenda on innovation. So there is a sense of urgency that the multilateral system needs to help build a new social contract to ensure that AI and converging technologies will be deployed safely and—and this is, I think, very important—aligned with the ethical needs of a globalizing world. So there is a realization that the U.N. is in a unique position to articulate an ethical proposal, an ethical proposition for the global governance of dual-use technologies. And so help states, the private sector, and other stakeholders find common ground, define their interests, and define their policy options, so that learning about governance happens without crises and failures.

So with this in mind, the U.N. increasingly is trying to play a governance role, which is deeply needed at the international level. And this role is defined under the following words: to promote inclusive foresight, to anticipate with a diversity of expertise the unknown implications of the combination of powerful dual-use technologies, to be an honest broker in discussions over norms and oversight—which doesn’t really exist at the international level—to monitor and ensure consistency between national, regional, and private sector actors. And finally, maybe the most important, to be—to be a safe space, where we can report governance anomalies, including the misuse—the misuses of converging technologies, so that there is a feedback loop to improve normative governance, normative guidance.

So it’s an agenda that’s about foresight and prevention for converging technologies, prevention for the cyber era. And I think it makes sense for the private sector, because we are at a crossroads. The next generation of scientists and entrepreneurs will have to manage the powerful tension between the need for secrecy and security and an ethos of openness and sharing in AI, cyber tech, and the life sciences.

SANGER: Great. Well, thank you.

Chris, you were among a group of people in Silicon Valley who taught me, as I was doing book research and other reporting, how complicated this gets really fast, right? So let’s start with something simple, like Project Maven. Some of the people who were working on Maven will tell you that most of the software could be used for target discrimination. Which is to say, to make sure that a farmer standing in the field with a farm implement is not mistaken by a drone operator to be a terrorist standing in a field with a rifle. And that you’re using this in an effort to reduce casualties, which is—which, over time, if you look at American drone attacks, it appears that they’ve gotten better, using better technology, at reducing—not eliminating, but certainly reducing—the amount of collateral damage.

In a program like running the Pentagon’s cloud services, some of that’s going to be used, no doubt, for war purposes. But a huge amount of it is just going to be used for ordinary information sifting, the kinds of things that IBM used to build years ago for the Pentagon, and others, in the day of big mainframe computers. So walk us a little bit through how cloudy this gets, and how quickly.

KIRCHHOFF: Sure. Well, I mean, David, you really, you know, got right to it. I mean, you know, the technology that Project Maven was about is actually not dissimilar from what Google has in its Nest security cameras, which is a set of algorithms that can identify activities that the camera sees that might be relevant for a human operator to observe. And this is, again, where—you know, there’s a long spectrum, I think, of how dual-use technology gets employed. And I think it’s one thing to sit back and notice that, you know, artificial intelligence is part of a family of technologies that can be used, in fact, in systems that are kinetic. But that doesn’t mean they will be. It doesn’t mean that the purposes that they’re deployed for will actually be anything other than force protection or defensive.

So one of the things that was interesting, for those of us in the Department of Defense, watching the Project Maven controversy play out, is for us to sit back and realize, hey, you know, this is actually not targeting technology in the way that we think of targeting technology. It has many other uses. And, you know, I’m actually—I’m really proud of the Google employees for taking the stand they did, and for beginning to wrestle with the nuanced debate that we’re all in about technology. Because there’s a spectrum of people that have responsibility for technology. And it starts at the people that are inventing it. But it ends, ultimately, at the people that use it on behalf of our government, our elected officials. And I think everybody needs to be a part of this conversation.

SANGER: Now, one of the arguments you hear frequently in the Pentagon was—and I’ll ask Eleonore and Mary to come back and comment on this as well, is it’s one thing for us to set a good moral line about how we’re going to use this or not. But unlike in the nuclear arms race, these are mostly technologies where you have no ability to verify how they’re being used. And at the time they’re being developed, the people developing them may not know how they will ultimately be used.

So how do you design a set of principles here that you would have any confidence the Russians, the Chinese, the Iranians, the North Koreans, anybody else who the U.S. might define as an adversary, would be making use of them? And how do you answer those at the Pentagon who would say: If AI shortens the timeframe for an adversary to react, we have no choice but to use it to be able to shorten our timeframes as well, or you’re essentially disarming?

KIRCHHOFF: Well, we’re a democracy. And we have made commitments to human rights and to ways of waging war that are—that are—keep those principles front and center. So that’s, you know, number one. Number two is that, you know, the Pentagon is actually now in the process of developing its own set of principles on AI, because it realizes—I mean, people in the military realize that this is going to be an issue with us for a long time. But I think it’s going to be difficult.

I mean, if you look at Google’s AI principles—which is a really noble first attempt to articulate a set of ideas about how we should think about AI—the first principle that Google listed was that AI should be socially beneficial. And I think that’s right. But, you know, it turns out that security is actually a public good, socially beneficial. And AI is going to be a huge part, in the future going forward, of ensuring security, both domestically and abroad. So, you know, in a way, Google’s own principles, if you read them, at least to my mind, would actually call for Project Maven to go forward, not to be stopped.

SANGER: Interesting. So, Mary, when you look at this—I mean, this is the ultimate dual-use technology, as we discussed with Maven. You could use it to avoid civilian casualties. You could use it to improve targeting. And you could be doing both of those at the same time.

WAREHAM: Yeah. I mean, maybe just to be clear, the Campaign to Stop Killer Robots calls for a preemptive ban on fully autonomous weapons. Preemptive means future weapons systems, not the current ones today. So, you know, prohibiting armed drones is not within the scope of the Campaign to Stop Killer Robots. And different members of the coalition have got different opinions on the impact of drones, which some, I guess, call semi-autonomous—

SANGER: But it would stop a fully autonomous drone—a future, fully autonomous one—would it, or?

WAREHAM: The tech?

SANGER: Under your principles?

WAREHAM: What we’re seeking is international—we’re seeking new law. And we’re hearing a lot of talk at the moment about principles. Principles are fine. You know, Google’s issued its principles. We spent last year at the United Nations in August with the diplomats while they negotiated principles from the talks on lethal autonomous weapon systems. And they came up with a big set of nonbinding principles. And one of the first kind of findings with that is that international law applies to new weapons systems, including lethal autonomous weapon systems. And it also affirmed the importance of the human element. And for us, that showed that governments are willing and ready to negotiate on the topic of killer robots. The problem is that they were negotiating nonbinding principles for a final report of a conference, and not the international treaty that we seek.

We seek new international law and national laws because that’s way more binding and permanent than ethical principles, which can be forgotten, overwritten like policy, you know, disbanded, and that kind of thing. We know that there are all sorts of challenges involved with verification and monitoring of such an international treaty, but that’s no reason for not attempting this. And it has been done before, as you might have seen from many of the open letters that have been issued since 2015 from the—from the technology experts in those—and the AI experts in those communities. They talk about how they’re a little bit like the nuclear scientists and the chemists of last century, who wanted to continue their work in those fields but did not want to see it weaponized. And that resulted in the creation of the Chemical Weapons Convention. We now have the Treaty on the Prohibition of Nuclear Weapons. And these provide the normative frameworks to deal with some of the challenges that have been raised.

SANGER: Great. So Eleonore, last question for you before we open this up to everybody else. So the U.N.’s first sort of toe in the water in this area was the creation of a council of experts who were looking mostly at cyber, less at artificial intelligence issues. And for a year or two, they came up with some good principles. The U.S., the Chinese, the Russians all signed on. Then last year the efforts seemed to kind of fall apart as they met again, tried to push these forward, and basically sort of backtracked—even on some of their initial agreements. What lessons are to be learned from how the U.N. has approached this kind of thing before?

PAUWELS: Mmm hmm. Increasingly I think the strategy will not necessarily be to get a consensus at the interstate level, but actually have the U.N. invest in almost a new ethos of foresight and prediction—so prediction for prevention. I had a conversation with engineers at DeepMind this summer. And the most interesting part of our discussion was this notion that we as a species are really bad at anticipation. We just don’t do foresight. We don’t do it well. And I think most—maybe the most crucial thing at this point would be to be able to engage the private sector, national policymakers, experts in academia and civil society in that process of foresight. So anticipating the implications of dual-use technologies.

And I say so because one element that I think is the most interesting here is this permeability between civilian and military technologies and the increasing collusion between civil and military contexts. So for example, we are facing a drastic phenomenon of decentralization in converging technologies, where you can, these days, use automated biolabs. I call them intelligent and connected. They are biotech labs in the cloud where you can learn about 50 types of bio experiments just from your computer anywhere in the world. And you can basically harness those processes to piece together benign DNA fragments into sequences that could lead to the production of bio agents of concern. And using those systems, you can, to some extent, evade some of the norms that we have developed at national levels and even international levels.

So this decentralization is really showing you how, you know, technological experts will not be able to truly opt out of the security consequences of what they design. And so we need to have them upstream in the process, in a conversation with policymakers and other experts, doing foresight. And if we can then shape the design of those dual-use technologies so that we have a form of—you know, a form of non-contamination, a way to be able to target the decentralization process, I think that would be very interesting.

SANGER: Well, we’ve got about 25 minutes for all of your questions. There are microphones. Please wait until the microphone comes to you. A couple of things to know: First of all, as I should have noted earlier, all of this session is on the record. So that means your questions are also on the record. And when you do get the microphone, please stand, and tell us your name, and your affiliation, and really try to make sure that it’s one question.

Great. So who would like to start off here? Yes, sir.

Q: Thank you. John Yochelson, Building Engineering and Science Talent.

I wonder whether you could compare the issues that surround culture and the issues that surround values as the—as the key divide between our national security establishment and Silicon Valley. It sounds, from this conversation, like it’s about values. And we haven’t said much about culture.

SANGER: Who wants to pick that up? Chris?

KIRCHHOFF: I mean, you know, the culture of tech is—(laughs)—you know, awful different than the culture of the Pentagon, that’s for sure, right? I mean, we don’t have dogs, and espresso bars, and, you know, free laundry in the Pentagon, last time I checked. But you know, there’s actually a lot that the two communities have in common. They’re very mission-driven. And people are there to do big things, and to do things they care about, to do things that are important. And one of my regrets, having now lived in both cultures, is how infrequently they actually meet. Which I think speaks to an even larger challenge for the nation that I think Mary and I would both very firmly agree on, which is that more dialogue, not less, is needed. And we need to invent ways to make sure that dialogue happens at a much more rapid pace.

Because the reality is all the issues that we face as a nation are becoming much more bound up with technology. And many of the people here in Washington that are in charge of making policy on those issues don’t have the benefit of having on their staffs or in the room people that really have had lived experience with that technology. So in that sense, you know, DIUx has been a wonderful experiment, right, to take employees from Google and add them to our staff, and then to send them back, and to open some of those doors for conversation between the cultures.

WAREHAM: Maybe just to—

SANGER: Sure.

WAREHAM: I just wanted to read maybe a little extract from what one of the tech workers said at an event we did at the U.N. last August, which is that, you know, the concerns that Google employees raised were not Google-specific but affected the entire industry and workers. And they weren’t just about the military applications of the tech and its use but about use in domestic policing, and surveillance, and in other circumstances. And the demands that the workers had were for transparency, for public oversight, for employee representation, for decision-making power. You know, they wanted to stand up for the rights of all people, and to give those on the receiving end of this technology control over their own lives.

And those are quite lofty—you know, that’s quite a lofty message to give. But I think it shows the breadth and depth of the concerns here. It’s not just about the military and the national security side of it. It’s really much, much broader than that. And it’s not just Google. It’s across the board.

SANGER: I would just add that while it’s across the board, every company’s got its own distinct culture. So you’ve seen Google do what Google did. Microsoft turned out a set of principles that said it would deal with the Pentagon, but that it would also call them out if it believed the technology was being used wrong. And then Amazon, which currently does cloud services for the CIA and others—is probably the leading contender for the Pentagon project—has been much more silent on these issues.

WAREHAM: Mmm hmm.

SANGER: Some other questions. Ma’am, right here.

Q: Thanks. Hi. I’m Jen Leonard with PACE Global Strategies.

You touched on it a bit by expanding the conversation beyond Google’s role in this. You mentioned Microsoft, Amazon, et cetera. So help school me. Some of this is new to me. What does it say about the industry, the sector, that at least in the case of Maven, which you cited, the conversation began with basically an entrepreneurial journalist, probably some source documents, that really spurred the very important discussion whose chronology you walked us through? I mean, these are engineers who see a problem and want to fix it. But the ability, the capacity, the willingness to put that problem-solving into a political context, an ethical context, are they wired to do that? Is this learning in real time?

And we’ve also focused on a very U.S.-centric conversation. Can you—can you broaden the aperture and talk about—I mean, I had the pleasure of going to the symposium in Silicon Valley a few weeks back that CFR organized. Lots of talk about China and the race and the competition there. So where does this dialogue fit in that global context? Thanks.

SANGER: Who wants to take that first? Sure.

PAUWELS: So I think I will bridge between two questions, the previous one and this one. There is an intrinsic tension between the need for secrecy in national security and this notion that we need to share—be open, share algorithms, source codes, training data sets in a global tech industry. So I think that is what you are facing there in terms of value and mission. And increasingly, we see that in the tension between Silicon Valley and China, where there is to some extent a willingness from Silicon Valley to go work with other companies in China, and with engineers, and do a form of knowledge—transfer of knowledge, and knowledge sharing. And at the same time, you have on the China side a much more closed market where data sets are not shared. There is not the same form of reciprocity in terms of flows of data.

And also, an aggressive strategy, really, you know, reinforced by the government, toward a form of cyber colonization, to get as much genomic data, biological data, and data about ecosystems in different places of the world for economic gain in China. And so there you have, again, this tension between technology for public good, for global public good, and a technology that serves the interests of one nation with strategies of acquisition and predation.

SANGER: Anybody else on this one?

WAREHAM: I mean, perhaps just to say that the Campaign to Stop Killer Robots last year wrote to both Microsoft and Amazon seeking to have a similar dialogue to the one that we had with Google. And we’ve not received any replies to those letters yet. But we follow carefully and look at what Brad Smith is saying in all of his blog posts about his desire—you know, the concerns that AI raises for weapons in warfare. And he’s actually talked about the need for new law and policy. On the Amazon side, we’ve seen Bezos describe the prospect of fully autonomous weapons as “genuinely scary,” and he’s suggested a new treaty to regulate them. That was why we wrote, you know? So we’re kind of very much a single-issue campaign here, as you can tell. But we are seeking to have a meaningful dialogue, not just with the tech workers but with the—from the—from the very top-down as well.

There’s been a few questions about Russia, about China, about other countries that are investing heavily in AI and military applications of AI. All I can say is that those countries, they’re also participating in the international diplomatic talks over killer robots. And—but none of them are terribly progressive. But the campaign—we just have been reviewing a new public opinion survey from 26 countries around the world—including China, Russia, Israel, South Korea. And in almost all of those countries—the results will come out in a week, so I’m kind of giving you a sneak preview—but almost all of those countries, a majority of the people who were polled oppose the development of lethal autonomous weapon systems. And the ones who were opposed, opposed on—the most kind of popular reason why was that they did not want to see machines take human life, the moral argument, and the fact that such weapons would be unaccountable. So I think we’re starting to make inroads into public opinion, not just in the United States but around the world.

SANGER: Sir. Right there on the aisle. Yeah.

Q: Thank you. Ted Kassinger with O’Melveny and Myers.

You know, this discussion is obviously far broader than killer robots, as the Google principles refer. And your organization endorsed those—and it was more than you asked for. It’s also broader than weaponization. It’s law enforcement, companies being resistant to unlocking devices of a terrorist, for example, or producing documents. It’s a broader debate. If Google’s principles are the ethical principles, what is unethical? How do you draw the line? And why is it more ethical to refuse to support endeavors in the national security community than it is to support those?

SANGER: Chris, do you want to take a first shot?

KIRCHHOFF: Yeah. I think we should be very clear-eyed about what’s happening. What’s happening is that because the world is a much wider place because of technology, the liberal world order, the nations of the U.S. and Europe, are not enjoying the kind of technological supremacy that they have had for a long time. And this has big implications for the ongoing competition between democracies, on the one hand, and autocracies on the other. Just look at the way that, you know, China and Russia are both using AI to suppress their own populations.

So one thing that we certainly cannot afford to have happen as a nation at this moment is to have the people in the tech companies, that are the smartest in this technology, eschewing any contact with these very difficult issues and walking away. It would be like Detroit walking away in 1941 from the war effort. It just can’t happen. So I’m hopeful that as we go forward in these debates people in tech companies are able to understand that these debates are playing out in a global context, where we are in a very competitive world.

SANGER: Sure.

WAREHAM: Perhaps just to say that when I was speaking about Google’s ethical principles and the campaign, it was that the campaign welcomed the weapons part of that. The broader—the ethical principles, though, address all sorts of different aspects raised by AI. And my organization, Human Rights Watch, and others, have got some big questions about other parts of them. And other colleagues in my organization could speak much better to the surveillance concerns, to the development and use of facial recognition software and that tech, and how that’s being developed. So I don’t want anyone to kind of go away and think, oh, we’re all—we’re all good now that Google has issued these ethical principles. We’re not. There’s still a lot of work to do. But at least we’ve got a document to start the discussion with when it comes to that company.

SANGER: OK. Let’s see here.

Q: Hi. I’m Audrey Kurth Cronin, director of the Center for Security Innovation and New Technology at American University.

It’s pretty easy to imagine how artificial intelligence would be used once a war is underway. But what I’m interested in is to what degree is narrow artificial intelligence, so not fully developed killer robots but some of the applications that are currently in place—to what degree is that either stabilizing or destabilizing, in your opinion, with respect to the outbreak of war?

SANGER: Sorry, the last word—respect to the?

Q: The beginning of a major war.

SANGER: The beginning. OK. So early days. Chris, why don’t you start with this? It’s a subject we’ve talked a lot about, particularly on the destabilizing efforts—effects of having shorter decision space?

KIRCHHOFF: Yeah, sure. So I’d say two things. I mean, one is that systems that are relying upon artificial intelligence can be very unstable. And we’ve seen this with Tesla automobiles, for instance, that are learning constantly how to drive better on roads. And every now and then the cars will encounter a condition that the operator is not equipped to handle, and something really bad will happen. So you can imagine when you now start using artificial intelligence in systems that are involved in war and peace, that the fundamental instability that can be a part of AI systems becomes a really crucial concern, which is why modeling and safety will be a crucial part of any discussion of artificial intelligence—really, in any context, but particularly in the context of national security.

And, you know, again, one of the things we cannot afford to have happen is that the state-of-the-art modeling and safety technologies are all on the commercial side. You know, it’s Tesla and the firms that they contract with that are far ahead of really any of the national security agencies. And so this will be, as a nation, something that we need to import into our national security complex.

But I would say another thing about war, right, which is, you know, the Pentagon exists to prevent war, to deter war, to make sure that war never happens. And war, if it does happen between advanced states, it’s going to be so violent, so catastrophic, it’s going to happen so quickly. It’s not going to be anything like we’ve ever known before. It’s going to be the worst possible outcome we could ever get to. So I think we need to do everything we can to never allow war to break out. And artificial intelligence is going to be one of the key technologies that will advance what the Pentagon calls indication and warning, using, for instance, artificial intelligence to sift through satellite data to pick up any indications that there might be, for instance, a missile launch by North Korea, and then taking steps to prevent that from happening before it does, right?

So these technologies, you know, again, are very nuanced, very complicated, have many applications, and may end up having applications that stop war.

SANGER: Eleonore, can you pick up on that? In the discussions at the U.N., do you hear much discussion of the deterrent effects of these technologies?

PAUWELS: Yes. I mean, in terms of how AI could now be used in cyber defense and cyber offense, one of the issues is that adversarial neural networks have a tendency to show aggressive behaviors and kind of escalate behaviors when they—when they are used to automate cyberattacks. So you could be facing a problem where the concept of proportionality no longer really stands. In terms of deterrence, I mean, that’s not really a strategy we can use in the same way. But algorithms could increasingly help you identify where the source of the aggression comes from. Not necessarily the identity or the attribution as we were doing it in the past, but they could help you patch some of the—some of the issues you are facing. So it’s another form of—it’s more prevailing. Prevailing by foresight and having strong algorithms on your side. And really being able to do a full attribution network thinking.

So if I may, I wanted to bridge the last two questions. I think we are focusing very much on this notion of military use of those converging technologies. But you could think of so many other ways that either data manipulation—you manipulate data using algorithms in a genomics database—or you, you know, use those automated attacks for propaganda, for surveillance, for cognitive-emotional conflict, manipulating public perceptions inside foreign countries. So political security and a form of digital biosecurity are at stake here when you can actually use data optimization and predictive innovations.

SANGER: Yeah. I think you raised a very interesting point, which is we tend to think of the kinetic when, in fact, in the Russia case we saw influence operations. We’ve seen cases of data manipulation. We’ve seen data manipulation against nonmilitary systems—Iranian centrifuges, and so forth, and so on. Also intended, if you asked the designers, to do what Chris suggested, which is deter a conflict and push out a capability, rather than bring one down right away.

We have about six or seven minutes. So I’m going to grab three questions and let the panel here, in that wonderful Washington way, decide which ones they’re going to answer. (Laughter.)

So the gentleman there, and then let’s see. There were some hands over on this side. Maybe not. And then the gentleman in the front. And in the back there, behind you. Yeah.

Q: Thanks. Zach Biggs with the Center for Public Integrity.

I wanted to ask, if we look at a little bit of the history of the attempted reach across the country between the Pentagon and Silicon Valley—if you look at, for instance, the Defense Digital Service. When that was started the idea was to convince Silicon Valley engineers to do a tour in the Pentagon. And over time, they found that they weren’t as successful at times in doing that. And so they instead were trying to find military officers with coding skills who could be helpful for some of the applications. So when the Pentagon doesn’t find what it needs out of Silicon Valley, it finds it internally in the military. What do you think it means if Silicon Valley decides not to work with the Pentagon in terms of the way that AI technology’s developed, for military weapon systems? What does it mean if they have to find solutions internally or with traditional defense contractors, instead of working with Silicon Valley?

SANGER: OK. So that’s the first question. And, sir, you’ll be the second.

Q: Thank you. I’m Hani Findakly, Clinton Group.

The question really is for Christopher. You mentioned the fact that private technologies dominate most technology, and that 96 percent of it is commercially available. Only 4 percent is classified. My question is, does this apply to technology in general, or is it the application of technology, that division between the 96 and 4 percent? And, two, how does this compare to, for example, places like China, Russia, other places?

SANGER: OK. And sir?

Q: Adam Pearlman, former Pentagon lawyer.

And both of those questions are much better than mine. But of the topics that we’ve hit today, nobody’s talked supply chain—supply chain security. And is it something that we should be concerned about for consumer electronics made overseas, and what that kind of market share might do in terms of national security concerns?

SANGER: OK. Three great questions. Chris, I think we’ll start with you on the first one, which was the Defense Digital Service, what we learned from that, and whether the Pentagon can provide this coding themselves. If they could, I don’t think DIUx would necessarily have been around.

KIRCHHOFF: (Laughs.) You know, our biggest success at DIUx was actually a software project involving a tool that allows much more efficient air refueling. And it’s an amazing story of how just a few programmers from the Air Force, working with a couple of people from a software development firm, were able to, for a million and a half dollars, save the government hundreds of millions of dollars in keeping tankers up and flying in the fight against ISIL. So I think the military has woken up to the fact that software engineering is actually a core skill. You know, it’s going to be silicon, not steel, that wins the next war. And so that’s the game we’re playing. And we ought to have in-house talent to do it.

I think the Defense Digital Service actually had a lot of successes that have been really important. But as Zach mentioned, I mean, there is this divide, right, in terms of attracting people, truly, from the—from the—sort of the real private sector, the tech firms, who have a level of skills that’s, you know, 10X, maybe even 100X, your average programmer who hasn’t had that experience. And if we fail to attract them, if we fail to develop easy ways to hire them, to respect their unique skills, to bring them in laterally for a couple years, and to put them to work on our hardest operational problems, to get them a security clearance quickly so they don’t have to wait a year and a half to actually do something real—if we don’t do that, make no mistake, we will have a weaker military because of it.

SANGER: You know, you remind me of a story I heard when I was out doing some reporting, where one of your colleagues said to a head of a small Silicon Valley startup—the startup had said to him, you know, I would give you some of my coders, but I don’t think they would get a security clearance. And he said, you mean because they smoked weed when they were in college? And he said, no, because they smoke weed while they code. (Laughter.) And he said, we bring it to them if it works. (Laughter.)

OK. There was a good question about the application of the—of the 4 percent rule that you have. Anybody here can take that on? Chris, is that one—you’re the one who cited the number, so?

KIRCHHOFF: Yeah. And I would just say that, look, you know, one of the challenges for all of us is that technology is really democratizing. So it used to be the case that if you wanted to exert power in the world you built an aircraft carrier, or a fifth-generation stealth fighter. And now if you’re a small group of people, you can go on Amazon, you can buy a drone, you can strap a grenade to it, and you can cause a real problem, whether you’re ISIS in the Middle East or whether you want to go, you know, make an NFL football game more exciting than it ordinarily is.

So technology is democratizing. And it’s not just in kinetic warfare. It’s in bio. It’s in many other areas. And so we, I think, as a nation, both law enforcement and national security, are facing a whole new world. And it’s going to take deep cooperation from the people that know these technologies the best to do all they can to keep—to keep our nation sovereign, to keep us all safe.

SANGER: Mary, can you take on the supply chain question?

WAREHAM: No.

SANGER: No, you can’t. (Laughter.)

WAREHAM: Not my—sorry. It’s not my thing.

SANGER: Eleonore? Can you talk a little bit about it? I’m sure the issues come up some as people have dealt with the sovereignty issues, and the—at the U.N.

PAUWELS: I was thinking about it more in a technical sense, but I don’t know if that’s what was meant. You know, as the convergence happens, you are going to have the use of algorithms and automation increasingly to endanger the different types of supply chains we have. So biotech, medical, you know, you can think of any form of data optimization and use of data we do. So those supply chains are probably not always taken into consideration, because they don’t necessarily belong to the traditional military perspective. They are part of national security, though. But that’s something—you know, that’s a field where more and more cyber experts are trying to start cooperating with biosecurity experts, for example, and AI security experts who understand how we protect certain types of infrastructure and supply chains.

SANGER: Great. We have about one minute for any final thoughts.

WAREHAM: I mean, I guess I’ve been around the block too long, because I’ve worked for the last 20 years on particularly indiscriminate and inhumane weapons, with people like General Guard (sp), who’s in the audience here. And in the United States, and especially in this town, we always hear the high-tech fix. You know, the high-tech fix on landmines is to create some self-destructing landmines. And if we all just used those ones, everything would be better. The answer from the U.S. on cluster bombs and all the civilian carnage that was being caused by those weapons was to create sensor-fused weapons—a new, better type.

You know, so when I hear these arguments on the AI that we’re kind of—you know, that it will reduce casualties, it will do all of these wonderful things, we always kind of are dealing with that with a certain degree of skepticism. But, you know, we’re not saying to the Pentagon, don’t use AI. We hear a lot about the dirty, the dull, the dangerous tasks that artificial intelligence can be helpful with. And we want, however, for it to be beneficial for humanity, which is where we have to question what happens when AI is incorporated into weapons systems. And we have to look at the regulatory framework, which currently is lacking. Which is why we call for new international law.

SANGER: Great. Well, we are not out of questions, but we are out of time. (Audio break)—and appreciate your questions and thank our panelists as well. (Applause.)

(END)
