Symposium

Will Artificial Intelligence Curb or Turbocharge Disinformation Online?

Wednesday, November 14, 2018
Richard H. Ledgett Jr.

This symposium was hosted by CFR on November 14, 2018. "Will Artificial Intelligence Curb or Turbocharge Disinformation Online?" convened policymakers, business executives, and other opinion leaders for a candid analysis of artificial intelligence’s effect on democratic decision-making. 

Session I: Moderating Online Content with the Help of Artificial Intelligence
Robyn Caplan, Tiffany Li, Sarah Roberts
Joel T. Meyer

This panel examines AI’s role in moderating online content, and its effectiveness, particularly with respect to disinformation campaigns.

SEGAL: (In progress)—real, I am not a deep-fake hologram—(laughter)—unless I say something wrong, and then I was totally manipulated and it was all fake news.

I just want to welcome you all to today’s program. This is the fifth symposium we’ve done, building on last year’s, which was on Russian election interference and securing the election infrastructure.

Please take a look at other Council products. On Net Politics, we’ve covered a number of these issues. And last month, we published a great cyber brief by Bobby Chesney and Danielle Citron on deep fakes—Bobby’s going to be on the following panel—so hopefully you guys will take a look at that.

And I want to thank the D.C. staff, Katie Mudrick, Marisa Shannon, Stacey LaFollette, and Meghan Studer, for helping us out. And on the New York digital side, Alex Grigsby, who helped put everyone together for these conferences.

I think great timing, you know, just a week after the election. The evidence, I think, on what we saw is pretty mixed. Some success from the companies on takedowns, but clearly, lots of disinformation happening, most of it seemed to have been domestic, not from Russians, but reports out that the Iranians are learning and adopting new methods. So I think we have a lot to discuss today and really looking forward to it.

Please try to stay all day. I think we’re going to have some great panels. And if you have any ideas, input to the program, please find me during the day.

So thanks, again, for everyone for coming and thanks to the panel.

And I’ll turn it over to Joel now. (Applause.)

MEYER: OK. Thanks very much, Adam.

And thank you all for joining us this morning.

Welcome to the first session of today’s symposium, titled “Moderating Online Content with the Help of Artificial Intelligence.”

We’re joined by three terrific panelists today. First, all the way on the far side there, is Robyn Caplan, researcher at Data & Society. Then we have Tiffany Li, who leads the Wikimedia/Yale Law School Initiative on Intermediaries and Information. And next to me, we have Sarah Roberts, assistant professor in the Department of Information Studies at UCLA. I should point out, her book on commercial content moderation is coming out in June. It’s called Behind the Screen: Content Moderation in the Shadows of Social Media.

I’m Joel Meyer, I’m vice president at Predata, a predictive analytics company. And I’ll be moderating this discussion today.

So I’d like to start off with Sarah, if we could. What is commercial content moderation? Who does it? What challenges is it trying to solve?

ROBERTS: Thank you. Good morning, everyone.

So with the rise and the advent of these massive social media platforms, the practices of adjudicating and gatekeeping the material that is on them has really become something that has grown to industrial scale. And these practices have their roots in, you know, early forums of self-organization and governance online when users like me, celebrating my twenty-fifth year this year on the internet, used to argue vociferously about what should stay up and what should be taken down. But the difference was that those decisions were often very tangible and present.

When social media became a powerful economic concern, and certainly even more so a political one at this point, it coincided with these practices stepping to the background. And at the same time, with the amping-up of scale, the massive scale of user-generated content that flows onto these platforms, the companies that owned them, as you can imagine, certainly needed to maintain some sort of control. The primary impetus for the control, I would—I would say, is actually one of brand management.

So in other words, most people wouldn’t open the doors of their business and ask anyone in the world to come in and do whatever they want without a mechanism to sort of deal with that if it runs afoul of their standards. It’s the same thing here, the only difference, but the key difference I would say, is that the business model of these platforms is predicated on people doing just that, coming in, expressing themselves as they will, which is the material that drives participation.

So this practice of commercial content moderation has grown up to contend with this massive, global influx of user-generated content. And what it means is that these companies who need it, from the very biggest to even small concerns that have social media presences, typically need a cadre of human beings—and also, of course, we’re going to talk about automation—but human beings who review, adjudicate, make decisions about content. That seems like a benign sort of thing when we put it that way, when we use words like “content” and “adjudication,” but as you can imagine, that quickly becomes a huge complexity.

MEYER: Right. And I’d like to ask Robyn to jump in here because not all these platforms are the same, right? We tend to use this blanket term “social media” to cover platforms on the internet that really serve very different purposes and operate in very different ways. So how do different platforms handle this challenge of commercial content moderation?

CAPLAN: OK. And this is—I guess I’ll do a plug for our report. We have a report coming out from Data & Society today called Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches. And what we did was we did an interview study with policy representatives from ten major platforms to see how they do policy and enforcement differently across platforms.

What we found was that platforms, first, they are trying to differentiate themselves from each other in particular ways. So they are working to understand how their business model is different, their mission, and the scale of the company.

And so we found that companies are really different from each other in terms of their content moderation teams. There were three major types that we found. The first we referred to as artisanal because that’s a term that’s actually coming from the industry. And this is the vast majority of companies. These companies do not have tens of thousands of workers like Facebook or Google do. They have maybe four to twenty workers that are all—that are doing all of the content moderation policy and enforcement and often within the same room, located within the United States. So these are incredibly small teams. At the same time, these companies are serving millions of users, but they may be able to get away with having some small teams because they have fewer flags or needs, in terms of content that needs to be moderated.

That is vastly different from the industrial organizations that we normally speak about, the Facebooks and Googles. Those companies have tens of thousands of people, often that they’re contracting offshore. There is a huge separation in terms of policy and enforcement, so policy is typically located within the U.S. or Ireland, enforcement can be really anywhere in the world. And that means that the rules that they’re creating are different, so they are often relying on kind of universal rules, whereas for the artisanal companies they’re looking at things from a case-by-case basis and building rules up from there.

At the same time, Facebook, according to our interview subjects, actually started off as an artisanal company. They started off with twelve people in a room and that’s how the initial rules were set.

And then the third type is the community-reliant, so this is Reddit and Wikimedia, where there is basically kind of a federal model. There is a base set of rules that are set by the company, and then there are—can be hundreds of thousands of volunteer moderators that are able to establish rules in their own subcommunities.

So this is kind of the main way we’re trying to differentiate between these companies. These companies are also working to differentiate themselves from each other, mostly because they’re trying to establish themselves as not Facebook, as there is, like, another threat of regulation coming down.

MEYER: So, Tiffany, I’d like to ask you to dig a little deeper on one of the points that Robyn just mentioned, which is that these are global companies, they operate around the world, most of them, many of them do, certainly the big ones, but different countries have different views that are reflected in their laws and regulations on what free speech protections are available, what type of speech is regulated, and how. So what are some of those key differences? And how are social media platforms handling that in their content moderation practices?

LI: It’s a really great question. So I think when I think of content moderation issues globally, there are a lot of challenges, but there is also a lot of room for opportunity. So on the challenge front, the main issue, I think, for tech companies or the tech industry is simply the fact that the internet is global.

So if you are Facebook or if you are a small startup that allows people to host content, you have to comply with basically every single country’s regulations and these can differ on a wide-ranging scale. You can have regulations from the U.S., for example, which very much protect free speech and free expression. You have regulations in the EU that have very strong laws on things like hate speech or extremist speech that require very quick or fast content takedowns. And then you have governments in illiberal regimes which often request takedowns of politically sensitive content or other content that we may think should be allowed under general principles of free speech.

So it’s very difficult for companies to navigate this, but there is a lot of opportunity as well. I think there’s a lot of opportunity for collaboration. We see this with companies working together either within industry or with industry and with government for projects like taking down and tracking extremist content and extremist movements online. This is a great opportunity for people to really collaborate and, I think, really make some effective change.

Broadly speaking, I think the most important thing—and I’m sure you’ll all agree—is that the promise of the internet was to allow for online speech. And on a worldwide level, there are still many places where access to information is restricted or online speech and online free expression is restricted. So all of these companies right now have the opportunity to create global speech norms through their content moderation practices. And I think that’s an opportunity that we can’t forget still exists, even if there are a lot of problems and even if there are issues with conflicting laws around the world.

MEYER: And how do different platforms actually operationalize their expansion into these new markets? There have been some notable stumbles, but also, as you point out, a lot of progress being made. Can you give maybe some examples?

LI: So I think one of the first stumbles I think of is the issue of localization.

And I think, Sarah, you’ve written about this, the problem of finding moderators for every single language and for every single region or nation.

So there is—there are types of content or types of speech that may be problematic in one region that someone from the U.S. would never understand. For example, likely the moderators who are based within the United States, the few of them who exist, may not understand slang in the Philippines, for example. So they might not understand what would be considered harassing speech in the Philippines versus in the U.S. So one issue is really just scale, right? Having moderators of every single language in every single region who are able to understand the local norms and the local languages.

MEYER: Robyn, is this a challenge that some of the smaller artisanal companies are able to manage? And if not, what is their solution?

CAPLAN: So I think this is a challenge that all of the companies are struggling to manage right now, even the bigger ones. So Facebook and Google by no means have moderators in every single language with the cultural understanding to be able to understand what’s going on in that region. In many cases, they don’t have offices in every area that they’re operating in as well.

For the smaller companies, this is a problem that’s insurmountable. They are located primarily within the United States, if not—actually, I think all of the companies that we spoke with, all of their workers are located within the U.S. But what they do to be able to expand their capacity is they establish relationships with academic institutions and NGOs in different areas that they’re working. These are often informal relationships, so they are going to these academics and presenting them with a problem. Mind you, they have a much smaller set of complaints and concerns to deal with so they can do this, but they are reluctant to formalize these relationships. And that might be something that they need to do if they continue operating in these regions.

MEYER: So, Sarah, Robyn noted that for the artisanal approaches, the content moderators are mostly or maybe wholly located here in the United States. Obviously, that’s not the case for some of the bigger ones. Can you describe some of the ways that, at the more industrial scale, these companies approach content moderation, where it’s located, and what are some of the issues that that gives rise to?

ROBERTS: So even at the—you know, again, I guess we’ll just call them the Facebooks and Googles because that’s what we’re all thinking about when we think of the big guys. Even for these very large firms with a massive global footprint, or perhaps even more so, the issue of labor, adequate labor, is a problem. And so they utilize a strategy of putting together sort of hybrid solutions to attend to that, which means that they may have people in the U.S., and they typically do, and the work sort of gets broken out or triaged in some way based on its complexity, perhaps, or the perceived nature of the problem with it.

And then this strategy of employing people around the globe in a variety of ways is brought to bear. So it may be that these are full-time, full employees, direct employees of the firm sometimes who are doing this work, but more typically they’re contractors, whether they’re contractors residing in the United States or contractors somewhere else in third-party call center-like operations in places like the Philippines, India, Malaysia, and other places. Ireland as well was mentioned.

But also, you know, even today, you will find instances of the very biggest companies relying on other forms of labor, such as the micro-labor platforms like Amazon Mechanical Turk or Upwork or some of these other—these other platforms you may have heard of. And what’s very interesting about that is that, you know, there are sort of these secondary and tertiary content-moderation activities going on now, too, in addition to frontline decision-making.

One thing that some folks reported to me, who primarily do their work on Mechanical Turk, is that they started getting these seemingly preselected datasets of images to adjudicate and what they realized they were doing was actually training machine—doing machine-learning training on datasets, which would then be fed back into systems that would ideally automate some of that work. So not only is there a hybridity around how the labor is stratified and globalized, but there’s also this hybridity and interplay between what humans are doing and what the aspirational goals for automation are.

MEYER: Well, that’s a perfect segue to start considering how artificial intelligence can help with this challenge. We’ve just covered a lot of the challenges and complexities that are involved in this.

I think, Sarah, you mentioned the dramatic growth in the scale and the velocity of user-generated content on these platforms. You know, to me, that certainly screams out for some kind of artificial learning—artificial intelligence, machine learning-type of technology to be applied.

Robyn, maybe you could start out by helping us understand, in what ways is AI currently being used in this challenge? And how effective is it?

CAPLAN: OK. So AI and machine learning is currently being used for three different content types in particular. So the first is spam, the second is child pornography, and then the third is what’s referred to as kind of terrorist content.

For the most part, every company is using automation of some sort. The smaller companies will use automation around spam in particular and child pornography. For the larger companies, they’re really using it around these areas.

The smaller companies are limited in their use of automation in content types beyond that. So as companies try to tackle issues like hate speech and disinformation, the vast majority of companies we spoke with said that they do not use automation in those areas, that everything is looked at by a human.

For the larger companies, they say something similar, but we do know that automation is being used to detect these kinds of content. And we know that because of transparency reports that have been put out by companies like Facebook, which have said that they’re using detection technology, but with limited success when we have content like hate speech or harassment. Or even, you know, graphic violence, I think they’re actually using automated detection technology and they’re fairly successful with that. We’re seeing it in other—but it’s rarely used to remove content. So what you have is automation being used to detect content, and then it’s going to human review.

We do see automation for other purposes, though. So for YouTube, this process of demonetization where they’re taking revenue away from some users that they see as not creating content that’s advertiser friendly, you see mass automation in this area. And then what you see is kind of these backwards reviews where creators then can request a manual review after their content has already been demonetized.

And what’s very interesting about this is that creators, even though there is very little chance that they’ll actually get their revenue back because most of the revenue is made in the first twenty-four to forty-eight hours, they’re fairly invested in doing this manual review because they want these models to be better. So they think that that information will then be used to train the models. So companies like YouTube seem to be much more comfortable using automation just to remove revenue than to remove content.

MEYER: Tiffany, what are some of the legal issues involved in companies not only being in a position to regulate speech in this way, but then kind of deploying artificial intelligence technologies in a way that may not be well understood by the users?

LI: So I think the first issue, generally speaking from the United States—I’m sure everyone here in the room knows we have a very strong culture of free speech and we have our First Amendment principles that are highly valued within the states. Now, this is not necessarily the case abroad. Europe, for example, although Europe has, you know, very strong democratic principles and obviously values human rights, like free expression, does not generally have the same free speech culture that we have, the free speech maximalist culture that the United States has.

So what happens, legally speaking, is companies will use AI-based technologies to take down content, sometimes to comply specifically with different laws and regulations. So I’m thinking right now about specific European laws that mandate things like removal of hate speech within twenty-four hours. That kind of very strict requirement often comes with legal consequences and fines.

So when companies are faced with this type of regulation that they may have difficulty complying with, they often turn to automated content takedown systems. And what happens then is you could argue sometimes a user’s speech may be curtailed by these systems. If a company does not have the time to have human moderators review specific flagged pieces of content and instead relies on automated content takedowns, then some argue that free speech for the users online is then limited or restricted. So that’s one issue I think that comes up with legal and regulatory problems with the use of AI and machine learning specifically in content takedowns.

MEYER: What type of accountability should there be? I mean, are there any examples out there that would be instructive?

LI: So that’s another great question. And I think that we were speaking earlier about this model that the founder of my research center has currently been promoting. This is Professor Jack Balkin of Yale Law School, if any of you are interested, and he has been writing about what he calls a triangle model of speech regulation, so three types of regulation of speech.

The first type of regulation is specifically government regulation of speech, which happens through the various legal processes that we already have in place. Right? And we have due process, we have accountability, we know what happens when the government tries to regulate a citizen’s or resident’s speech.

The second type of regulation that occurs in the online space is corporate regulation. So Twitter, for example, can take down a tweet of yours if it violates their terms of service. And we have some sort of accountability. As a user, you can complain to Twitter, you could, you know, pull your money out of investing in Twitter’s stock, if you’re a shareholder you have shareholder rights, and so on.

There’s a third category of speech regulation, though, that’s really interesting, I think, right now. And this category is when governments or state actors try to use private speech regulation to regulate residents’ or citizens’ speech. So what happens is a government agency will report or flag content that they believe is problematic and they won’t take it down through a formal government request system, which has accountability and due process. What they’ll do is, for example, they’ll tell Twitter this tweet is against your terms of service, you should take this down. And at this point, it’s effectively still the government regulating a resident or citizen’s speech, but leveraging the terms of service of a private institution. Now, the problem here is then that the user doesn’t really have due process, there isn’t really accountability or transparency in the way you would expect one to have for sort of government regulation of speech.

So these are the three types of speech regulation that we’re looking at a lot right now. From a legal standpoint, the first two types are legally somewhat agreed upon with what the standards are, but that third type is quite tricky.

MEYER: And I think that, to me—that’s a really fascinating scenario that these companies and users are being put in, but it also points to this challenge, Sarah, in the shifting standards and the challenges of what is takedownable, right? You know, what violates the terms of service and how does that change year to year or even week to week or even day to day or hour to hour, in some cases, and change across contexts? Is that—first of all, I think that’s a challenge for these companies, even aside from artificial intelligence. But then when you add in AI, is that something AI is prepared to deal with?

ROBERTS: Well, in a word, no, I would say. But, you know, I think—I think to your point about the changing nature of what needs to be responded to, that is a fundamental characteristic of the landscape in which these practices are being undertaken, whether it’s via humans, whether it’s the machines that are doing it, or whether it’s, more typically, an interplay between the two.

So any set of policies, whether encoded in an algorithm or whether written down and used for adjudication internally, they’re totally dynamic. And not only are they always in flux just by the nature of the firms themselves, but as we’ve seen more and more, these companies—and again, I’m talking about the big ones for sure—are being called to account with regard to breaking situations around the world, whether it’s sort of political unrest or, in some cases, targeted abuse of vulnerable populations, and so on. So obviously, that requires a certain nimble approach and it requires a mechanism by which to respond and adapt quickly. Of course, going back to the points from my colleagues that they made earlier, that also would presuppose having the adequate, knowledgeable staff to identify and understand those complex situations.

And since we’ve sort of established that often that isn’t even on offer for a variety of reasons, then I think we need to be thoughtful and perhaps even troubled by the notion that, when we don’t even have the baseline of having that broad understanding of these issues, as well as the specificities when these incidents occur, how are we going to encode that and then automate that—automate that out? So it becomes—it becomes worrisome in that regard.

And the one other point I wanted to make about, you know, Robyn gave this great overview of sort of, like, what is AI good at, because it is—I think we—we’re, you know, we might seem like skeptics, but we can agree that it is actually good at doing certain things, and she gave those three categories.

I think one thing that I want to clarify there is it’s not—it’s not the nature of the content, as awful as those things—well, spam is bad enough, but child sexual exploitation material certainly and also this terroristic content that often shows extreme violence—it’s not the nature of the content that makes AI good at retrieving it. What makes AI good at retrieving it is the fact that, for better or for worse, much of that information is recirculated, so it’s, like, the same videos over and over again.

And so what the AI is doing is it’s really just a complex matching process, looking for the repetition of this material and taking it out beforehand. And that’s why, as both have said, things like hate speech, which are discrete instances of behavior that are nuanced and difficult to understand, cannot just be automated away.
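
To make that matching process concrete, here is a minimal, hypothetical sketch (in Python) of hash-based re-upload matching. It is not any platform’s actual system: production tools rely on perceptual fingerprints (PhotoDNA-style hashes) so that re-encoded or slightly altered copies still match, whereas this stand-in uses an exact SHA-256 lookup against a made-up database of previously removed items.

import hashlib

# Hypothetical set of fingerprints of content that reviewers have already removed.
known_bad_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder entry
}

def fingerprint(file_bytes: bytes) -> str:
    # A real system would compute a perceptual hash here; SHA-256 only catches exact copies.
    return hashlib.sha256(file_bytes).hexdigest()

def route_upload(file_bytes: bytes) -> str:
    # Re-uploads of known material are blocked automatically; everything else
    # still goes to the human review pipeline the panelists describe.
    if fingerprint(file_bytes) in known_bad_hashes:
        return "blocked: matched previously removed content"
    return "published, subject to flagging and human review"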

CAPLAN: And that’s an incredibly important part.

I also want to point—I also want to note that for the representatives we spoke with, they actually expressed some reservations about the use of automation for content that’s referred to as terroristic. That in many cases, what is happening is that it’s kind of more politically acceptable to take down than leave up and so they’re not quite clear on how many false positives are in those databases. So that is a bit worrisome.

MEYER: So in the areas where AI is not currently being used or is not being seen to be effective, are you aware of any approaches out there that show promise, either in the academic space or research and development? I mean, are there examples out there that we can kind of hang our hat on and say maybe this is a good approach that we should look to?

CAPLAN: I mean, I think to Sarah’s point, any content that is recirculated is good for automation. And in some cases with hate speech and disinformation, we do see this process of recirculation, we see this kind of signal-and-noise dynamic between some publications and then bots or more fake accounts that are being used to recirculate that content. So that’s one area where automation can be used. I think it’s likely that it is being used to some degree to remove the tens of thousands of accounts that some of the major companies are reporting that they’re removing.

MEYER: So you think some major companies are using AI to actually take down accounts?

CAPLAN: For the fake accounts? I think it’s likely, but I can’t—I can’t say for sure, no.

ROBERTS: So I think I guess I’d make two points. The first is about sort of the state of the art and the second is a more philosophical comment.

The first would be to say, in addition to what Robyn described, you know, there are researchers in areas like natural language processing and information science who are looking at mechanisms by which—mechanisms by which a conversation might turn. So in other words, it’s like this predictive approach to understanding the nature of dialogue and conversations and sort of being able to create a flag, for example, when a moderator should step in and examine a particular thread or examine a particular area.

But, you know, the point that I want to make about automation and these computational tools is that when we’re thinking about major firms that reiterate over and over again that their primary identity is one of being a tech firm and their orientation to the world is to problem solve through computation, I think it should stand to reason and we ought to be cautious about the claims that they make of what those tools can do and, further, whether or not they’re beneficial.

Because, you see, when everything is automated or put through an algorithm or a machine-learning tool, accountability becomes all the more difficult. We already have difficulty with accountability and transparency, understanding what the firms do, and we’ve had reference to some of the transparency reports—which I think we can agree are getting better thanks to the work and efforts of civil society advocates and people such as my colleagues who press on that—but the firms wouldn’t do that on their own.

And when the—when the processes of adjudication of content, which, again, is a problematically benign way to think about it—we might call it the regulation of the behavior of all citizens of the world, that’s another thing we might—I mean, you know, six of one, right? But when that is, you know, when that is rendered within proprietary machines and algorithms that we cannot really hold to account or understand, you know, we ought to be cautious.

MEYER: I think that’s a great point. I mean, I kind of framed the question implicitly saying, you know, don’t we want more AI? You know, how are we going to make this better? But I think that’s interesting, yeah, right, yeah—go figure, right? (Laughter.) But I think it’s a great question, you know. Is that desirable, right? Is that the outcome we all want or—I mean, this is a very serious consequential responsibility that these companies have. Do we want AI taking over? Or for some types of things, are humans, human content moderation, is that actually where we want to be?

LI: So I think the first thing to understand is AI is not a magic pill. AI is not going to solve, at this point, anything on content moderation. It’s a useful tool for the categories that Robyn mentioned. I would also add intellectual property infringement.

CAPLAN: Yeah, forgot that one.

LI: That’s a great category where artificial intelligence systems can easily detect content that is infringing on, say, the copyright of the rights holder for, you know, a video from a movie or something like that. That’s a great use case for AI and machine learning.

But for most content issues right now, AI isn’t enough. And I think the danger here isn’t that companies may over-rely on AI. I mean, as a lawyer I think of the law and I think the danger is a lot of regulators think that AI can do everything. So you end up with laws like many of the laws—again, not so much in the U.S., but mostly abroad—where companies have to comply with these very short takedown timeframes. And while those may be well-intentioned, we all mostly generally agree that we don’t want hate speech, extremist speech, and so on online.

The problem is, those very strict, short time requirements mean that a lot of companies will end up over-censoring because they can’t rely on AI. If they rely on AI, the AI will just take down everything. So that’s, I think, one of the issues that I think of legally.

ROBERTS: And if I—if I may, the knock-on effect of what you describe, because those tools actually don’t exist and are less than ideal when they do, is that the companies then go out and they go on a massive hiring spree.

So the case that comes to mind I’m sure for all of us is that of Germany when the German NetzDG law went into—went into effect, and even prior to it, what did—what was the response from the major companies? To set up call centers in Berlin and hire massive amounts of people. And as you can imagine, doing commercial content moderation as a job is not fun. It’s actually you’re exposed to pretty much the worst of the worst over and over again, as a condition of your job you see what people are reporting, which is a queue of garbage usually, to put it mildly. And so there’s these other human costs of, like, massively increasing these labor pools of people who do this work, typically for not a great wage I might add.

CAPLAN: I mean, there’s another cost as well. As these companies try and scale too rapidly, they often end up formalizing their rules very quickly so they can do mass training. And that has consequences for trying to create localized, culturally sensitive rules as well because they’re often relying on these universal rules so they can kind of have the same consistent judgment across locations.

MEYER: That’s great. So I think we’re in a great discussion here. I know I personally have a lot more questions, but I’m conscious of the fact that we have great expertise and questions in the room here. So at this time, I would like to invite members to join our conversation with their questions. A reminder that this meeting is on the record.

If you’d like to ask a question, please wait for the microphone and speak directly into it. Please stand, then state your name and affiliation. Please also limit yourself to one question and keep it concise to allow as many members as possible to speak.

So with that, why don’t we get into it. Please.

Q: Thank you very much. Elizabeth Bodine-Baron from the RAND Corporation.

From a researcher perspective, content moderation done by private companies has a good side, removing content from the casual user, and a downside, removing it from the general academic world, the research world. How do we work with these companies to allow the people doing the research, say, in extremist content and developing these AI algorithms and everything else like that, access to the content that is no longer available because it’s being moderated?

MEYER: That’s a good question.

ROBERTS: So, I mean, I’ll just start out by saying I think you’re certainly identifying a predisposition and an orientation to opacity and secrecy that is present in these firms. I mean, obviously, there are carrot-and-stick approaches to doing that.

I ponder a lot the notion of, how do we know what’s not there, right? I mean, again, if it’s removed, how do we apprehend it, how do we understand it? Certainly, a lot of people are concerned about that for a number of reasons, whether it’s civil society advocates, legal scholars, regulators and others.

And so I think there are several approaches. One is to try to suggest that these partnerships might help their business model. That’s one. One might be to mandate that this information be reported. That’s another. And then there’s anything in between.

But I think that even, I would say, in the last two years, the orientation of the firms towards their public responsibility around these issues has really changed. We could point to the 2016 election in the United States, we could point to Brexit, there’s a lot of things that might be—that might be at the root of that. All that to say that pressure has to be brought to bear on the firms.

And I think at this moment—and I hope that you will chime in, too, with your points of view—but my perception is that they’re attempting to get out ahead of more regulation by doing a better job of sharing some of this information through such things as transparency reports, takedown reports and so on. But it’s nascent.

CAPLAN: So there’s two major efforts actually already happening in this area, much to your point, Sarah.

The first is the Social Data Initiative. It’s a partnership between Facebook and the SSRC. And they are making some data available to researchers. What that data is, no one knows; they haven’t actually determined what that data is. It’s a matter of asking the right question and seeing if they give it to you. So there’s obviously some bugs to work out, but that is one area where it would be fruitful to go and pose the question and see if you get access to the data—and if you don’t, tell the world.

And then the second was just reported on this week, that French regulators have said that they’re working with Facebook to do some research on content moderation and Facebook seems to be cooperating with that as a way to, like, stave off regulation there.

LI: And I have to mention, generally speaking—so I’ve worked with many of these companies, as have all of you, and these companies generally are made up of people and most people don’t want to support terrorism, for example. So on issues like extremist content, there’s a lot of collaboration inside a sector, as well as with the public sector. So there’s a lot of very close collaboration with law enforcement, with national security, and on international security issues. So I think the issue of public academic access is one thing, right? That’s a little harder sometimes because you want content taken down very quickly.

But if you’re thinking about questioning whether or not, you know, companies take down content too quickly for, say, law enforcement agencies to be able to research and track extremist organizations, there have been a lot of agreements and I think very close work between the agencies and the companies to make sure that any content or user accounts are kept up for as long as necessary.

And sometimes people don’t realize this, so the public is unaware and people are upset, why is this Facebook page for this clearly extremist organization not taken down? And sometimes that’s because the FBI is tracking that extremist organization and we need to know who’s accessing that page. So there is a lot of collaboration on that front. But definitely, public academic research access is another question I think that’s very important.

Q: Good morning. Thank you very much for this interesting conversation. My name is Dan Bartlett from the Department of Defense.

I wanted to ask a question surrounding crisis situations or in extremis situations. Do you find that the social media companies are responding differently, say, to a Parkland incident where you have to do something quickly, potentially, to get in front of it? Is there any sort of, you know, quarantine and then review vice flag and then review later? Have you seen any thought being done in the private sector on those issues?

CAPLAN: So I think, in terms of what we’ve seen, there’s a whole research group at Data & Society that looks at issues of media and manipulation and these crisis periods. And what we’ve seen is basically a period of testing around this.

With Parkland, I do believe there was an SOS signal that was included within the Google search results. And I’m wondering whether or not that was used to quarantine content that was coming out.

We saw one response during the midterm election where Facebook did put together a kind of ad hoc strategy kind of war room to tackle issues like disinformation. So we are seeing these acute responses that I think are still going through an ongoing testing period. I have not seen any kind of major changes in terms of the systemic response.

ROBERTS: I would just say to that—again, gesturing at some of the things we’ve already spoken about on the panel—as you can imagine, because of the competencies that are staffed for, certain things are able to be responded to more quickly whereas, if a crisis were to be unleashed in, say, Myanmar, there is probably not adequate staffing at the firms themselves to even understand the complexities and nuances of the geopolitics going on with regard to, you know, these situations, much less even how to respond or how to appropriately triage it.

So the issues, again, are related to, in part, you know, the American orientation of the firms and their—you know, their significant limitations sometimes in those areas.

MEYER: Well, as a quick follow on to that, I think you make an excellent point about, you know, you don’t know where the next crisis is going to be and to have the experts and the staff on hand, ready to deal with it. Are they trying to be anticipatory, are they trying to look ahead and see what’s coming?

ROBERTS: I mean, as much as one can. I think one of the issues—and, Tiffany, I’ll go back to what you said about regulators’ expectations—I’m not here to apologize for the firms, far from it. I don’t think anyone would ever accuse me of that. But I think we’re asking a lot. We’re asking a lot. OK, look into your crystal ball and find out where the next global crisis will unfurl itself. I’m sure we have people in here for whom that is a full-time job in a different way.

So, you know, again, pipe all of human expression through a very, actually, a very narrow channel called YouTube or Facebook and call that behavior and expression “content.” We’re really—we’re actually quite limited in terms of maneuverability, in my opinion. And part of that is fundamentally about the business model of these platforms, which have, you know, on the one hand greatly succeeded, but on the other hand have really painted the firms into a corner in a way, in addition to the expectations and demands that we now all have on them.

CAPLAN: I mean, the other part of that as well is that when we’re talking about disinformation, there was this understanding a couple of years ago that this was the work of individuals online, people organizing online. And what’s come out over the last couple of years is that this is largely, in many cases, the work of governments or the work of political campaigns, the work of military actors.

So to Sarah’s point, we are asking a lot of these platforms. We’re asking them not just to mediate between themselves and their users or users and users, but between governments and other governments, which is a huge job.

MEYER: Yeah, that’s a great point.

Q: Greg Nojeim, Center for Democracy and Technology.

What’s a good solution to the problem of government flagging? And here’s the scenario: The legislature passes a law that says this is the information that can be censored by the government and it says if the government wants to censor the law—wants to censor the information, it has to give the person the right to go into court to have an adjudication. The government gets around that rule by flagging content and having it taken down as a violation of terms of service. What’s a good solution to that problem to make it so that government can’t do that and is accountable?

ROBERTS: All eyes on Tiffany. (Laughter.)

LI: I will solve it all.

So the first solution to that is transparency, which I know we always say is the solution to everything, but here we literally don’t have the numbers, so we don’t know how often this happens. We know it happens, but we need, as academic researchers, we need the numbers on how often government agencies request that certain things be removed, not through legal mechanisms, but through these terms-of-service flags. How often do agencies or states use NGOs, for example, to report things? We need those solid, concrete numbers so we can actually say that this is an issue, or that this merits a level of consideration where we should have some sort of legal change or at least some sort of policy change. So I think that’s the first thing.

Secondarily, though, this sort of indirect regulation of speech is, I think—if it—if it gets to the level where it’s widespread enough that we need to—we think we need to change this legally, that would be an interesting area for possible litigation, if not at least policy discussions around how we think that the government should be able or should not be able to do this.

I think especially within the U.S., we have this very strong First Amendment culture and this seems a little contrary, depending on how you view different lines of cases about free speech protections and so on. So I think that’s a really interesting area of law that we might see changing in the future.

But the first step is just to have the numbers. We need the numbers, we need actually the facts on the ground. And then we’ll see what happens from there.

Q: Hi. Bob Boorstin from the Albright Stonebridge Group, formerly of Google.

I just want to complicate something for Tiffany, and that is your third category where you said that government and state actors are using requests to private companies kind of quietly. Look at the motivations for where the governments or rather the sources for where the governments get their information and you’ll discover occasionally that it’s the competitors to the companies that they’re going after.

I guess my question, following on Greg, is, give me three things, one from each of you, that these big companies should be doing, aside from transparency, which we’ve heard of over and over and over again.

LI: OK, so just only one thing from each of us.

CAPLAN: Only one? (Laughter.) All right. Expanding resources—these companies should at least have offices, for the major companies, in every area where they’re operating. And they should be hiring enough people with the language and cultural capacity to be able to moderate content in those regions.

MEYER: Is that feasible for smaller companies, though?

CAPLAN: He said major companies. (Laughter.)

MEYER: Aha, right.

CAPLAN: Smaller companies? No. Smaller companies—developing more formal relationships with academic institutions and NGOs in those regions.

LI: I would say, if just one thing, I would want consistent policies. Right now I think we have a lot of companies who are really trying very hard to manage these issues—extremist content, hate speech, and so on—but what that means is every three months Twitter has a new policy and says, oh, now we’re removing this type of content, now we don’t allow this type of account. And this constantly changing type of policy is, I think, very confusing for users and removes a lot of levels of what we could call due process for people who might have their speech removed or their user account suspended. So consistency, I think, would be wonderful, at least maybe six months, not just two months.

ROBERTS: I would—I would put a pitch in for the human workers and an improvement in their work lives and status to include things like valuing the work that they do as a form of expertise and then giving—I mean, not to use the word “transparent” again—but to give those workers—bring them into the light, essentially, so that we can value the work that they do.

Just to quote one of the people that I’ve talked to over the years who said, “The internet would be a cesspool if it weren’t for me”—direct quote. I don’t want to swim in one of those, so I appreciate the work that they do.

The other piece that goes to improving working conditions for commercial content moderators actually is a benefit to us all. So, you know, actually ten seconds to review a piece of content and decide whether it’s good or bad is not adequate, it’s not appropriate, but that’s what we’re asking these people to do. No wonder we have a muddle on the other side.

MEYER: Yeah.

Q: Hi, good morning. Brian Katz, international affairs fellow here at CFR.

A question for all the panelists, but sparked by a remark from Tiffany earlier on in her presentation. You had mentioned that social media and tech giants are essentially playing a critical role in the establishment of global norms when it comes to—through the course of their operations and the scope of their operations, really establishing these norms of what is free speech. So this is more of a philosophical question. And obviously, every company is different. But do you think they understand this? Are they grappling with the implications of this role that they’re playing? Are some embracing it out of either some corporate responsibility perspective or dare I say their own prestige and ego? Or are some trying to avoid it from trying to avoid some type of normative role when it comes to curating free speech in society? Thank you.

LI: That’s a great and very difficult question.

ROBERTS: Is that—is that one for a beer later today? (Laughter.)

LI: I mean, I don’t know if any of us can speak on behalf of the companies. I do think, generally, again, companies are made of people, right? They’re made of, of course, the directors and the employees and companies have their own cultures. So we have, I would say, some organizations—previously I worked for Wikimedia which is the foundation that runs Wikipedia, and that organization, for example, has a very strong culture of promoting values of free-access information, free speech online. Organizations and platforms like that really care about those missions.

And even some of the corporate giants that we talk about, sometimes a little dismissively, I think also have some of those values in play, if for no other reason than that primarily the people involved within those companies are coming from the U.S., they were raised with the values of free speech and democracy generally and you see that, I think, brought up very often. Companies, for example, like Twitter often publicly grapple with these questions. And Jack Dorsey often tweets about these questions, about Twitter’s responsibility. People on the trust and safety teams there, the legal team, and the policy teams there often talk very publicly about how they’re trying to deal with their responsibility for protecting democratic values globally.

The flipside of that, though, is, of course, that they are businesses and businesses enjoy being able to operate in multiple markets. So sometimes that means having to deal with conflicting norms and that is a very delicate balance. So, you know, I am not the CEO of any of these companies, so I can’t make these decisions, and it’s easy from a civil society perspective to say, yeah, just go and protect free speech, that’s all that matters. But there are often other competing interests at play.

ROBERTS: Yeah, hear, hear! (Laughter.)

CAPLAN: I would—I would actually complicate this question a little bit and say that most of the companies have been moving with a limited-restriction model of speech for the last long while. So we saw this very, very early on in creating some rules against kind of trademarked content and intellectual property. Then they started moving into other types of content, like harassment or revenge porn. And now they’re moving into issues like hate speech and disinformation.

And when you actually speak to many of the companies, they’re very open about this. They say that, listen, we are moving more towards a community-guidelines approach. One person said to us, you know, we’ve moved away from this public-square model into we’re a community and we have X, Y, and Z rules and that’s how we function. So it’s a move towards kind of a more Rousseauian contract approach.

And the reason for this really varies between companies. So Danielle Citron put out a paper a while ago kind of worried that this was because of censorship creep due to European regulations, that what we’re actually seeing is these companies, it’s much, much easier for them to establish a kind of global rule than it is for them to have a rule in every single country, so they’re taking kind of the least-restrictive and they’re just applying it across the board.

When we spoke to them, they tried to complicate that. They said, you know, this is just because we don’t want people to flee our platforms and we’re trying to keep as many people there as possible, and to do that we know we need to create some restrictions around content. One of the companies just said this was, like, a normal maturation of the company. So I think that that’s—it might be a bit of a false premise that most of these companies are actually just moving towards a limited-restriction model that’s more like the European model of free speech.

MEYER: Just to highlight a point you made there, quickly: I think properly moderating content could actually be beneficial to the community, draw users, and encourage productive free speech.

CAPLAN: Correct. Sure. I’m Canadian, so this is the—

ROBERTS: Right.

CAPLAN: And also, I was going to say X, Y, and Zed values. (Laughter.)

ROBERTS: I knew you were. I was waiting for it.

CAPLAN: I was, like, Z, yeah.

ROBERTS: I was waiting for it.

CAPLAN: And so for me, that’s not a bad thing. There was a moment where you could start to see that these kind of free speech values adopted by these companies maybe five years ago were actually starting to shift norms in Canada about how we think about speech because it does operate under a limited-restrictions model.

So is it time for these companies to start considering other models for speech? It might be.

MEYER: Adam?

Q: Great panel, thank you very much.

I was wondering if the Chinese companies have any influence on where the U.S. companies are going or on the debate? We know that they employ tens of thousands of moderators to take down content. We know that they’re adopting AI fairly widely internally. We know they cooperate very, very closely with the government; just recently there were reports that they were already uploading terms and phrases that the government had given them for content moderation and takedown. So how do we think about the role that the Chinese social media companies are playing? And is that influencing the debate in the U.S. and how the companies are thinking about it in international markets?

LI: And that’s really interesting. So obviously, the free speech environment in China is a little different than it is in the United States. And that’s reflected with the tech industry there and here as well. For those of you who aren’t aware, in China there is effectively an equivalent, a localized national equivalent, not state national, but Chinese equivalent for pretty much every type of technology we have here. There’s a Chinese Facebook, a Chinese Wikipedia, a Chinese Amazon, et cetera. And I think what we see here is a lot of companies from the U.S. and from Europe trying to compete, but not really being able. And I don’t think it’s so much that the Chinese social media companies through their content moderation efforts are influencing U.S. social media companies. But I do think that, generally, this urge to be able to compete with that huge Chinese market, that is influencing U.S. companies.

So, obviously, we’ve seen recently the question of whether or not Google should reengage within China. And that’s, again, why I go back to the main point that companies are made of people. And what we saw there was kind of a conflict within Google, which is still happening right now, about the values, about how Google should be protecting free speech, if it should be protecting free speech, what it means to protect free speech globally and so on.

Is it better to provide some services to a nation or does that create bad international norms, right? Is it better to prioritize being able to support a growing business or is it better to prioritize this sort of principled stance on speech? So we have conflicts even between directors, between executives, and conflicts from employees. And, of course, the public is also involved as well. So I think here what’s really driving possible change or at least conflict for U.S. companies isn’t really wanting to be similar to Chinese companies, but wanting to possibly compete within that Chinese market.

Q: Thank you very much. Tom Dine with the Orient Research Centre.

As I’m listening to all of you, I can picture an ongoing—hello—an ongoing conflict among three academic categories: law school, business school, and political scientists. If we have this panel five years from now, where will that conflict be?

CAPLAN: Good question. I don’t know if I—and I also want to know what category I’m in. (Laughter.)

ROBERTS: I was just going to say, being that I’m not—I don’t—I don’t relate—

CAPLAN: I don’t—I’m not in a business school, but I am an organizational researcher, so—

Q: But you each have got legal departments—

CAPLAN: Right, right, right.

ROBERTS: Right.

Q: —and you have corporate interests.

CAPLAN: Right.

ROBERTS: Right.

CAPLAN: I hope it’s an ongoing battle and debate among these three, because that’s the only way we’re going to see these kinds of decisions and policies continue to evolve: through the normative pressure we’re placing on companies to make sure that speech rights are being protected and that they’re not overly censoring content; through tempering that with an understanding of the organizational dynamics of these companies, so that we know how to properly regulate them; and through establishing laws that aren’t tied too closely to the technologies being produced right now, because those are going to continue to change, but rather to the organizational dynamics of these companies and the normative frames that we’d like to preserve.

LI: I think maybe one way to think about your question—so law school, business school, political science—you’re thinking of three types of actors sort of or three types of interests—corporate interests or corporate actors, state interests and state—and state actors in terms of regulation, and then the interests of the international community as a whole.

So I think of that because it’s an interesting place right now for a lot of these tech companies. In regulation, we talk about if we think of them as corporate entities, right, which we know how to regulate, generally speaking, or if we think of them almost akin to nation states due to the power and influence of some of these companies, I think it’s really interesting seeing the way that things have changed. I mean, in many powerful industries, you have the larger industry players interacting with governments on a different level, right?

So what we see now—someone gave this great analogy. The relationship between, say, Facebook and the EU is no longer like the relationship between a small furniture manufacturer and the state of North Carolina; it’s more like the relationship between Belgium and the EU. These companies are so influential that they are almost acting as state actors, and they often negotiate directly with governments. So you kind of muddle this law school, business school, political science distinction now. And I’m really curious to see where that’s going to go.

ROBERTS: I think one final comment I’d make, about five years from now, is that it behooves us all to reorient our expectations for solving these issues. The specific problems may not be intractable, but on a larger stage this is actually an evolutionary process akin to all other kinds of policy development.

And as Tiffany and Robyn both point out, it is also playing out on the world stage. And so, you know, solutionist orientations, even when I’m sympathetic to the regulatory desires there, or to AI or any other solution, are granular. We have to think much larger about the impacts and implications over the long term.

MEYER: Great. I think we have time for about one more question from this great discussion.

Sir?

Q: Hi, good morning. Ché Bolden, United States Marine Corps.

A lot of the conversation has been dealing with—dealing with the negative effects of online content. But one of the questions I want to ask, particularly on artificial intelligence, is, what effects, what productive effects do you think that bots and other forms of artificial intelligence can have on the discussion and the development of content going forward?

CAPLAN: I don’t know if I see a positive—(laughter)—

ROBERTS: Well, yeah, I mean, obviously, some of the examples that have been given by panelists are positive. You know, when we think about this issue of differing norms around the world, one thing that the world gets behind typically, at least at the nation state level, is child abuse, right, and intervening upon that. So that’s one great example that we can always hold up.

But I think these issues are so thorny that we are all wary of applying AI as a positive before we’ve actually solved the root causes. So social justice and solving inequity issues—it would be great if we could automate that, but we can’t even not automate that, you know what I mean? So I’m a bit worried and wary about thinking about—

CAPLAN: Yeah, I think I misunderstood the question. Was it automation to take down or automation to distribute content?

Q: I should have used the word “productive,” as opposed to “positive.” However, the presence of bots online is pervasive and most people perceive it as a negative thing. But there are some productive ways to use bots and artificial intelligence in generation and moderation of content.

MEYER: Right. Productive use of bots.

ROBERTS: Oh.

CAPLAN: So in the generation of content—that’s where I was confused. So, actually, there’s one area where I’m very sad that we’re going to see a lot fewer bots, and that’s art. Bots have been used online for lots of reasons, especially on Twitter, to create content that’s really provocative and thought-provoking and amazing and funny. And I think we will start to see some of that go away.

Beyond that, though, I’m not quite certain I see a productive role. So it would be kind of beneficial to start removing bots: the ones that are used to inflate follower counts, the ones that are used to amplify content. And part of the reason for that is that we want these spaces to be kind of our de facto public spheres. We’ve been treating them like that. And when we enable all of these different ways that content can be falsely amplified, we start to really distort what public opinion is in these spaces.

MEYER: So I’m going to take advantage of your mention of bots as art to exercise the most important role as presider of this meeting, which is to end it just about on time.

This has been a terrific conversation. Please join me in thanking our panelists. (Applause.)

We’re now going to move to a fifteen-minute coffee reception. And the second session will begin at 10 a.m. sharp. Thank you.

(END)

Session II: Deep Fakes and the Next Generation of Influence Operations
Robert Chesney, Aviv Ovadya, Laura M. Rosenberger
Guillermo S. Christensen

This panel identifies guidelines tech companies can follow to limit the negative use of deep fakes and offers views on how governments should react, if at all.

CHRISTENSEN: Well, good morning, everyone, and welcome to our second panel of today. The topic is "Deep Fakes and the Next Generation of Influence Operations." I think with deep fakes we’re into some very new territory for the Council and we’ll talk a little bit about a lot of very interesting issues.

First, let me introduce the panel that we have. Next to me is Bobby Chesney. Among other things, he is the blogger and chief editor, I think, of Lawfare.

CHESNEY: Ben Wittes is chief editor. Me, just a lackey.

CHRISTENSEN: Just one of the lackeys. OK. Then Aviv Ovadya, who is formerly the chief technologist with the—

OVADYA: Center for Social Media.

CHRISTENSEN: —Center for Social (Media) Responsibility but is now the founder for Thoughtful—the Thoughtful Technology Project, and we might hear a little bit more about that. And then to my far right, Laura Rosenberg (sic; Rosenberger), who’s with the German Marshall Foundation (sic; Fund).

So a couple of reminders before we start. This is an on-the-record discussion. My name is Guillermo Christensen. I’m a partner with Brown Rudnick, a law firm. I was a CIA fellow here at the Council on Foreign Relations, so I’m very happy to always be back home, as it were.

We will have a panel discussion up here at the podium for about twenty, twenty-five minutes and then we will have an interaction with the members and I look forward to a lot of very good questions and, hopefully, some controversial debate on this topic.

So as I looked into some of the history of deep fakes and some of the background for today’s discussion, it was interesting to note that, with the recent one-hundred-year anniversary of the end of World War I, we actually have a very interesting historical analogy for deep fakes and information warfare, one that happened shortly after the French surrendered to the Germans in World War II. There’s a very famous episode where Adolf Hitler was filmed coming out of the train car where he forced the French to sign the surrender, and he was filmed at the time, obviously, taking a very odd step back.

The Canadian official in charge of propaganda—that’s what we used to call it back then—noticed this and figured out that he could manipulate that step into one of those sort of moving GIFs that you may see on the internet, making it appear that Hitler was actually dancing a jig—a happy jig—at the surrender of France, and this was played on newsreels all over the world. Obviously, it didn’t change the fate of history. Everybody already didn’t think very highly of Mr. Hitler. (Laughter.) But it added a tone of, you know, pleasure at what he was doing to the French nation that helped the Allies.

So this is not something—this is not a new development, in many ways—the manipulation of video images. But, clearly, with the advent of artificial intelligence and this new development of so-called deep fakes—and we’ll also talk about shallow fakes, I think, if I may use that term—there is something that’s changing and you may already have seen some evidence of that, any of you who have watched one of the more recent Star Wars episodes.

You will have been surprised to see Peter Cushing make an appearance as the emperor (sic; as Grand Moff Tarkin). Well, Peter Cushing passed away a number of years before that movie was made, and the only reason it was possible to have him in there was because of the technology that we’re going to talk about today.

So, first, I’d like to ask Bobby to talk a little bit about the technology and why deep fakes benefit from artificial intelligence and what are they, at the most basic level.

CHESNEY: Thanks, Guillermo. Thanks to all of you all for being here.

So you’re right that there’s nothing new about fraud, forgery, information operations, and manipulation of evidence, right. So as long as we’ve had information and evidence, we’ve had manipulation designed to create the wrong impression for various purposes. So what’s new?

My co-author, Danielle Citron, and I argue that what’s new is the deployment of deep learning techniques, neural network methodologies, most especially the generative adversarial network, where you have two algorithms, one of which is learning and creating while the other is detecting and debunking, and they train against each other in a rapid evolutionary cycle.

This has produced some qualitatively significant leaps in terms of quality, difficulty of detection, and, we argue, capacity for diffusion. So let me say a few words about each of those. Quality-wise, it’s not like we haven’t had in recent years, as Star Wars illustrates, the ability to create high-quality but not actually authentic depictions of people saying or doing things they’ve never said or done. But this particular methodology is one that will yield content that will pass the ears and eyes test relatively easily compared to other existing forms of digital manipulation, combined with a capacity for eluding detection by more traditional means.
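To make the adversarial training loop Chesney describes concrete, here is a minimal, hypothetical sketch in PyTorch: a generator learns to mimic a toy one-dimensional data distribution while a discriminator learns to tell real samples from generated ones. It is illustrative only, not a deep-fake pipeline, and the architecture and hyperparameters are arbitrary choices.

```python
# Minimal sketch of the generative adversarial network (GAN) loop described above.
# Toy example: the generator learns to mimic samples from a 1-D Gaussian.
# Assumes PyTorch is installed; illustrative only, not a deep-fake pipeline.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "authentic" data
    noise = torch.randn(64, 8)
    fake = generator(noise)                         # the "creating" side

    # Discriminator step: learn to tell real from fake (the "detecting" side).
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Each side improves only because the other does, which is the rapid evolutionary cycle described above.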

Now, it’s one thing, as we saw recently with the Jim Acosta video, which I’m sure we’re all familiar with: the allegation that the video distributed of the incident at the White House press briefing had been sped up. That’s easy to debunk insofar as you’ve got lots of well-established, credible versions of the actual video and you can just compare and contrast. So we’re not really talking about a scenario like that, where it’s easy to compare against a known original.

I’m talking about scenarios where we don’t have a known original to immediately compare and contrast against. With deep fakes, when really well done—when it’s a true deep fake, not a shallow fake or a cheap fake or whatever we might want to call it—there is a difficulty of detection that’s different from what we’ve seen in the past.

And then the capacity for diffusion—it’s great for Industrial Light & Magic to be able, with months and months or years and years of effort, to reproduce Peter Cushing. To be able to do it much more quickly, that’s different in kind. If you can produce something that passes the ears and eyes test, that is extremely difficult to detect unless you’ve got equivalently high-end algorithms for detection (and maybe not even then), and that can diffuse, because the expertise needed to produce it, or access to that technology, will spread in various ways, whether through commercial services (and there are a bunch of companies emerging in this space; to give one example, a company called Lyrebird, L-Y-R-E-B-I-R-D) or dark web services, just as you can go on the dark web and, without knowing how to construct a botnet or deploy a DDoS attack, buy or rent that as a service, then it will not be long before you can get high-quality deep fakes as a service even if you yourself don’t have the capacity to generate them.

Our argument is that this is going to be a further contribution to our current problems of disinformation but it’s a qualitatively different one and we think it has big implications.

CHRISTENSEN: Effectively, today, if you have an iPhone you sort of have the ability to create a—again, not a deep fake but a shallow fake, right. You can do these wonderful little face emojis—you may have seen them—where you record your face, you speak, and it looks like you are some kind of a cartoon character. So that technology is already on an iPhone. How soon before we get to the point where this technology is available—not the shallow but the deep fake—where the ability to manipulate a person’s visage and create new video is available on someone’s MacBook or laptop, and they can do it not in the three months it might take the Star Wars creators but in maybe a couple of hours, for a propagandist sitting in Ukraine?

CHESNEY: Well, I won’t offer predictions on how long it’ll be before we have apps galore that really provide true deep fake-level qualities here. But for cheap or lesser forms already there are all sorts of apps that are designed to help you with makeup, hair style, fashion. My kids have shown me many of these. It’s not the sort of thing that will persuade at the high level but this underscores a point. We, obviously, live in an environment in which manipulation of information including imagery, audio, and video already is quite possible. Augmented reality techniques of all sorts will contribute to that and put it at our fingertips.

To emphasize the deep fake piece of the problem in a way is to just shine a spotlight on the tip of the pyramid of persuasiveness, where you could potentially do the most damage insofar as you’re talking about frauds or disinformation attacks that are meant to persuade even in the face of efforts to debunk. That’s not meant to disparage the importance of the rest of the pyramid, the big part where the cheap or shallow fakes are. As we’ve seen in many examples, those may be relatively easy to debunk, but they’re still going to cause harm.

CHRISTENSEN: Yeah. Aviv, you’ve written recently that disinformation from state and nonstate actors is just one component of this—I think you labeled it deteriorating information environment globally—and then you’ve also suggested that AI, so not just deep fakes but AI, is likely to accelerate failures in our public sphere in this area. Could you talk us a little bit through what you see and some of the dynamics and the direction that you see this taking?

OVADYA: Yeah, and just to be a little bit more broad here, I guess part of that included things like deep fakes and, really, this broader class of synthetic media: the ability to manipulate video, the ability to manipulate audio, the ability to make it seem as if you’re talking to a human when you’re not, that entire suite of technologies. It’s sort of like that dream of the personal assistant who could do anything you want for you. But then what happens when you have that technology?

There are all these side effects, because that magical sort of science-fiction realization can be applied to manipulate discourse more broadly, and I think that’s what I’m trying to get at when I talk about the impacts of these advances on our information ecosystem; these gains are definitely a core component of that. I think there was a second part to your question.

CHRISTENSEN: And then in terms of the risks that AI creates in this context, how would you break those out?

OVADYA: Yeah. So I think about this in terms of creation, distribution, and consumption: how does AI affect what types of media we can create; how does it affect how we distribute media; and how does it affect our consumption or belief of media. And so on creation, obviously, there are deep fakes and then there are those other sorts of media, you know, whether it’s the text or audio that I just mentioned.

In distribution, you’ve got both the ability to change how content is amplified, in particular the way recommendation engines suggest particular content, events, or people to pay attention to across different platforms, and the ability to create or coordinate inauthentic actors. And then, in terms of consumption, you have the ways in which you can target that.

I mean, artificial intelligence is what you use to say, oh, you’re like that person—we could use this technique, or we should show you this piece of media as opposed to that piece of media. And then, similarly, you can use artificial intelligence to say, here’s a piece of content that will be more persuasive, or we can generate content that is more persuasive in these particular ways for that particular person. And so that’s a way it can impact belief.

CHRISTENSEN: Laura, we’re, obviously, going to spend a lot of time focusing on the U.S. in this discussion. We seem to be in the middle of these issues in ways that most others are still a few steps behind on. But the technology is not U.S.-only by any means. Artificial intelligence is, arguably, a higher state-sponsored priority in China, for example, than in the United States. What kind of dynamics are you seeing out in the Far East, in China, both with AI as far as disinformation and, to the extent you’ve seen it, in the deep fake world? If you could touch on that as well.

ROSENBERGER: Yeah, absolutely. And let me just spend one second first just to pick up on a couple threads and then I’ll move to the China and AI question. I think one of the reasons that there’s been so much attention on deep fakes is because, you know, we’re trained to basically believe what we see, right, and so when we think about manipulated video and audio content, especially when it becomes undetectable to the human eye, it could pose real challenges for taking sort of the erosion of truth to a whole different level.

What I would say on that, though, is I would not be so dismissive of the shallow fakes question, in the sense that even though the Jim Acosta sped-up video was very quickly debunked side by side, I bet if you surveyed the U.S. population there is a large segment that believes the altered version of the video. That’s in part because, number one, lies travel much faster than truth, so people who see altered content are actually quite unlikely to see debunking of that content. And number two, with the degree of polarization we have in the United States right now, we are very predisposed to believe what we want to see. I think that means we need to be very worried about deep fakes, but I also think the problem is here now, much more than we necessarily realize, and I think we need to cast it in that light.

On the China question and what others are doing: when I think about the next generation of information operations, I think it’s really important that we not just think about information operations in the way they have manifested in the Russian-driven model. That is one model. But when we look at how China and the Chinese Communist Party have been able to use technology—the online space—to exert information control and manipulation over the population, and then we look at how AI is going to supercharge that through increasingly effective and efficient surveillance and control, it poses a real challenge for democratic values and norms.

Now, right now we see this being beta tested within China’s borders and, most notoriously, in Xinjiang, where we have facial recognition technology combined with voice recognition technology and, you know, QR codes being placed on Uighur homes, all basically being set up to completely monitor the entire Uighur population of Xinjiang, to immediately root out any dissent, and to completely control the information space by denial, right. It’s not necessarily about spreading some kind of false or manipulated content. It’s an information operation by denial of speech in the first place and by control of that speech.

We also, of course, see in China the creation of the social credit score which, again, in its eventual version will be AI enabled. Then you combine that with the Great Firewall and what China has been able to do in terms of censorship and control. And by the way, China is starting to push out the borders of that censorship on its own platforms where they’re used outside of its borders.

Then it’s exporting digital technology to an increasing number of countries, essentially laying what I see as the digital track of information control and manipulation in the future. So you don’t even necessarily have to create falsehoods or false narratives or manipulated content. And by the way, the Chinese are decent at that, right. I mean, they’ve got, you know, the 50 Cent club—it’s humans, right—that’s basically paid to write commentary online that is solicitous of the Chinese Communist Party. Once you can enable that with AI and automate the whole thing, you can, again, take that to a whole different level.

Now, again, this is largely confined within the borders of China at the moment. But it certainly appears, in a very kind of futuristic Black Mirror sort of way, that the Chinese Communist Party is basically preparing an authoritarian toolkit in a high-tech toolbox that could be exported. I know I’m painting an alarmist picture, and I’m doing so deliberately, because I do think we need to bust out of some of what has become how we think about disinformation and understand that information operations can manifest in a wholly different way that seriously undercuts our values, norms, and interests around the world.

CHRISTENSEN: Thank you. And so we have the U.S. approach, to the extent that there is one, and the Chinese approach, and the other one out there between those two would be the European approach, especially when you’re talking about data privacy and the use of personal information. Europe is taking a different tack than both the United States and China.

Open it to all of you. Where does the European experiment play into the use of AI and the impact of AI on information, information warfare, and disinformation, to the extent that it’s different from the U.S. and Chinese approaches? If you could comment on that.

ROSENBERGER: I’m happy to take a first crack. I mean, we don’t see significant AI development out of Europe, right. Google’s DeepMind is actually based out of the U.K., but we don’t see a whole lot of other European-based development of AI, or really much in the broader tech space, and there are a lot of reasons for that.

But to kind of pull the strand back to get to the crux of your question, you know, AI feeds off data, right. That’s how it learns. That’s how we train it. That’s how we make it good. And the European approach, by putting a pretty heavy regulatory framework around data, including things like the right to be forgotten, can constrain the use of data for AI training.

You know, in China this is not an issue. You’ve got 1.4 billion people who have no choice but to hand over their data not only to Chinese tech companies but, because those companies are, you know, very closely intertwined with the government, effectively to the Chinese Communist Party and its constituent surveillance elements; there’s a lot of good work documenting how that data flows.

In the United States, we haven’t set up a regulatory framework around data yet. I personally am a strong advocate of doing so. I think thinking about data privacy and data protection is a critically important question, especially when we come to dealing with broader questions of information manipulation online.

But there’s a quandary here because if we do that in a way that inhibits data flow to AI development, we’re then setting ourselves up to lose the AI arms race with China. And so I think this is a conversation that has not yet taken the kind of policy debate shape that we really need. I mean, this is a core, core question, I think, for policy makers when it relates to technology and when it relates to questions of privacy, and I don’t see that happening in any robust fashion and I think, frankly, we’re late to that game.

CHESNEY: I’ll add to that that we have this federalism system, this decentralized system, in which the federal government may not be playing the game but some states can, and, you know, famously, California, as you know, has decided to enter that space in GDPR-like fashion. So right now we have an interesting sort of hybrid in the U.S., where we have some elements that are moving forward with the "protect the data, maximize privacy" model, we have other elements that are not that at all, and it’s not really clear where we’re going to land.

I mean, it does seem like what you’ve got is an emerging set of blocs with Europe taking the individual privacy protection model to the maximum extent, China, obviously, going the complete opposite direction, and the United States sort of traveling this middle path, and one of the great questions over the next ten years is what will this mean—are we going to really see this Balkanization of the information environments where you can choose the American model which may become sort of the Anglo-American model with Brexit, the continental European model, and then the Chinese model, and there will be competition elements of that especially between the United States and China that Europe’s not really going to play with. They’re going to be on the consumer side, not the generator side.

ROSENBERGER: Right. That’s right.

OVADYA: There’s also the challenge of Chinese apps getting a foothold in the U.S. market. Right now TikTok is blowing up, and it’s a Chinese-based company, and where is that data going? Or companies like Grindr that were bought, which is basically the best information you can use if you want to extort someone, unfortunately. (Laughter.) And so there are a lot of challenges there.

CHESNEY: You know, I would add to that the genetic 23andMe—

ROSENBERGER: Yes. Yes.

CHESNEY: —that sort of—that space. The amount of genetic data that we are putting into the hands of companies that actually in many instances now are owned by Chinese entities is really remarkable.

CHRISTENSEN: Yeah. One can imagine a future—maybe we don’t want to—in twenty or thirty years where that DNA data is really used to make some seriously deep fakes, right—(laughter)—but, hopefully, well after my time. (Laughter.)

CHESNEY: Don’t count on it.

CHRISTENSEN: Don’t count on it. True.

Laura, you mentioned the fact that the United States and China, largely speaking, are leading the way on AI as blocs, at different paces; everybody else seems to be fairly far behind. When we look at the possibilities for technology to assist us in countering deep fakes, for example, and for technological solutions to the challenge that these developments will bring to our dialogue: Bobby, are there technologies out there? Are there ways in which we can put up some walls or try to protect ourselves, things like watermarking, as in the copyright space? Are there technologies being developed side by side?

CHESNEY: Absolutely. Right. So there are many technologies that are relevant for this conversation. When Danielle and I have spoken about the deep fakes question, the first thing people tend to say is, well, if technology is the cause of this problem, isn’t it also probably the solution, and, certainly, it’ll be part of the solution. Two different technologies are really relevant for shallow or deep fakes. One is detection technology, and the general consensus is that it’s an arms race.

The capacity to detect is growing as well, and you occasionally will see stories—there’s been a flurry of them in recent years—saying, ah, well, deep fakes, not so good, because it turns out people don’t blink in the videos they create. Well, that’s super easy to fix, and as soon as people pointed that out, all the databases that were feeding the generators were adapted to make sure normal human blinking was included in the feed. And now the newly generated deep fakes include blinking. Or if it’s the subtle skin colorations caused by blood circulation—whatever it is—once it’s known what the tell is, the databases can be modified to account for that. So there’s that arms race on the detection side.
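The blinking "tell" can be reduced to a simple heuristic: flag clips whose apparent blink rate is implausibly low. The sketch below is hypothetical; it assumes a per-frame eye-aspect-ratio series already extracted by some face-landmark tool, and the thresholds are illustrative rather than tuned values.

```python
# Hypothetical blink-rate heuristic of the kind early deep-fake detectors used.
# Assumes eye_aspect_ratios is a per-frame series from some face-landmark tool;
# the thresholds below are illustrative, not tuned values.
def count_blinks(eye_aspect_ratios, closed_threshold=0.2):
    """Count transitions from open eyes to closed eyes."""
    blinks, eyes_closed = 0, False
    for ear in eye_aspect_ratios:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False
    return blinks

def looks_suspicious(eye_aspect_ratios, fps=30, min_blinks_per_minute=5):
    """Flag clips whose apparent blink rate is implausibly low for a human."""
    minutes = len(eye_aspect_ratios) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_aspect_ratios) / minutes < min_blinks_per_minute
```

As noted above, once a tell like this is public it can be trained away, which is why detection alone remains an arms race.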

Much more interesting is the digital provenance issue set. And I was looking through the list of attendees; I don’t know if Tara from Truepic is here. Yeah. So Truepic is an—sorry. Uh-oh. (Laughter.)

CHRISTENSEN (?): What just happened? (Laughter.)

CHESNEY: Truepic is doing really cool work. It’s one of several companies in the digital provenance space, and what this means is coming at the problem from the supply side: trying to increase the extent to which there are authentication-based technologies—watermarks, if you will—built right into the content at the point of capture. And it’s really cool when it works.

The dilemma from a policy perspective, the reason this isn’t a panacea yet and might be hard to get to, is that if this is really going to work to minimize fraudulent video, audio, and photography, you need ubiquitous uptake of digital provenance and watermarking in all the capture devices. You’ve got to get it in iPhones. You’ve got to get it in Pixels. You’ve got to get it in everything that’s got a microphone or a lens. That’s going to be an uphill battle. And then you need it to be ubiquitously adopted by at least most platforms, major and minor, at least as a filter where they’ll flag content and alert users to the lack of authentication of that kind if they do allow it, or, if we really want to go there, I suppose you could make it a necessary condition for upload in the first place.

There are a million reasons why this may not be a situation we’re going to get to any time in the next many years. So we shouldn’t view it as a panacea. It’s probably part of the solution, and, for my part, if we’re going to go the digital-provenance-is-the-answer route, I hope that what the platforms do is not bar uploading of content that lacks it but simply provide some sort of user-accessible flagging so you can take it into account. Maybe it affects the algorithm of how things get circulated as well.
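One way to picture the capture-time authentication described above is a device signing each image as it is captured and a platform verifying that signature at upload, flagging rather than blocking content that lacks it. The sketch below uses an Ed25519 key pair from the Python cryptography library purely as an illustration; it is not how Truepic or any particular vendor actually implements digital provenance.

```python
# Illustrative capture-time provenance check: a device signs image bytes at
# capture and a platform verifies the signature at upload. This is a sketch,
# not how Truepic or any specific vendor implements digital provenance.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()        # burned into the capture device
device_public_key = device_key.public_key()      # registered with the platform

def capture_and_sign(image_bytes: bytes) -> tuple[bytes, bytes]:
    """At the point of capture, attach a signature over the raw content."""
    return image_bytes, device_key.sign(image_bytes)

def platform_accepts(image_bytes: bytes, signature: bytes) -> bool:
    """At upload, flag (rather than block) content that lacks valid provenance."""
    try:
        device_public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

photo, sig = capture_and_sign(b"...raw sensor data...")
print(platform_accepts(photo, sig))              # True
print(platform_accepts(photo + b"edit", sig))    # False: content was altered
```

The policy problem Chesney raises is exactly that this only helps once signing is ubiquitous in capture devices and checking is ubiquitous on platforms.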

CHRISTENSEN: Yeah. Before we get to the final question on what other regulatory or government initiatives could address the problems of deep fakes, are there positive scenarios we can imagine for the use of deep fakes? One that I can imagine is, in an educational setting, recreating characters so children can view historical events that were never filmed, as a way to bring them into the story better. But short of that, I struggle, outside of the Hollywood context, to see the positive scenarios for the use of these, and my follow-on question would be: if there aren’t any, should there be more of a regulatory approach to keeping them out of the information flow that we all live in?

OVADYA: So I’ve got answers to both of those, at least. So, positives—well, many of you take video calls. Sometimes you don’t always look your best. What if you always looked your best, no matter what you were wearing or not wearing? So there is that. (Laughter.) There is that. But there is also this: what if you wanted to speak to someone in their own language, with your voice and your intonation translated perfectly, right? Those are positives. What if you want to actually have that seamless interaction with that science fiction-level virtual assistant? So there are nonzero benefits to these technologies and to the things that come out of them.

On the regulatory side, can we actually put this genie back in the bottle? I don’t think there is an easy way to create regulation that says you cannot use a particular type of technology. You can regulate a platform where there’s centralized control. But when it comes to a base technology, it’s like saying you cannot, you know, broadcast things on radio waves. It becomes a lot more challenging, especially when everyone actually has radios anyway. You all have a computer. You all have a million computers, and we can’t just say you can only do some things on a computer without completely abrogating our personal sovereignty over our devices, which is a much longer conversation.

ROSENBERGER: Yeah. I would just say as a fundamental principle truth is essential for democracy. Truth is essential for a functioning democracy, and so the more we enable not only sort of manipulation of information but that we create scenarios in which truth does not seem to exist anymore, I think we’re doing real fundamental damage to the underpinnings of our democracy. So that’s sort of a fundamental principle for me.

However, I also believe deeply that free speech is fundamental in our democracy and that includes falsehoods, right. Whether we like it or not, that includes opinions we don’t like. And so that’s why I’m—in part why I’m skeptical of regulatory approaches to this particular piece of the puzzle and I also agree with the view that I don’t know that this genie can be put back in the bottle or that you can say certain kinds of technology are out of bounds.

Now, I do think that there are ways—you know, as we think about broader questions of regulation around technology in general and the obligations of technology companies, I do think that there are real questions there that can be, you know, at a more basic level, constructed, even if we think about just the basic premise of Section 230 of the Communications Decency Act.

You know, Senator Wyden talks about how, you know, that law was always intended as having both a sword and a shield, right. It was to shield the companies from liability but also to give them a sword to be able to police what happens on their platforms. Now, these platforms have taken on a role that was probably never envisioned by Section 230, right. Section 230 certainly enabled their creation in the first place but now I don’t think it’s exactly, you know, anything like this could have been envisioned back in 1996. But I do think that there is a space there for companies to be thinking about this and for the broader regulatory discussion to continue to enable that.

The last piece I would say, though, is when it comes to regulating any particular kind of technology, technology is always going to move faster than law and regulation, and the minute we start trying to regulate or pass legislation about particular pieces of technology we’re probably going to be fighting the last battle while we’re already in the next war and, you know, it’s going to be a losing race. So that’s another reason why I’m skeptical of that approach here.

CHESNEY: (She asked ?) about the beneficial uses. When Danielle and I wrote our paper—which, by the way, if you’re interested in reading the longest, more-detailed-than-you-want paper about this, Google “Chesney deep fakes” and you’ll get a copy of it—we had a section where we felt it was important to list the beneficial uses, and we kind of struggled. We mentioned education, art, and so forth. Aviv has definitely given me hope that I can stop having to attend any VTC or other meeting and have my virtual me show up and say smart things and look good at the same time. So that’s clearly—(inaudible). (Laughter.)

A different one that we learned about involves the company Lyrebird, which I mentioned earlier. In the original draft of our paper we had some criticism of this company, because some of the claims they made on their website about their corporate responsibility struck us in kind of a negative way, and then I felt so bad about it later when I learned about one particular kind of pro bono public activity they engaged in. They’re using their algorithm-generated voice simulation technology to restore and give original voice to ALS—Lou Gehrig’s disease—victims who’ve lost their voice. A dear friend of mine died from that not that long ago, and I was just really struck by, wow, that’s a use of avatar technology that is so powerful.

In terms of other responses, I mean, it’s completely correct that legal and regulatory solutions to the general problem of fraudulent imagery or audio or photography sound good in theory, but you have difficulties of attribution and finding the original actor. You’re much more likely to have an effect if you put pressure on the platforms themselves. But the platforms would then quite rightly say, OK, but you’d better define really specifically what it is we’re not allowed or not supposed to circulate, and you’d better define it in a way that makes it reasonably possible for us to scale up the filtering and detection and removal of that—go. That’s a really hard problem.

We know that from the hate speech context. We know that from the—as the first panel said, the terrorism speech or the incitement speech context. This is a general problem the platforms are struggling with and the first panel I think nicely laid out all the practical difficulties of scaling up to perform that function. And it ultimately comes around to a question of will they on their own as a matter of political and market pressures or will they, because they get pushed to it by our government or others, develop a sufficiently robust process and a sufficiently transparent, by the way, process—you want to have some idea how this works—to decide what content gets pulled and what doesn’t in an environment in which people don’t agree about what sort of frauds cross that line. What’s political satire? What’s a bit of lighting manipulation? You know, think about how many political commercials you’ve seen where the candidate for the other side is depicted always off kilter, grainy, gray, voice sounds like it’s an echo chamber. Is that a deep fake or is that a shallow fake or is that just fair bounds of sharp-elbowed political content? I don’t know that we’re going to actually have solutions to that.

OVADYA: If I can add to that for a second. Yeah. So I don’t want to imply that I think regulation doesn’t have a part to play in this broader set of issues, and even in the issue of deep fakes, or, as we call it more broadly, synthetic media. I think the frame here is sort of around the cost dynamics, to some extent. What will make it harder to spread something that is a weaponizable fake? What will make it cheaper to spread something that’s actually real and true?

And so those are the underlying dynamics, and you can work on both sides, and you can have regulation in a variety of different areas: things that affect the creation of content, the distribution, the ways in which people form beliefs, and the ways in which it impacts our specific institutions, whether they be courts or, you know, journalism. All of these have different places where they’re vulnerable and where they can become stronger.

And I also want to just touch on Lyrebird. They actually provide a good example of this. One of the things that I admire about them is that, right now, if you want to make a copy of a voice, they give you a particular sequence of words and you need to repeat that sequence of words in order to train the model. That also provides verification that you, the human whose voice is being used, are the human who essentially gave consent to that voice being used, right. Because the system gives you a random string of words and, in effect, says, read this to generate a voice that can then be used to say anything else, copying someone else’s voice becomes much harder: you would have to somehow find those particular words and resplice them together in order to train the model. And so it creates a barrier to entry. It increases the cost of creating that fake voice model, which can then be used to say anything.

CHRISTENSEN: And that’s—

OVADYA: And so that’s an example of something—a regulation you might advocate for if you’re creating technology that allows this.

CHRISTENSEN: And that’s one reason why you should never say my voice is my password ever again, as you see in the movies. Doesn’t work. Too easy to copy.

CHESNEY: Verify me.
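The consent mechanism Ovadya describes, requiring the speaker to read back a randomly chosen phrase before a voice model can be trained, can be sketched as a simple challenge-and-check step. Everything below is hypothetical: the word list and matching rule are stand-ins, recording_transcript would come from whatever speech-to-text step a real system uses, and Lyrebird’s actual implementation is not public.

```python
# Hypothetical sketch of the challenge-phrase consent check described above:
# the person whose voice is being cloned must read back a randomly generated
# phrase, which doubles as proof of consent. recording_transcript is assumed
# to come from a speech-to-text step; this is not Lyrebird's actual system.
import secrets

WORDLIST = ["amber", "canyon", "drift", "ember", "falcon", "harbor",
            "meadow", "nimbus", "quartz", "saffron", "tundra", "willow"]

def make_challenge(num_words: int = 6) -> str:
    """Generate an unpredictable phrase the speaker must read aloud."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(num_words))

def consent_verified(challenge: str, recording_transcript: str) -> bool:
    """Only allow voice-model training if the recording matches the challenge."""
    return recording_transcript.strip().lower() == challenge.strip().lower()

challenge = make_challenge()
# recording_transcript would be produced by transcribing the submitted audio:
# if consent_verified(challenge, recording_transcript):
#     train_voice_model(audio)   # hypothetical training step
```

Because the phrase is unpredictable, an attacker cannot satisfy the check by splicing together old recordings, which is the cost barrier Ovadya points to.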

CHRISTENSEN: So we’ve now reached what I think is the highlight of any Council session, which is when the members get to throw questions at our panel. I hope you’ll ask some hard questions. As always, please stand up, identify yourself, and ask a concise question, if you will, please. And other than that, I will open up the floor to the members. And if I may, since we already put you on the spot, if you have a question, then please, let’s go.

Q: Thanks so much for the shout-out. My boss is really going to like that. So my name is Tara Vassefi. I just started with Truepic last week.

And my background is actually—(laughter)—my background is actually as a lawyer; I was working on basically collecting, verifying, and archiving digital evidence for human rights cases and war crimes prosecutions. Obviously, that process of verification is very robust and there’s a lot that goes into it, and I’m having trouble kind of shifting from the hours that go into making evidence meet a(n) evidentiary standard for a criminal prosecution to, you know, content that’s just kind of captured by your phone and user generated.

And I think there is a concerning conflation between, yeah, authentication and verification. There’s a notion that if something is authenticated then what the thing is saying is true. And so I’m trying to figure out what’s the best way to apply some of that high evidentiary standard of verification to user-generated content and distinguish that: yes, if we can prove that this image was taken at a certain place at a certain time, with barometric pressures and all these other cool things that Truepic does, then how do we translate that into, OK, well, what is the person saying and what is actually verifiable in the content or the behavior that’s manifesting? So I’d love to hear your thoughts on that.

CHRISTENSEN: So does AI have a role to play in this? I think it does. But—

OVADYA: Well, I guess I’m not one hundred percent sure I understand the question here. You’re asking—and if someone else believes they understand, feel free to chime in—authentication in this case meaning this is the person, this is the phone, this is the location; verification meaning this event happened. Is that the distinction that you’re trying to—

Q: The content of what it is that is being authenticated. So, like, is there a crime being committed or—

OVADYA: Right.

Q: —is the person who is saying these things and we know that they’re saying it what—if they’re—what they’re saying is actually true, does AI have a role or, you know—

OVADYA: I see.

Q: —can we use some of this technology not just to prove that the thing hasn’t been manipulated but also to show what we really want to get out of why we’re viewing these things, which is the truth behind what’s being captured.

CHRISTENSEN: So fact checking, in the modern parlance.

OVADYA: Right. Yeah. I mean, I think that AI is far, far away from that currently—you know, it’s hard enough to do content moderation when it’s necessary at the scale of billions of people. When you’re talking about verification of this form, my guess is that you definitely want humans involved, especially in a human rights context, for the foreseeable future, and I think that the first place we’ll see this improving, at least a little bit, is going to be in the content moderation space.

CHRISTENSEN: Let’s see if we can get another question. The gentleman in the back.

Q: Bob Boorstin, Albright Stonebridge.

None of you mentioned India. You’ve said U.S. You’ve said Europe. You’ve said China. India is trying, and I’ll put emphasis on that word, to develop a third way when it comes to information. Why didn’t you mention it?

CHRISTENSEN: I’ll take that as my responsibility but why—let’s talk about India. What do we see in the Indian context and, for that matter, are there other players—Israel, I know, has a very vibrant industry around AI and these things—are there other players that are coming out with a different approach besides the ones we’ve talked about that we haven’t mentioned?

CHESNEY: I’ll just say I’m not familiar with the details on India, so I don’t talk about it, out of ignorance. But my main concern when I began writing in this area was what it means for the United States: for the U.S. government, our lawmakers, our regulators, our intelligence community, our military, and local authorities. So it was intentionally framed as an America-specific project, in my case.

OVADYA: I guess I’ll add that India is very interesting in that it is, you know, the most populous democracy, and there’s still a large nonliterate population in India. And so when you have YouTube being a major player in India, and many people using it who are not able to read or write, the implications of this sort of technology become even more staggering, as does just the ability to fact-check, you know, when you cannot read or write in that context.

So I think that’s one of the interesting wrinkles that it provides, in addition to, as you may have heard, some of the ways in which mass violence has been linked to social media or messaging within that context, and India is exploring some interesting regulatory options. But I’m not as familiar with the details there.

ROSENBERGER: Yeah. Similarly, I’m not as familiar so I’ll definitely have to look into it. But there are, of course, many other players in this space. You know, you mentioned Israel. Of course, Russia as well. I mean, Russia is far behind but Putin has made similar comments to Xi Jinping: to paraphrase, in short, whoever masters AI will rule the world. I mean, that’s essentially what they have both said in different verbiage.

Now, again, Russia is far behind. But there’s actually—you know, technology is increasingly an area where there are signs of cooperation between Beijing and Moscow. So there’s been a lot of attention to the recent joint military exercises that occurred between China and Russia. But the same day of those exercises there was a major deal inked between Chinese and Russian tech firms that will allow for things like data transfer and other kinds of tech sharing and collaboration. So I do think that there’s a number of other countries in this space that are important to be watching both on the tech development side and on the—you know, how they’re approaching the data questions.

CHRISTENSEN: Please. You have a mic coming up there. Yeah.

Q: Thanks. Kim Dozier, contributor with The Daily Beast and CNN.

So I’m picturing a nightmare scenario, because that’s what reporters do, where a government news agency in the Middle East releases a piece of video purporting to show the American president calling for turning a certain country into glass. Now, I, as a reporter trying to decide whether to run that video or not—we do all sorts of things right now where we get stuff that we see on social media and we try to cross-reference it and make sure it’s true.

So, say, I, here, would decline to run it on The Daily Beast site, but it’s running across the Middle East. U.S. intelligence comes out and says it’s not real, and here’s how. But, of course, no one in the Middle East believes U.S. intelligence. So what body right now, either at the U.N. or among some of the cyber cooperation bodies that we’ve created, has the international sort of wasta, the influence, to be believed?

CHESNEY: I don’t think anyone does. I think that if you look at parallel scenarios, such as the DPRK attack on Sony—the cyberattack on Sony Pictures Entertainment—the U.S. government came forward with an attribution, a completely correct attribution, and so many people didn’t buy it and so many people critiqued it.

So as your question suggested, if it comes from us and it favors us, then it’s not going to be credited if it’s an internationally salient issue like that. I certainly don’t see much in the way of international bodies that are, A, likely to have credibility and, B, likely to generate consensus on the issue. Something like a presidential statement may, like the Acosta episode, have the virtue of so many other cameras and microphones on the scene that it is at least easy to debunk, for what it’s worth—and I completely agree with Laura’s point that the debunking never catches up with the original lie—but you can at least close that gap and create a smaller impact.

It’s situations like, oh, I don’t know, the Helsinki summit, where nobody’s in the room where it happened and we don’t know what was said. If someone comes out with high-quality audio that appears to indeed be the president’s voice making promises to Putin, that, you know, we are not going to do X if you do the following in the Baltics, that could have real repercussions, that could cause real things to happen, and it would be hard to debunk.

ROSENBERGER: Yeah, I agree with Bobby that I don’t think that thing exists right now. There are some interesting proposals that are being batted around, particularly led by the private sector, so Microsoft in particular. Brad Smith has been very engaged on thinking through whether there are some private sector collaborative functions that can be brought together both in terms of attribution, sharing signatures, obviously, which already happens but in terms of joint attribution largely on the cyber side. But I think they’re also thinking about the possible info ops applications of this.

There’s been some thinking—and I’m blanking at the moment on who authored this paper—about standing up some kind of independent body; they use the IAEA as a model. I think it’s an imperfect parallel. But something like that could create some kind of verification and attribution function. Again, I think these are largely aspirational. I think they will require an enormous amount of political will.

I think you’re always going to have the problem that even if you have some independent commission backed by, you know, very powerful private sector entities with a lot of resources behind them, you’re going to have nation states that may have an interest in casting doubt on some of these questions, and so you’re still always going to have this tension there.

And so I’m not sure it’s perfect but I do think there are some interesting proposals out there that are worth thinking about and, frankly, even if I think some of them are not super practical I’m really glad that people are thinking about these issues and I think we need to be doing some more of that.

CHRISTENSEN: And the reality is that by the time the response comes out, several embassies will have been turned to dust or some god-awful other disaster will have happened. So—

CHESNEY: Or the vote will have happened.

CHRISTENSEN: Or the vote will have happened. Right. So, let’s see—the gentleman over here, please.

Q: Thank you. My name is Nathan Fleischaker. I’m with the DOD.

So we spend a lot of time talking about the technical ability to make realistic videos or audio. Is there much AI being used to figure out what the video or the audio should say? So, like, looking at how somebody says something that’s going to incite the most violence or the most rifts. So it seems like there’s making something realistic, but also, what does that realistic thing accomplish? And how do we, or how does the adversary, figure out what that thing should be saying?

CHRISTENSEN: Is that in terms of persuasiveness? Like, what would work the best with that culture or that language group, that location?

Q: Correct. Yeah. So if this is about influence or influence operations how do we make things that are most influential. So one part is making sure it’s realistic but then what does that realistic thing actually say to have the most influence or to do whatever the objective is.

CHRISTENSEN: Yes. In a way, does AI—can AI be used to help shape someone’s message so that it’s most impactful.

OVADYA: Yeah. I mean, I think one of the pioneers in this space is Pixar where, you know, they put a lot of people in a room and, basically, track their emotions and, like, tweak that until it really, like, hits as hard as it can, right. And right now on your iPhone you’ve got your, you know, complete 3-D scan of your face in real time and you have a computer there that can, you know, generate video in real time and you can imagine something that literally goes through that process for you, and so that’s a world that we’re entering. I haven’t seen this actually be weaponized in any meaningful way. I think there is a lot of work to be done to make that happen, which hasn’t been as well researched yet, which I think is probably a good thing. But that’s a world that we could be entering into.

CHESNEY: I think the world of marketing is very much all about micro targeting as a means to figure out which distribution channels will give you the highest leverage for your message. And so that’s not quite what you’re asking, but I think that’s the place where AI currently provides the most bang for the buck—micro targeting.

OVADYA: Right. Yeah.

ROSENBERGER: I would just say, really quickly, just candidly, since you’re asking from the DOD perspective, I’ve been quite clear that I don’t believe the U.S. government should be engaged in these kinds of operations ourselves. So I just want to stipulate that up front because I think it’s very important. If we are trying to defend democracy, engaging in a race to the bottom of what is truth is not where I think we should be. So just to state that.

However, broadly speaking, I’m thinking from the adversary perspective, in terms of what they’re doing, and trying to anticipate that. You know, the things that we see from disinformation more broadly I think apply very similarly in the deep fake space. And so it’s things that hit on highly emotive issues. It’s things that speak to outrage. It’s things that speak to preconceived positions, right. So using micro targeting to identify what people may believe, then reinforcing those beliefs or just taking them one or two notches further, is particularly effective. And then, basically seizing, exactly as the Russians have, on any divisive issue in our society—those are, broadly speaking, the categories that are most effective.

CHRISTENSEN: Up here. I think you’d be getting the counter U.S. government rebuttal at this point. (Laughter.)

ROSENBERGER: I’m fine with that.

Q: Glenn Gerstell, Department of Defense.

I’m not going to rebut that, but I do—Laura, would like to follow up on your comment before about—you commented that truth is, of course, the foundation stone for democracy. How do we deal with the fact that all the solutions that are proposed or all of the solutions I’ve heard that have been proposed to the challenges presented by AI involve in some way some curtailment on liberty?

In other words, in order to protect our democracy—I’m making this an exaggerated statement but just for purposes of getting the point across—in order to protect our democracy against the challenges posed by some of these new technologies we need more regulation, more curtailments of liberty, restrictions on anonymity, et cetera. I’m not advocating it. I’m just asking for the question of how do we deal with this paradox or irony. Thank you.

ROSENBERGER: I think it’s a really important question. You know, my own view is very much that we—in protecting democracy, we have to protect democracy and, in fact, strengthen it. And so I’m not in favor of steps that would intrude on First Amendment rights. I will go down swinging for my political opponents’ ability to have views that I disagree with. But I do think that there are steps. Number one, I don’t necessarily think that regulation alone equals curtailment of liberty. There are regulatory frameworks that actually in many ways can enhance freedoms and liberties and so I think it’s all about how we craft those.

Two is I think we’ve talked about this a lot in terms of content, which is how these conversations often go. But a lot of the most interesting approaches to rooting out the—particularly speaking from the sort of Russian disinformation angle, one of the most interesting approaches to dealing with that has absolutely nothing to do with the content that they’re pushing. It has to do with the origin and the underlying manipulative or corruptive behavior whether that’s coordinated inauthenticity, whether that’s covertness in terms of misrepresentation of who people are.

I would distinguish misrepresentation from anonymity. I think those are two different things. And so there are ways to think about this. Facebook talks about it as coordinated inauthentic behavior. I think that’s one frame. I’ll just give one very interesting example here: Facebook recently, just yesterday, announced more details of their most recent takedown, which they conducted right before the midterms, and one of the things they talked about was that in the content there was a whole bunch of content about celebrities.

Now, they removed that not because of anything about the content itself. The reason celebrity content was there—and we see this consistently in the Russian disinfo operations—is it’s building an audience, right. These operations are only effective once you have a following. How do you build a following? Well, you share interests with the people. You do that by hopping on trending topics. You do that by talking about celebrities or TV shows or things like that. It has nothing to do with the content. It has to do with what the intention is and the underlying behavior and the origin in that instance. And so I think the more we can think about it from that perspective and less about the content, then we don’t get into the same free speech quandaries that we’re talking about.

The last point I would just make is when it comes to, again, going back to Section 230 and the terms of service that the platforms have, I mean, they actually have quite a bit of free rein under our current regulatory framework to enforce their terms of service, pretty much all of which prohibit this kind of coordinated inauthenticity and manipulation.
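To make that behavior-versus-content distinction concrete, here is a minimal, hypothetical sketch of how coordinated inauthentic posting might be flagged purely from behavioral signals—who posted, when, and from how new an account—without ever judging whether the content is true. The field names and thresholds are illustrative assumptions, not any platform’s actual rules.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative thresholds (assumptions, not platform policy):
MIN_ACCOUNTS = 20                 # many distinct accounts...
WINDOW = timedelta(minutes=10)    # ...posting the same text within a short window
MAX_ACCOUNT_AGE_DAYS = 30         # mostly newly registered accounts

def flag_coordinated_posts(posts):
    """Flag clusters of near-simultaneous identical posts from young accounts.

    Each post is a dict with hypothetical fields:
    account_id, account_created, posted_at (datetimes), and text.
    Note: this looks only at behavior (who posted, when, how often),
    never at whether the content itself is true or false.
    """
    by_text = defaultdict(list)
    for p in posts:
        by_text[p["text"]].append(p)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["posted_at"])
        accounts = {p["account_id"] for p in group}
        if len(accounts) < MIN_ACCOUNTS:
            continue
        span = group[-1]["posted_at"] - group[0]["posted_at"]
        young = sum(
            1 for p in group
            if (p["posted_at"] - p["account_created"]).days <= MAX_ACCOUNT_AGE_DAYS
        )
        if span <= WINDOW and young / len(group) > 0.8:
            flagged.append({"text": text, "accounts": len(accounts), "span": span})
    return flagged
```

A real system would combine many more signals (shared infrastructure, follower graphs, registration patterns), but the design point stands: nothing in this kind of check touches the truth or falsity of the speech itself.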

OVADYA: Oh, adding to that, so there’s this—the content approach. There’s a behavior approach and then there’s also sort of this underlying infrastructure that enables this stuff to happen, right. And so that’s the marketplaces that allow you to sell the activity. I think you alluded to that earlier. And, for me, one of the clear places where regulation might be able to jump in is saying if you are selling accounts at scale that is illegal. There is—I have yet to hear of a strong legitimate reason to be selling thousands of Facebook accounts or Twitter accounts. If someone can tell me that I’d love to hear it. Come up to me afterward. (Laughter.)

But those sorts of marketplaces, and that can be international. I mean, I think that that’s a place where you might even be able to get international agreement. So that’s, like, a very concrete infrastructure layer, behavioral—I mean, a regulatory approach. But there’s also—just going back to the deeper question of, like, democracy, liberty, freedom, I think that these are the same challenges that our Founders faced.

When you’re trying to lay out how do you balance these powers, how do you limit government, how do you limit, like, what are the tradeoffs that you’re making in order to sustain a democracy, we have to be cognizant that that is not a question where—the answer to that question is in light of the capabilities of the people, right.

When you’re trying to talk about what is the capacity or what should government be able to do, what should it not be able to do, what should people be able to do, what should they not be able to do, that’s in light of the powers that people have, and if I can create a megaphone that people can hear all over the world maybe it’s a little different than I can just talk to people directly around me, and there might be different properties around how that operates in that new regime.

CHESNEY: I just wanted to add that I very much am with Laura on the solutions here need to be in the nature of not suppressing speech, not the—sort of the European model of let’s make information go away but, rather, let’s have more information. And you’ll never close the gap entirely but more information is the better solution.

Now, part of that also leads to something I hear about a lot in these discussions, which is, well, shouldn’t we just be trying to educate ourselves—get people more sensitized to the risk of this sort of deception so they understand that their—our natural, indeed, hereditary inclination to trust our eyes and ears can trick us now in the digital world in ways you may not expect.

And that’s true up to a point. But I think one of the most easily missed aspects of this debate is if you do a lot of that—if you really pound the drum about deep fake capacity being out there—you’re going to open space for people to get away with things that actually were captured in legitimate audio and video. In the paper we call this the liar’s dividend, which is just a clunky way of saying that instead of crying fake news, people will cry, well, deep fake news even though there’s video of them saying or doing something that they shouldn’t have done.

CHRISTENSEN: Right. And on that last closing point, I think we, obviously, just touched the surface here of a fascinating topic. But it is time for the session to wrap up. I want to thank this fantastic panel for their time and great contributions and I invite you to join us for a coffee break before the third session begins at 11:15.

Thank you very much. Please join us. (Applause.)

(END)

Session III: Keynote Session with Richard H. Ledgett Jr.
Richard H. Ledgett Jr.
Judy Woodruff

This symposium convenes policymakers, business executives, and other opinion leaders for a candid analysis of artificial intelligence’s effect on democratic decision-making. The symposium is timely as countries such as China, France, Germany, the United Kingdom, and the United States rush to invest in artificial intelligence to solve cybersecurity challenges and stem the spread of disinformation online.

WOODRUFF: Hello, everyone, and welcome to this—I guess the final part of your—of your morning symposium. And this part of it is with Richard Ledgett. I’m Judy Woodruff, the anchor and managing editor of the PBS NewsHour. I’m going to be moderating the discussion. I think both of us acknowledge that we have the disadvantage of not having sat in on your morning conversation. So we recognize there may be some things that came up in those—in those discussions that you’ll want to bring up when we turn it over to you for questions. So please feel free to do that.

I just want to say at the outset I’m glad to be here to talk about—really, to help facilitate this important—what is more important than preserving the health of our democracy. And that really is what we’re talking about, as we face growing threats from disinformation and people attempting to undermine our system of government. And I’m really glad to be here with somebody who understands the U.S. intelligence community as well as or better than anybody around, in Rick Ledgett. As you know, he has four decades of experience in intelligence, in cybersecurity, and cyberoperations. He spent twenty-nine years at the NSA, the National Security Agency. At the end of that time, he spent more than three years as its deputy director, until his retirement in April of 2017.

I just want to say I want to invite—you will be invited in thirty minutes. We’re going to have conversation for thirty minutes and then I’m going to be turning it over to members for questions. The question before us, as we know, is will artificial intelligence curb or will it turbocharge disinformation online. And I want to begin with the state of disinformation right now.

Rick Ledgett, you were one of the authors of the intelligence community report on Russian attempts, Russian activities in the 2016 timeframe. But you’ve also studied Russian disinformation throughout its history. So we really—I think it’s important for us to start with an understanding of how Russia has operated, how the Russian government has operated for a very long time, how it thinks about information and about disinformation. So why don’t you start with that?

LEDGETT: Sure. I would like to make one small correction. I was not one of the authors of the intelligence community assessment. But they worked for me and I spent a lot of time with the people who were authors from the NSA side of the house. So I can talk about that more a little bit later.

So disinformation is not a new technique. It’s not new to Putin’s Russia. It was a mainstay of the Soviet Union, going back to the formation of the Soviet Union. In fact, it goes back to czarist times. The idea that the government owes the truth to its citizens is a foreign idea in Russia, and it always has been. Information’s been used as a tool to manage its population and as a tool to express the will of the Russian, and then the Soviet, and now the Russian again government in the international space. When someone in Russia says information security the meaning is very different than the meaning we apply to that phrase.

In the West, information security means keeping your information secure—defending it, making sure people can’t get to it, keeping it safe from improper use, that sort of thing. In Russia, information security means using information to secure the state. That includes disinformation. It includes propaganda. It includes, you know, weaponizing information in various different ways. And so the fundamental philosophy of how they do things is just very, very different than the way we think about information. There is no such thing as free speech in Russia. There may be the words “free speech,” but there’s no actual free speech. And the government takes a very active role in shaping the information policy of the—of the Russian citizens.

WOODRUFF: I think we all think we have a really good understanding of how sophisticated the Russians are or are not, but how would you size up their capabilities when it comes to disinformation, whether it’s spreading it in the United States or anywhere else in the world?

LEDGETT: They’re very capable. They have a long history of it, as I was just saying. And they have masterfully adapted their techniques to the modern tools that arose from the internet—so, social media. If you think about how you used to have to do disinformation and propaganda in the ’50s, and ’60s, and ’70s, and maybe into the ’80s, the way you would do that is you would get a sympathetic journalist, a sympathetic editor, a sympathetic book publisher, a sympathetic movie maker, and they would—and they were either a witting or an unwitting agent. And you would write a script, or a book, or an article that was part of the narrative that you wanted to put out there. And you’d go through the process of getting it vetted, and getting it published, and getting it disseminated to people. Long process. Complicated process. Limited in reach.

What social media has done, and the internet has done, is brought a direct channel from the Russian disinformation manufacturers to the minds of their target audience. It’s a truism in information operations that the target is the brain of the decision-makers. And the brain of the decision-maker that you’re going after in the case of something like voting or elections is the individual voter. And so social media gives them an unprecedented, direct venue to that at speed and at a scale that’s never been seen before in the world. And so they’ve done a really good job of adapting to that. And if I were the head of the FSB, I would be handing out medals and cash awards to everybody that was involved in this, because it’s been wildly successful from their point of view.

WOODRUFF: Since they didn’t develop much of the social media, and I think this is relevant to what we’re talking about with the next phase and, you know, as we get into artificial intelligence. How did they learn it so fast? I mean, what was it about their technique? Was it just that they said to a lot of people: Go do it, or else? I mean, how did they do it?

LEDGETT: So they’ve been practicing for a while. They’ve been practicing going back to 2007 in Estonia, 2008 with Georgia, 2014 in Ukraine, in Montenegro, in Moldova, in several countries, and also the Baltic states. In the near-abroad they’ve been practicing these techniques for a very long time. And they’ve gotten very good at it.

WOODRUFF: And so you’re saying by the time it came around to our election, 2016-2015, they were—they had a lot of experience under the belt?

LEDGETT: They did. And you could see the switch in 2015, when they—and this has been widely reported—the intrusions into the Democratic National Committee servers, and the harvesting of emails to use in a weaponized kind of a way. There were the initial intrusions—that was intelligence gathering by their version of the CIA, called the SVR. And then the next step was the GRU, the second actor that came in, who were the people who do information operations. So you could actually see them transition from one phase to another, in retrospect. Unfortunately, we didn’t catch that early enough in the process. And I think someone—I don’t know if Laura’s still here—oh, there you are. Hi, Laura. I think she may have described it as a failure of imagination. I think that’s an accurate way to think about that. We saw this being done in other countries and didn’t imagine that it would be applied to the United States in the same way.

WOODRUFF: How much of what they’ve done is due to—is a result of human hands-on activity, and how much of it is computer-driven, or in some way process driven?

LEDGETT: A lot of it is hands-on. There’s the Internet Research Agency, that’s actually the troll farm that is run by one of the oligarchs who does this for Russia. But they do use bots, which are basically automated sets of scripts and agents that will do things like raise the profile of a story by retweeting it or reposting it time and again. If you think about the algorithms that are used by Twitter and Facebook to rank stories, those are a form of—we say artificial intelligence; what I think we really mean here is machine learning, a subset of artificial intelligence. And so those machine learning algorithms are designed to cause the stories that fit certain profiles, that are, you know, proprietary to Facebook and Twitter, to spike and be more visible to people. And so the Russians have basically been successful in reverse-engineering those algorithms, largely through trial and error. Let’s try this and see what happens, see if I get the right output that I want from providing this input. And that then lets them manipulate those machine learning algorithms in ways that are beneficial to them.
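As a rough illustration of the trial-and-error dynamic Ledgett describes, consider a toy engagement-weighted ranking score. This is a deliberately simplified assumption, not Twitter’s or Facebook’s proprietary algorithm: the point is only that if rank is driven by raw engagement counts, a few hundred automated reshares can lift an obscure post above organic ones.

```python
import math

def toy_rank_score(shares: int, likes: int, age_hours: float) -> float:
    """Toy engagement-based ranking score (illustrative assumption only):
    log-scaled engagement, decayed by age. Real platform algorithms are
    proprietary and far more complex."""
    engagement = 2.0 * shares + likes
    return math.log1p(engagement) / (1.0 + age_hours / 6.0)

# An obscure story boosted by ~500 bot reshares can outrank an organic story.
organic = toy_rank_score(shares=40, likes=300, age_hours=3)
boosted = toy_rank_score(shares=540, likes=300, age_hours=3)
print(organic, boosted)  # the boosted score is higher, so it surfaces first
```

Once an operator learns, by probing, roughly which inputs the ranking rewards, the bots simply supply those inputs at scale.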

WOODRUFF: So what does that tell you about—again, we’re just on the beginning edge of what artificial intelligence and machine learning is going to look like. What does it tell you in the near term, as much as we can predict? We don’t have any way of knowing what all this is going to turn into—or, maybe you do, twenty, thirty years from now. But at this point, I mean, looking at 2020, is it going to have an effect?

LEDGETT: Well, I mean, it’s been going on since 2015 or so. It was going on through the past election, the midterms that just happened. And there is a—the Alliance for Securing Democracy, Laura and Jamie Fly cochair it. And, full disclosure, I’m on the advisory council. But they have been tracking these activities through a website called Hamilton 68. And they—if you look at Hamilton 68, you can Google it right now and go to the website, and it will tell you what the Russian-associated accounts were doing today and what themes they were—they were tracking. And so—and they generally fall into—Laura will correct me if I get this wrong—but I think three broad buckets.

Bucket one is stories that advance the Kremlin’s agenda in some way. Things that fit with their propaganda, disinformation regime. Second thing they do is things that denigrate especially the U.S., but Western democracies in general. And the third category of things that they do are things that cause conflict in society. So, for example, they rarely invent issues. But they’ll find issues and they’ll pile on both sides of the issues, because what they’re trying to do is pull the social fabric apart by getting people to read things online that are on the poles of their respective views, and to say things, and retweet things, or repost things online that are designed to—again, to tear apart the fabric of society.

WOODRUFF: How—they obviously know that we’re monitoring them. They know about Hamilton 68. They know about other efforts on our part to monitor what they’re doing. And we’re talking about it more out in the open. How is that affecting what they’re doing?

LEDGETT: So they’ve changed their tradecraft since 2016. They were fairly crude, they were fairly obvious. We were able to pretty easily detect them once the social media companies decided it was something that they wanted to do. They were able to pretty easily find the agents through characteristics of how they registered and how they behaved and, you know, more subtle things like they didn’t really have a lot of real person interaction.

But what they’ve done is they’ve changed their techniques now. They’ve changed how they register. They’ve changed how they maintain the accounts. They’ve changed how they propagate. They’ve also picked up something that—personally, I find fascinating. If you’ve heard of money laundering, they have actually done information laundering. And that’s a term that was coined by one of the ASD researchers which I really like. To me, it’s very evocative. If you think about money laundering—so I get my money through criminal enterprise, and I have to launder it through a series of businesses so I can produce it as income, and then put it in a bank, and then access it and use it the way I would want to use money.

Information laundering, you start off with something on a very fringe publication, maybe a blog posting or an article or something that’s way, way out there—on either fringe, it doesn’t really matter. And then it’s picked up through a chain of other blogs. It’s cross-posted, it’s cross-linked, it’s tweeted, it’s put on Facebook. And then it gets picked up by a news agency. And I think if you read the intelligence community assessment, you know that our assessment was that both RT—which, they’re like KFC, right? It’s not Kentucky Fried Chicken, it’s KFC. Well, they’re not Russia Today. They’re RT. They’re trying to sort of turn themselves into a disassociated brand. But they’re a state-run media enterprise. And Sputnik is another one that’s state-run.

So they’ll pick up those articles. Now they’re in the mainstream media. It might not be the mainstream that you like, but it’s part of the mainstream. And then they get picked up by other news organizations. And now you’ve got laundered information.

WOODRUFF: And they just did that, just by watching what was going on and figuring out how to expand their reach.

LEDGETT: They’re very clever. (Laughs.) They’re very clever.

WOODRUFF: So the fact—the fact that we are talking about it, writing about it, having these conversations about it, you’re saying they’re constantly adapting? And that—

LEDGETT: Yeah. It’s an arms race. It’s a—it’s a move, countermove, parry, riposte kind of a thing. And it’s always going to be that way.

WOODRUFF: Do you feel confident at this point that the—that U.S. capability—that we’re able to stay ahead of them, or not?

LEDGETT: So I think you have to break that down a little more finely than that. I think in terms of the technology—AI’s not evil. Machine learning’s not evil. Like all technologies, it can be used either way. But the use of machine learning to identify the source of information—because really what we’re talking about is—we’re not talking about necessarily filtering information beyond some categories that violate the acceptable use policies on the platforms. We’re talking about being able to say where this information came from. In the previous conversation there was a discussion about, you know, free speech and the right to free speech, even when it’s speech you don’t agree with. Totally agree with that. Totally support that. And I think anything we do that impinges on free speech is a bad thing.

But what we can do and should do is be able to talk about the provenance of data. Here is where this data comes from. And there’s some things associated with that that are both really suitable for AI, and some of them—and machine learning—and some of them are hard things to do. So for example, there’s the attribution of who the post came from or where it came from, being able to—in that information laundering example I talked about—being able to trace that back and say: Here’s where this actually came from. And here’s the pedigree of that information.

And then there is the—it’s almost deanonymization. One of the things about the internet is that most bad things on the internet come from the fact that people can be anonymous. But there are also good things on the internet that come from people being anonymous, like for people who live under the kind of state that wants to oppress its people. So there’s almost a deanonymization that you need to have in there, so that you can say that this came from, you know, someone who’s actually a troll in the Internet Research Agency in St. Petersburg.
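One way to picture the “pedigree” Ledgett describes is as a chain of republication links: follow them far enough and a now-mainstream story traces back to a fringe origin. The sketch below is purely hypothetical and assumes the republication links are already known; in practice, recovering those links at scale is the hard, research-grade part that machine learning is being asked to help with.

```python
# Hypothetical republication graph: article URL -> URL it was sourced from.
# In reality, reconstructing these links is the hard part; this only shows
# what "tracing the pedigree" looks like once they are known.
REPUBLISHED_FROM = {
    "https://majornews.example/story": "https://statemedia.example/story",
    "https://statemedia.example/story": "https://aggregator.example/post",
    "https://aggregator.example/post": "https://fringe-blog.example/original",
}

def trace_pedigree(url: str) -> list[str]:
    """Follow source links back to the earliest known origin of a story."""
    chain = [url]
    seen = {url}
    while url in REPUBLISHED_FROM:
        url = REPUBLISHED_FROM[url]
        if url in seen:  # guard against circular citations
            break
        seen.add(url)
        chain.append(url)
    return chain

print(trace_pedigree("https://majornews.example/story"))
# ends at 'https://fringe-blog.example/original', the laundered story's source
```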

WOODRUFF: So, I mean, are we—how good is the U.S. right now at doing that? At sort of working it back and figuring it out?

LEDGETT: So we’re getting better. I think the engine for that is the companies themselves. And I think they are becoming sufficiently motivated to do that. But I think there’s also—as somebody said earlier—there’s a customer demand signal that needs to be sent. People—you know, I want to understand where the information comes from.

WOODRUFF: And at this point, there’s so many Americans. Some of us have time to sit around and talk about this, but many Americans just—you know, they go about their daily lives and they pick up information here and there. And they don’t have to go figure out whether it was, you know, developed by, you know, some bunch of Russian officials working in St. Petersburg or someplace else. I mean, they just—they look at it and they say, oh, that’s interesting. What is—do you think enough is being done? What needs to be done to, I think, better inform the American people, better label information? Is there even a way to do that? Are we just—is the horse out of the gate and it’s too late to do that?

LEDGETT: I think there’s a way to get there over time. And it’s a combination of three things. One is, I do believe in regulation in this space, but regulation-lite and outcome-based regulation. In other words, companies need to have some responsibility for the capabilities of the things that they put in front of people. So it’s got to be done in a way, though, that doesn’t crimp innovation, doesn’t impinge free speech, so it’s a balancing act. And I’m not sure that Congress has fully gotten their minds around how to do that yet.

Thing two, I think, is the companies themselves, the social media, the internet-focused companies need to be able to use their knowledge of how their system operates in order to be able to go back and provide provenance to that information. And there’s also perhaps the small role for the intelligence community in terms of here—you know, here are Russian-based or foreign-based entities that you should look for in your stream. And I don’t think the intel community needs to be dredging its way through Facebook and Twitter, you know, in the United States. That’s a bad idea.

And then the third component is people. And this is a long-term cultural change. A friend of mine is a high school teacher in Baltimore County. And she teaches history. She’s probably more on the left side of the political spectrum, but she comes at it completely apolitically and gives students articles and says: Tell me what you think of this. Look at it critically and say: Does this sound—based on what you know about the world—like it’s likely, like it’s true? And then she goes back and teaches them how to dig deeper—not just Google it, but go deeper, you know, two and three levels deep, in order to understand that. I think that sort of thing is essential. I don’t know how you do that on the national scale.

But I think that kind of a critical thinking, developing that is really important. I was personally stunned when I realized, when all this was going on, when I found out how many people get their news from Facebook. I mean, that, to me, is astonishing. You can—people have talked about the echo chamber that you can—that people can get into, the information echo chamber, where you pick the news organization that you want, you know, Fox, or CNN, or MSNBC, or PBS, or BBC. Whatever news organization you want that has a certain bias. And a combination of that, things you listen to on the radio, things you look at on the internet, who your friends are on Facebook and who you follow, can very quickly get you into a space where all you get is information that either accords with your worldview or is more extreme than your worldview. And so you end up sort of a self-reinforcing thing.

So I make it a rule every day, when I’m reading, to read something I know I’m not going to like, just to sort of keep the apertures wider than they would be by just reading things that appeal to me. And I think that’s a useful kind of approach to take.

WOODRUFF: You do that. And some of us do that. But a lot of people don’t. I mean, a lot of people don’t have time. They don’t think about it. And they’re very comfortable just picking their own silo, if you will, of information. And some news organizations are making that easier by saying, you know, subscribe to our feed and we’ll give you our version of what’s important today.

So, you know, to me, part of this—a lot of this comes down to whose responsibility is this? Some of it falls on us, American citizens, consumers of news and information. But what is the government’s role in this, to help us through this really challenging time ahead?

LEDGETT: Yeah. I think it’s to identify the threats, help make people aware of the issues, and then—again, in a not-overly-intrusive kind of a way—hold the providers of information accountable for being able to not just repeat information, but to provide some indication of where the information came from—like I said, its pedigree.

WOODRUFF: And is that being done right now?

LEDGETT: Well, bits and pieces. It’s not a coherent—it’s not a coherent approach yet.

WOODRUFF: I mean, because there’s much more debate now in Congress and elsewhere about the roles of Facebook, Twitter, Google, and et cetera, and how they police themselves, and how they police the traffic that comes through there, in and out of their space.

LEDGETT: Yeah, Facebook said they’re hiring twenty thousand security people.

WOODRUFF: Right. Right. So as we move ahead, I mean, how much do you, Rick Ledgett, worry about this, I mean, in terms of—and, by the way, we’ve only been talking about the Russians. They’re not the only ones doing this, are they?

LEDGETT: Right. Right. The Russians are the most egregious I think. There’s been some recent stuff done by the Iranians. The Chinese are doing it. Although, slightly different approach. Russia’s goal is hurt the West, hurt the U.S., hurt Western democracies. I think China’s goal is more long-term make China the largest and most influential power in the world. And so I think that there are different timelines, there are different kinds of approaches. The Chinese, I don’t believe, are, you know, actively putting disinformation into Americans’ information space at the same scale or pace as the Russians are.

WOODRUFF: Is there any risk that we—that we hype this—

LEDGETT: Sorry, can I say one more thing about the Chinese, before I forget?

WOODRUFF: Sure. Sure, go ahead. Yeah.

LEDGETT: So, look—next time you go to the theater or watch a movie on HBO, especially one that’s made in the last few years, look at the production companies. Look at the companies that paid for having the movie made. And you’ll find that a lot of them are Chinese companies. And then if you look at those movies a Chinese company makes, find a movie made by a Chinese company that says anything bad about China. I have not been able to do that so far. And so that’s an example of—

WOODRUFF: Fascinating. I hadn’t seen that.

LEDGETT: —controlling the information space, but in a soft power kind of way. We’re not introducing disinformation, we’re just flattening the curve of the variability of the information that you’re exposed to.

WOODRUFF: Are we talking about MGM and 20th Century Fox? I mean—

LEDGETT: No, no. These are actual Chinese names of companies that are backing these.

WOODRUFF: That are backing—that are playing the role of producer.

LEDGETT: Yeah.

WOODRUFF: But—

LEDGETT: A friend of mine told me that last year, and I wasn’t sure if it was actually going on. And I did a little research and it turns out it really is. And it’s the kind of subtle influence campaign that’s a little scary.

WOODRUFF: Are we—I’m going to sort of turn this whole thing around on its head. Are we worrying too much about this? I mean, we are a big country, what, 325 million people, the most economically powerful nation on the planet—

LEDGETT: For the next couple years.

WOODRUFF: Oh, really? Just for the next couple years?

LEDGETT: Yeah. The Chinese and U.S. economies are going to cross here in the near future.

WOODRUFF: Yeah. Yeah. But we’re a significant player. And we’re worried about this country that is—what’s the Russian GDP? I don’t know, it’s—

LEDGETT: Yeah, about a sixth of the size of California.

WOODRUFF: A sixth—OK. So are we overly worrying and pulling our hair out about this? Or is it something that we should be—

LEDGETT: I think it is something that we need to worry about, for a couple of reasons. One, it eats away at the foundations of our democracy. That’s what makes the U.S. great. It’s not because we’re smarter, or taller, or better-looking than everybody else in the world. That’s not necessarily true. It’s because we have a way of operating that gives our citizens the ability to move beyond just executing the next phase in a plan, to think out of the box, to innovate, to take risks, to challenge ideas and assumptions in ways that virtually no other country in the world does. The things that make that possible are things like freedom of speech, things like the economic system that we have, things like the ability to unify, to get together behind ideas and make them happen. And things that undercut that are a danger to the country in, I think, the near term. And I’ll define near term as five to ten years out.

WOODRUFF: It’s values that we’re talking about.

LEDGETT: Yeah, exactly right. Yeah. Yeah.

WOODRUFF: And if you were—if you were still in government right now, what would you be focused on in all of this?

 LEDGETT: In any part of government or in my old job?

WOODRUFF: Well, in any part—the White House. I mean—

LEDGETT: Yeah. I think what we need that we don’t yet have is a national imperative in this space. Government is not the solution to the problem, but government is a component of the solution. And it’s got a key role in orchestrating and alerting people in a way that we haven’t done yet. And I’ll use the “we” because I still kind of feel governmenty. So there are lots of efforts in individual departments and individual organizations that were focused, you know, on the 2018 elections, and remain focused on the 2020 elections. But there’s no unifying, whole-of-government sense that this is a national imperative. It’s like, you know, the challenge to put a man on the moon in the next decade. That’s the kind of thing that I believe we need to be successful in this near term.

WOODRUFF: So we’ve been—in essence, the United States has been taken advantage of, because we’re not focused as we should be.

LEDGETT: President Putin is—does judo as a sport. And judo is all about taking your opponent’s strengths and using them against them. And the Russians have been masterful in doing that in terms of using our First Amendment free speech against us, using our openness as a society against us, and turning our strengths into weaknesses. And as they did that, eating away at those foundations of the democratic institutions in this country.

WOODRUFF: Vladimir Putin looks pretty healthy. He looks like he’s going to be around for a while. But at some point he won’t be where he is. How confident are you that the people—person who comes behind him, or people, are going to be as determined as he has been to carry out this kind of thing?

LEDGETT: I think that’s unknowable at this point. It’s—you know, does he have a successor in mind? I don’t know the answer to that. You know, most people who pick their successors don’t pick someone who’s diametrically the opposite of them. They pick someone who acts and thinks the same way that they do, so.

WOODRUFF: But it’s interesting that we don’t—or at least, as of the time you were—in April of 2017, it wasn’t clear who his—

LEDGETT: Or it was something I couldn’t talk about if I did know. (Laughter.) Sorry.

WOODRUFF: I think it’s about—is it time? Yeah. I’ll take questions from members. And I am to tell you—a reminder, again—the meeting is on the record. So whatever has been said by now is all on the record, and it will continue to be. If you want to ask a question we have a microphone. Speak directly into it. We ask you to stand, give us your name and your affiliation, and we ask you to keep it to one question and keep it as concise as possible. This sounds like the White House briefing—(laughter)—except they don’t have very many of those anymore.

LEDGETT: (Laughs.) We’re not going to throw anybody out of the room, though.

WOODRUFF: OK. Here you go. Yes, sir. Stand up and, if you would, give us your name and affiliation.

Q: I’m Kevin Sheehan with Multiplier Capital.

Thank you for your remarks. What I was struck by was the continuity. And I was left with the thought that if Felix Dzerzhinsky had had this technology available a century ago, he would have done the same thing that the leaders of the SVR are doing today, although maybe he wouldn’t have identified us as the main adversary.

What I was hoping you could do, recognizing that this is an unclassified forum, is really talk about the SVR as an organization. Have they gotten better since 1991 and ’92 than they were previously? And is that because of organizational change? Or the technology is favoring them? And are there new vulnerabilities that this new information age has created in the—in the SVR?

LEDGETT: Sure. So the SVR’s actually not the main engine in this space. It’s the GRU, the Russian military intelligence. The SVR’s like the CIA. The GRU doesn’t really have a direct analogue in the United States, but it’s a military intelligence organization, very large and charged with information operations. And what they’ve done is taken something that—the Russians are big on developing doctrine. And a few years back the head of the defense forces, Gerasimov—I think I said that right—wrote a paper about actually using this kind of power in terms of going against adversaries, short of actual kinetic fighting, being able to make it so that if and when you do have to fight them, they’re much weaker, they’re disorganized, and they’re disarrayed.

They have actually done a really good job of implementing it. Like I said, they’ve had training grounds very close to home for the last eleven years now. If you look at things that have gone on in places like the Ukraine from an information operations point of view, from a cyber point of view, from what we would call covert action point of view, that’s been their test bed. And they’ve tried things out there and then run them against other parts of the world, including the United States.

So I think that the services have evolved. They’ve learned how to use this sort of thing. They’ve also fallen vulnerable—or, are vulnerable—to this sort of information being used against them. You look at the roll-up of the GRU agents in the Netherlands a couple of weeks ago, where they found the guys outside the organization trying to hack into the wi-fi network of the OPCW, the chemical weapons guys. And you know, some of the same tools that they used were used against them, causing them to be rolled up. It was also useful because it sort of poked a hole in this idea that these guys were all ten feet tall with a big red S on their chest. They’re really not. Some of them just made some really egregious tradecraft errors.

So did I answer your question? OK.

WOODRUFF: Yes. All the way in the back. Uh-huh.

Q: Hi. Zach Biggs with the Center for Public Integrity.

I wanted to ask you about the role the government might have with this particular issue. Historically speaking, this sort of propaganda or influence operations have largely just been tolerated by governments. There’s limited international law prohibiting it. Do you think that the IC and DOD, specifically CYBERCOM, have a role? Has a threshold been reached where those sorts of organizations need to take more active measures to prevent this kind of influence operation from taking effect?

LEDGETT: It’s a great question. I think that what’s happened is the way that the information space is in the United States, and I would argue most other parts of the world, is it’s created new vulnerabilities for the population in ways that we didn’t have before. And the response to that needs to span the entire range of potential government response options. So diplomatic, economic, intelligence, military, you know, things with allies—all those are components of that. And a cyber activity doesn’t necessarily beget a cyber response. And in fact, it’s often not useful to have a cyber response to a cyber activity.

In the case of information operations, because it does span so many different components of the government’s ability to counter it, it’s got to be a whole-of-government response. And you have to go after the levers. So what is it—we’ll keep talking about the Russians—what is it that Vladimir Putin cares about? You know, off the top of my head, he probably cares about support of the oligarchs, supporting the military, supporting the intelligence services, his money—however many billions of dollars he has overseas—and control of information to the Russian people. So there are things that we could do to go after each one of those to decrease the value and increase the cost to him. Because right now, value’s here, cost is here. We have to reset those to something more level or inverted to get him to stop.

WOODRUFF: OK. Right here in front. Yes, sir. Yeah.

Q: Glenn Gerstell. Thank you very much for this, Rick.

LEDGETT: Where did you say you work?

Q: NSA. And had the pleasure of working with you. So it’s good to be back.

Now that you’re out of government, can I take advantage of that fact and ask you: A number of commentators have talked about how the government is organized to deal with the threats posed by artificial intelligence and the various cyber threats that we’ve all been discussing. Now that you’re out of government, what’s your perspective on whether we need to change something in the executive branch and Congress? Are we well suited, are we well positioned to deal with these threats? Or does something need to be done in that regard?

LEDGETT: Yeah. I mean, the problem with organizing for each threat is you end up reorganizing about four times a day, because all threats are different and the perfect organizational structure, you know, might not be the same for each one of them. I’m more of a fan of taking what you have and combining it—what we would call a joint task force or a tiger team sort of thing. Again, this requires a whole-of-government approach, because by definition you’re crossing the boundary and authority lines between the different departments and agencies of the government. But you say: this entity is going to lead the response to this. I’m going to take people from Justice, Commerce, State, Defense, CIA, NSA, FBI, you pick it, all the different relevant agencies. I’m going to put them in a room, and they’re going to be empowered to act and to come up with options, and maybe even to act in some cases using the authorities derived from their organizations. That gives you the two things that you need in this kind of a fight: integration and agility. So I don’t think it’s a permanent organization. I think it’s a functional, time-bounded sort of approach.

WOODRUFF: But you’re saying—you said a minute ago we weren’t doing that, that we haven’t—we don’t really have a—

LEDGETT: We’re not, no.

WOODRUFF: Somebody else in the front. Yes, sir, right here.

Q: Thank you very much. Todd Helmus, RAND Corporation.

We’re talking a lot about trying to play defense against these types of initiatives. But I want to ask you about playing offense. Do you envision a role in the future where the United States is using bots, fake troll accounts, AI, deep fakes, all of that as part of an offensive information campaign, not necessarily directed at Russia to punish them for doing it against us, but, say, in the theater of operations, and providing those authorities to meet those operations?

LEDGETT: I don’t think so. And I think—I don’t think so because it goes directly against the values of our nation. And I think it might have been Laura who said it earlier, I think our weapon is the truth, and getting the truth in front of people in the way that helps them see what’s different about us, and what’s different about our way of life. So I mentioned earlier, one of the things Putin cares about is control of information to the Russian people. You might know that the Russian version of Facebook is called VKontakte. And on VKontakte, they—families of Russian soldiers who’ve been killed in Ukraine, they would put up pictures of the funeral.

And there was a team of people from the Russian government who would go through there and take those off, and suppress that information—again, to control the flow of information to the people. Don’t tell them things you don’t want them to know. And so we could easily—I can think of one hundred, maybe two hundred different ways that we could get information into Russian citizens’ information space that the Russian government didn’t want them to have—truthful information. And as a way to disincentivize, you know, Mr. Putin from what he’s doing, I would say: Let’s do a half-dozen of those and tell them: We’re doing this. And we’re going to continue doing it until you stop doing what you’re doing.

So I think use of information in that way, weaponizing the truth, so to speak, I don’t think deep fakes, I don’t think misinformation, disinformation is something that we would use, or should use.

WOODRUFF: But as long as it was truthful.

LEDGETT: Truthful, yes.

WOODRUFF: Truthful is OK.

LEDGETT: Right.

WOODRUFF: Good. That’s a relief. Yes, sir.

Q: Hi. Fred Roggero from Resilient Solutions.

We’ve been talking about the issue quite a bit in sort of a state-centric way. But these days, when we have Alexa on our kitchen counter and Siri in our pockets and purses, corporations have actually been able to achieve and to gather more data than anybody from the NSA would ever in their wildest dreams hope to gather about the U.S. So the question is—

LEDGETT: We don’t want that data, just to be clear. (Laughter.)

Q: So in that—in that context, how secure is that data, in the nation-state context—from—

LEDGETT: Oh, it’s not. It’s not.

Q: And if so, do you see a shift—if knowledge is power—do you see a shift in power from the government to these corporations who are collecting all this vast amount of data?

LEDGETT: That’s a great question. First off, the information is not safe. You know, if a determined nation-state or high-end criminal actor wants to get access to information that a company has, then they’re going to get it. There’s—you can—you can put up defenses, and you should put up defenses, but if they are willing to devote the time and attention needed to really get that information, then they’re going to be successful one way or another. And so relying on that data being totally secure is not a good strategy.

I think, though, that the idea of that information being useful for the private sector is true, and it’s actually what’s feeding the big data revolution. I mean, you have to have data, you have to have big data, in order to do machine learning at scale. And that’s actually proving to be a boon to the economy. And, again, in the earlier talk the panel talked about how if you clamp down on that too much, then you starve the engine of machine learning, and we end up behind the curve. Our principal competitor in this space is China. Some people would say we’re neck and neck, some people would say we’re ahead, some people would say they’re ahead. I think there are too many factors for me to definitively say.

But I think that if we choke the input to the—to the AI machine learning feeds, then we disadvantage ourselves. So I don’t think the issue is how do we control the flow of information. And I think the issue that we need to deal with is how are you allowed to use that information? And so we sort of think of this—we’re in a big data world where data comes in from all different kinds of sources. And we spend a lot of time talking about the input and how we got the input and what inputs we’re allowed to put in. I think we need to spend a lot more time talking about the outputs and the outcomes. How are you allowed to use this information and to what ends? And that’s how you—listen, I do not think the GDPR EU approach is the right approach.

WOODRUFF: How would—what would that look like? You said we’re talking a lot about input and not enough about output. What would that look like?

LEDGETT: Yeah. So I think there’s regulation or legislation on what commercial entities are allowed to do with the data they get, and what the disclosure rules are that they have to follow. There’s some of that now, but it’s kind of a patchwork, and I don’t think there’s a good, well-thought-out legal regime. John Podesta, when he was in the White House—I want to say this was maybe 2012—did a study on this. And it’s actually a pretty good read; it talks about the role of data for the government and the private sector.

WOODRUFF: OK. Let’s see, yes, over here.

Q: I’m Jacob Breach with the Department of Defense. Thank you for being here to speak with us.

My question relates to your comment, which I agree with, that the strength of America—our greatest strength is our values, including free speech. So when you—in an earlier panel, we talked—the panelists talked about China exporting technology, either through its apps and shaping the information space, or exporting the technology to fellow illiberal regimes. Can you talk a little bit about the threat that that poses to our, you know, strength, and what can we do to combat that?

LEDGETT: Sure. (Laughs.) That’s kind of a broad reach, so I’ll sort of hop on a couple lily pads through there and you can tell me if I got to what you wanted, or you want to elaborate or something else. But I think the—certainly the Chinese—the spread of Chinese technology around the world and the spread of Chinese business around the world potentially has two effects. One is indirect, and I talked about the ownership of movie production houses and how you can strategically shape information in a subtle, low visibility way over time. And if you take—if your view of the world is twenty-five, fifty, one hundred years, then that’s a great strategy because hardly anybody gets mad about it.

The spread of technology has direct implications in terms of the information flow across those pieces of technology. And you’ve seen in the press, I’m sure, discussions about things like Huawei, the Chinese telecommunications company, concerns over national security implications of letting Huawei into, say, the U.S. backbone networks, or key parts of the telecommunications infrastructure. China has a law that requires Chinese companies to, on-demand, provide data to the—to the organs of state security, the ministry of state security, the—what used to be the 3PLA.

The legal regime for that is about this deep. The ability to get the authority to do that is actually pretty easy and pretty low level. In the United States, people say, well, the U.S. can do the same thing via the Foreign Intelligence Surveillance Court. That’s a different legal regime. It’s much more—much more process, much higher bar to getting it, and it’s a very different thing. Those are not a—that’s not an equal level of process there. Same for Russia as well. They have a law that any Russian company operating anywhere in the world is required to provide information to the—to the FSB, their equivalent of NSA—on demand, and any company operating in Russia is also required to do that.

And so, again, those are—those mean that the technology, the applications from countries like that are susceptible to being used to gather intelligence or information. And also, in the case of Russia, being used to put out information. Did I get to your question?

Q: Well, what can we do?

LEDGETT: What do we do about that? Oh, it’s a really hard problem. Supply chain risk management is kind of a phrase that we use to talk about this. And supply chain being more than just the devices—it’s everything from the services to the supersession of devices. It’s a really complicated problem, trying to make sure that you know the provenance of all that. In terms of the influence of things like movie houses, I’m not really sure what you do about that, because it’s all legal. The Chinese are not doing anything illegal. They’re using the rules of the system that we set up in order to do the things that we’ve done. Now, if they get to the place where they’re hiding, you know, the identity of where the money’s coming from, that would be a different issue. But I don’t believe they are.

WOODRUFF: But it sounds like you’re saying there’s not enough going on to counter that.

LEDGETT: I just don’t think we’ve thought about it from a strategic point of view. I mean, this is sort of a—I think a strategic conversation that we need to have, informed by real information, and then put it front of the American people and the Congress to say: What are we going to do about this? Because it’s not a—this is not a one-week thing, or one month, or a one-year thing. This is a long-term strategy.

WOODRUFF: Right here in front.

Q: Kim Dozier with The Daily Beast.

So where would you—

WOODRUFF: I’m sorry. Who are you with?

Q: The Daily Beast.

Where would you put a body to regulate information, or to go to verify? Would it be at the U.N.? Would it be one of these cooperative cyber bodies that the EU is standing up? You know, somewhere where I, as a reporter, could go and say: Well, they did not or did put their seal of approval on this piece of video?

LEDGETT: Yeah. It’s—I don’t think it’s a government’s job to do that. I mean, the government can provide some input to that, but I don’t think you want the government doing that. In Europe, I believe it was—there’s a consortium of entities that do that, some civil society groups and maybe some media groups, if I recall correctly, that were doing that sort of work, like the provenance of stories. So I think something like that might be the answer. I don’t have a definitive answer, though, on this spot on the map is where it should be. But I’m pretty sure that spot on the map is not the government. That’s a really easy step from that to censorship, and I don’t think that’s something we want.

WOODRUFF: Did you think there’s serious—I mean, are there people advocating for the government to have that role?

LEDGETT: I’ve not heard any serious talk about it.

WOODRUFF: OK. Let’s see. Looking around. Yes, right here.

Q: Hi. Nate Fleischaker. I’m with the Department of Defense.

Can you help me think through kind of an internal turmoil I’ve got? On one hand, I really appreciate that you had democratic norms and truth being your weapon. I’m also thinking through historical examples where military deception was very effective. So can you help me think through, like, Operation Overlord, where the military and large parts of the government were very active in trying to deceive the Nazis about where we were going to land. And it was very effective for making it possible, and that kind of stuff. So can you help me think through at what point does that become allowed, or at what point is that considered propaganda or inappropriate?

LEDGETT: Yeah, sure. And there’s that story about the man who never was, the corpse that they released off the coast to indicate that they were going to invade somewhere else. I think that military deception is a different thing, and military deception is something—it’s a tactic that you use and, you know, was the deception about, you know, where D-Day was going to occur, was that strategic or tactical? I still think that was tactical in the greater scheme of things. So I think that kind of thing is perfectly acceptable. It’s actually accepted in international norms. That’s different than saying, you know, we are going to lie to the people of, you know, country X, whatever country X is, in order to change their perceptions over time about a particular issue. I think those are—those are fundamentally different things.

WOODRUFF: OK. Yes, right here in front.

Q: Elizabeth Bodine-Baron from the RAND Corporation.

Following up on that, how does that play with the goal of the United States to change people’s views when it comes to violent extremism and other things like that? You know, political Islam, and things like that?

LEDGETT: Yeah. So I think what the U.S. has done in that space—we haven’t, you know, told lies to people. We’ve tried to expose them to the truth and to differing viewpoints than the radical Islamic viewpoint, like the ISIS sort of approach. And there was something that was started three years ago now, where they got Madison Avenue, Silicon Valley, and Hollywood together to come up with: How do you reach out and appeal to the target audience, the young—the young Muslim or the young person who might be a suitable target for radicalization? They called it Madison Valleywood. They sort of jammed them all together. But they were—I lost track of where they are on that. I don’t know if somebody else knows. But the idea was, put together a combination of technology, advertising, and, you know, visually appealing tropes that would get people—give them an alternative to the radical Islam point of view. Something to look at besides the ISIS guys walking through Syria, handing out food to the refugees?

WOODRUFF: What came of that? I don’t have a clear memory of it.

LEDGETT: Yeah. I lost the bubble on that when I left government. I don’t know. But I think that’s the kind of approach that you look at for the countering violent extremism problem.

WOODRUFF: I don’t see a hand. Who’s got a burning question out here?

What am I not asking you? You’ve talked about what we need to be doing that we’re not doing. But how should people think about it? I mean, frankly, when most people hear artificial intelligence, their eyes immediately glaze over because they don’t understand what it means.

LEDGETT: Right. Or they go right to Skynet or something, you know, where the machines are going to kill us all.

I think with machine learning there are a couple of points—and there are a couple of technologies here, machine learning and cloud technology, and where they come together is kind of important, I think. My favorite definition of the cloud is somebody else’s computer. So when you put stuff in the cloud, you’re putting it in somebody else’s computer. And so it’s important that I know where that computer is, who has access to it, how the people who can touch it are vetted, and that sort of thing. And when I talk to clients, I advise them to know that before they outsource to the cloud.

And then machine learning is approaching the point where machine learning algorithms are going to learn more than people can understand. So you’re going to have a thing—a software program that’s taking in so much data, running an algorithm, that it’s going to exceed the ability of humans to go back and track through the information space. And so think about the implications of that. That’s got huge implications in my old business, the intelligence business. Well, it looks like we’re going to have to invade Slavovia. Why is that? Well, because the box said so. I can’t really explain the reasoning behind that, but the box said so, so we’re going to have to do that. That’s not going to fly. Same for the legal business. Explain to me how you got this answer; defend this answer that the program spit out. Well, I don’t know. I can’t explain that. So that’s actually an area that folks are doing research on: Is there some kind of modeling or abstraction you can do of machine learning that lets you certify or in some way validate that the machine-learning process went the way that it was supposed to?

Now, think about that in terms of fake news and information operations. So the ability of the machines to outpace a human—whether we’re on the receiving end of that, whether the bad guys are doing that to us, or whether we’re doing that in defense—and being able to understand what the offensive information operation directed against us was, and what we did defensively, and being able to understand and accept those two answers. That’s, in my mind, a big, huge issue. And a lot of this stuff is cloud-based. So I’ve got software where I don’t understand how it got to the answer, running on a computer where I don’t know where that computer is or who is running it. That’s a pretty big uncertainty area, and something that I think we need to work through understanding in a better way. And the good news is there are people in the research community who are doing that.
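
(To make the explainability point above a bit more concrete, here is a minimal, illustrative sketch in Python—not drawn from the session—showing one common technique, permutation importance in scikit-learn, for asking which inputs actually drove a model’s predictions. The synthetic dataset and the model choice are assumptions for demonstration only.)

```python
# Minimal sketch of an "explain the box" check: train a model, then measure
# how much its accuracy drops when each input feature is scrambled. Features
# whose scrambling hurts accuracy are the ones the model actually relied on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 5 features, only 2 of them genuinely informative.
X, y = make_classification(n_samples=2000, n_features=5,
                           n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: a simple audit of which inputs drove the predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```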

WOODRUFF: You mentioned the cloud. I mean, if we don’t trust the cloud, then where are we going to store all this important information?

LEDGETT: Well, trust the part of the cloud that you know. I’m not saying don’t trust the cloud, just don’t throw your information out the window and hope it lands someplace good. (Laughs.)

WOODRUFF: OK. Here.

Q: Lucas Koontz (sp) with the Joint Staff.

You mentioned some avenues—some positive aspects of our culture and our institutions—that Russia’s taken advantage of. Freedom of speech was an example. You didn’t specifically mention this with China, but your film industry example, and the supply chain issue you mentioned that’s related to it, kind of made it seem like a search for capital or a search for profit might be one of the avenues that China is taking advantage of. If that’s the case, do you see any way to combat that?

LEDGETT: So I think that the economic engine of China is not like the economic engine of the United States. We are motivated by making money and stockholder dividends. They are motivated by enhancing the state. And all Chinese companies are either state-owned enterprises or almost state-owned enterprises—there’s not really such a thing as an independent, free company in China. And the way they’re writing their laws, that’s becoming even more so. So I think the motivations are different. And I think the way you address that with Chinese companies is we have to think strategically. We can’t think on a quarter-by-quarter basis. We have to think in the long term—you know, ten, twenty years.

The Chinese are very transparent. If you look at their 2025 plan, their five-year plan that’s currently out there for commentary, they said: We’re going to be the best in the world at this, and this, and this, and this, and this. Things like, you know, handling our aging population, and that sort of thing. But the things that they put out there as their goals directly trace to the things they’re doing in information space and in cyberspace. So if I’m going to be number one in taking care of my aging population, that means Chinese state-sponsored hackers are out there going after pharmaceutical companies and stealing their intellectual property today. That’s happening. And the folks from the commercial threat-intelligence companies—the CrowdStrikes and the Symantecs and the FireEyes—keep telling you about that in generic terms. They won’t give you the specific companies.

The only thing the Chinese go after that’s different is the five poisons. And the five poisons are Falun Gong, Taiwan, Tibet, the pro-democracy movement, and the Uighurs, the Muslim minority in China’s west. And so any place in the world where there is a representation supporting the five poisons, you will see the Chinese in various ways going out there and trying to affect that information.

Did I answer your question?

WOODRUFF: You didn’t have a follow up? OK. There’s a hand right here. Yeah.

Q: Thank you. Shiraz Saeed from the Starr Companies, insurance carrier.

You talked a lot about artificial intelligence and the impact on information. Can you talk a little bit about artificial intelligence and the impact on physical warfare in terms of autonomous vehicles or any of these other items that might impact it?

LEDGETT: Yeah. That’s a great question. And there’s a lot of work being done on that—for autonomous vehicles, for swarming vehicles, you know, like swarms of drones, that sort of thing. And—my Air Force friends will hate this idea—I don’t think it’s, you know, too many years in the future before you’re not going to have manned platforms out there in combat, or at least they’ll be the exception. They will not be the bulk of the forces.

And I think the—that also changes the kind of warfare that you do to one where things like command and control become really important, you know, the ability to hold adversaries’ satellite systems at risk and reconstitute when ours are held at risk, and that becomes a really important maneuver going forward. It’s all part of that combined thing. You guys know the Third Offset strategy? First offset was nuclear weapons. Second offset was stealth, night-vision goggles, stuff like that. DOD’s been looking for the third offset. And the things that they seized on—AI, autonomous vehicles, things like that—are exactly the sorts of technology that the Chinese are actively acquiring at speed.

And they’ve figured out—we have a thing called CFIUS, the Committee on Foreign Investment in the U.S., where when there’s an acquisition being made, it’s led by Treasury, and they’ll convene a group that will get together and make a determination of whether that acquisition is a threat to national security. Most recently that was invoked by the president when he blocked Broadcom’s bid to buy Qualcomm in the U.S., because he thought it was a threat to U.S. competitiveness in fifth-generation wireless. The Chinese figured this out. And there was a report that was done by the Defense Innovation Unit, Experimental—it’s now—they dropped the X; they’re not experimental anymore.

But last year they did a report that talked about Chinese research dollars in the U.S. And something like 85 percent of China’s research dollars in the U.S. are going to angel, seed, and Series A funding of startups. So they’re buying heavily in the same areas that we’re looking at—they’re buying the startups. And maybe eight out of ten will go under, but with the two that are good, that actually are successful, they’ve acquired their intellectual property at very low cost and completely evaded the CFIUS process.

WOODRUFF: In AI and what else did you say?

LEDGETT: AI, autonomous vehicles.

WOODRUFF: Autonomous vehicles. Fascinating. Well, we could go on, and on, and on. This is endlessly fascinating. I want to thank all of you for being here. And I especially want to thank Rick Ledgett for enlightening us. Thank you. What a great conversation. We appreciate it. Thank you.

LEDGETT: Thank you. Appreciate it. (Applause.)

(END)
