Meeting

Stemming the Tide of Global Disinformation

Friday, October 11, 2019
Speakers
Paul M. Barrett

Deputy Director, Center for Business and Human Rights, NYU Stern School of Business; Adjunct Professor of Law, NYU School of Law

Amanda Bennett

Director, Voice of America

Richard Stengel

Former Undersecretary for Public Diplomacy and Public Affairs, U.S. Department of State; Author, Information Wars: How We Lost the Global Battle Against Disinformation, and What We Can Do About It

Presider
Nicholas Thompson

Editor in Chief, Wired Magazine

Panelists discuss the extent of disinformation, its impact on democracy, and what can be done to prevent, mitigate, and stop its spread.

THOMPSON: Welcome to today’s Council on Foreign Relations meeting on “Stemming the Tide of Global Disinformation.” I’m Nicholas Thompson. I’m the editor-in-chief of Wired. I’ll be your moderator today. Let’s get crackin’.

Rick, how are you?

STENGEL: Good. How are you?

THOMPSON: Great. You have just spent the last three years writing about disinformation. He has a new book; it will be available later. You spent the last three years thinking about disinformation. Tell me how your thoughts deepened as you went along, because we all know why disinformation’s a problem. There’re some obvious reasons why it’s a problem. But now that you’ve spent more time thinking about it than anybody else, tell us what you learned that we don’t know.

STENGEL: I don’t think I’ve spent more time thinking about it than the president has. (Laughter.) What a way to begin!

THOMPSON: Yeah. (Laughter.)

STENGEL: The other false premise of your question is that my thinking has deepened about it. So my book, Information Wars, is about the time I spent at the State Department countering disinformation, countering Russian disinformation, countering ISIS propaganda. And I had never really seen it before. I’d been a—I was editor of Time for a bunch of years, had always been in media, and after the annexation of Crimea by Putin in 2014, we saw this tsunami of disinformation around it, you know, recapitulating Putin’s lies about it, and it was kind of a new world. And the distinction between disinformation and misinformation is that disinformation is deliberately false information used for a strategic purpose. Misinformation is something that’s just wrong, something that we all, you know, can get in the habit of spreading.

And I saw this whole new world being born. I don’t mean to steal your thunder with the question, but inside we were talking about whether there’s more disinformation relative to correct information now than ever before in history. I don’t know the answer to that, but what I do know is it’s easier to access it. And once upon a time the Russians pioneered something called “active measures,” which was their idea that warfare—the future or the present of warfare—is about information, not just kinetic warfare.

The way they used to do it in the ’50s was they bought off a journalist at a remote newspaper in India to put out a false story about something, and then the Russian media would start echoing it, and then it would get into the mainstream. Now, they hire a bunch of kids to work in a troll farm in St. Petersburg and put it up on social media with no barrier to entry, no gatekeepers to prevent it from happening. And I don’t know the answer to whether there’s more of it, but there’s easier access to it.

And I do think as we approach 2020, part of the other problem of disinformation is it’s not just a supply problem; it’s a demand problem. People want it. You know, confirmation bias means we seek out information that we agree with. If you’re likely to think that Hillary Clinton is running a child sex trafficking ring out of a pizza parlor in Washington, D.C., you’re likely to believe anything and seek out information that confirms that. That’s a problem, and that’s a human nature problem.

THOMPSON: Paul, let me ask you a variation on this, having just listened to Rick. You’ve just published a report on this very topic. You could have written reports on lots of topics.

BARRETT: I suppose. (Laughter.)

THOMPSON: You’ve got a varied career. Look at the man’s bio. You know a lot of things. Why are you so worried about disinformation right now?

BARRETT: Because it is afoot in the land, it is pervasive, and without a good distinction between real facts and fake facts, we can’t run a democracy in an effective way. People can only make honest political choices with real information. And I think we have at key moments and in key places a lot of false information, and intentionally false information.

THOMPSON: And is it getting worse? Is it worse today than it was yesterday? Is it worse today than it was four years ago?

BARRETT: I think that’s hard to say. I mean, I think it is—it was present and significant in 2016 and has not stopped since, and I think there’s every reason to think that we’ll see it kick up again as we get closer to the next election as well.

THOMPSON: Amanda, how worried are you?

BENNETT: You know, I’m going to—I’m going to be the dull and boring person at this party, because nothing I do has—I was going to say has anything to do with tech or bots or deep fakes or anything like that.

THOMPSON: Help people set up zingers. (Laughter.)

BENNETT: Well, let’s hope that that’s true. But my argument is that we are under-valuing the pursuit of straightforward, truthful, honest news and information in our fight to push back this other thing, these fake things. And that disinformation, misinformation—give me your definition of disinformation again.

STENGEL: Deliberately false information used for a strategic purpose, nefarious purpose.

BENNETT: So I’m the director of Voice of America and yes, we still exist; no, I don’t wear the funny hats anymore; and no, we don’t do propaganda. Thank you.

But that definition right there, I will maintain that half of the world at least lives under that condition daily, with no other—no other news or information. And so this is nothing new, the thing that we’re talking about. If you’re talking about the kind of technologically sophisticated things, I’m going to be the boring person to say that this exists in great proliferation throughout the world already, and that what seems to be an antidote to it in many ways is putting something straightforward in front of people.

THOMPSON: So I totally buy that. Are you also saying that we talk about disinformation too much in this country?

BENNETT: No, I’m saying—I stipulate that everything you guys say is true. Everything is true, we should be worried, these deep fakes are a problem and we should—

THOMPSON: And we should talk about it, or does talking about it so much make us think there’s more of it than there is?

BENNETT: And this—the article that we just read back there that talks about cynicism—I think if you talk about this type of disinformation and misinformation and its goal being to breed cynicism and confusion, the fact that, when you talk about straightforward truthful news and information being possible or desirable, people roll their eyes at you says to me that they’ve already kind of won, that we’ve already come to the idea that that is not an effective way of pushing back at things, but that actually we need technological solutions to things.

STENGEL: I mean, the thing that has exponentially increased is user-generated content. Remember, the biggest platform in the world, in the history of the world, is all created by content that we put on it, not professional journalists. It’s not vetted. And because it is created by regular folks and isn’t professional content, the possibility for disinformation, misinformation, anything wrong is that much higher. And one of the things we’ll also chat about is that the law, the Communications Decency Act that created all these platforms, gives primacy to third-party content and doesn’t give them any liability for publishing false content if it’s put up by a third party, as opposed to a professional journalist like we all are or were.

THOMPSON: All right, but let me give each of you a hypothetical. So let’s assume I have two kids; they’ve just graduated from college. They’re really interested in this problem. One of them says, I’m going to go into deep fake detection. I’m going to figure out how to get rid of disinformation. I’m going to help Facebook fix their algorithms so they can identify disinformation. And the other says, I’m going to go be a reporter and I’m just going to tell the truth about everything and I’m going to tweet out all my stories.

BARRETT: Well, one will have a job and the other one likely won’t. (Laughter.)

THOMPSON: True. But who’s doing the more important work, the work that we need right now?

BARRETT: Well, it seems like equally important work. Sorry not to—(laughs)—not to go for your bait, but—

BENNETT: I would have said I’ve been a really good parent if that’s—if I have—I have that choice there, I’d say I’ve been a really good parent, that both those things—(laughter)—are incredibly, incredibly useful.

THOMPSON: Probably my kids are going to work for troll farms. But anyway—(laughter).

STENGEL: You wouldn’t say the reporter.

BENNETT: No, not necessarily. I’m not—I’m not saying that one of them is more. I’m saying that right now all this attention is going onto things that, actually, a lot of people out here in the room, it’s more scary because we can’t touch it—we can’t do anything about it—when, in fact, I think what’s happening is that your attention is being turned away from the fact that really truthful—people can distinguish truth from lies. One of the ways they can do it is by seeing things head-to-head and they can make decisions.

I mean, the famous example is the Chinese train derailment. Remember that, when they said nothing here to watch, nobody hurt? And the people that were there doing their—you know, uploading their photos, were showing that there actually was. And that caused a lot of dissonance in the Chinese media ecosystem. So I’m not maintaining that there’s too much of it or that we shouldn’t be doing it, just that there’s something else out there. There’s something else.

STENGEL: I would—I’d actually tell my kid to do the deep fake detection, and I’ll tell you why. Because disinformation warfare—information warfare is asymmetrical warfare, right? It’s like a bunch of young people in a troll factory in St. Petersburg, which costs a relatively tiny amount of money compared to an F-35, can do more damage than an F-35. And so it’s asymmetric warfare in that countries that can’t afford missiles or jets or tankers or whatever can engage in this.

So that, in that sense, what has also happened is the offensive weapons in disinformation war have matured and evolved faster than defensive weapons. We actually need better defensive weapons, and we need to spend more money on it. So I would argue that somebody who could figure out a system to detect deep fakes instantly would be doing a lot of good for the world.

THOMPSON: I appreciate that answer and I also appreciate something you said in there, which is offensive weapons. Should the United States have offensive weapons when it comes to disinformation?

STENGEL: (Pause.) Are you talkin’ to me, Nick? (Laughter.)

THOMPSON: Based on the way you paused, I am 100 percent talking to you. (Laughter.)

STENGEL: Well, I mean, we do have—I mean, I’m not in government anymore, but I think I’m still—have to abide by the strictures of classified information and all of that, or I’ll be prosecuted by the State Department. But I—you know, we do have offensive weapons. I mean, there are—there are well-publicized examples of us using them in Iran, for example. I actually think—

BARRETT: But those are cyberattack weapons, as opposed to actually spreading—

STENGEL: Yes.

BARRETT: —you know, spreading bad information.

So what the U.S. Cyber Command does is actually pretty distinct from what we the United States could be doing, which is matching what the Russians are doing with information operations. And we—so far as I know, we don’t do that, at least not anymore, and I think that’s a good policy. I don’t think we should do that. I don’t think we should be in the truth-twisting business.

STENGEL: Yeah. So Paul makes a good point, and I talk about this in the book. There’s a—on the spectrum of hard and soft power of information, the hard end of information war is cyberattacks, malware, things like that. The soft end is propaganda, content and this and that.

On the soft end we don’t do that—I mean, I was involved in the creation of what is now known as the Global Engagement Center, which is a not-completely-funded, kind of whole-of-government department residing at the State Department to combat disinformation. But again, it’s all done in a non-classified way. All the content is labeled U.S. Government. It doesn’t create false information or disinformation.

THOMPSON: So what about—OK, so let’s take another example. So Amazon has this thing where they pay tons of people, some of whom work for Amazon, to tweet what an amazing place it is to work at Amazon, and they give them scripts, right? And so there’s this kind of steady flow of—it’s not false; these people may genuinely like to work for Amazon, particularly since they’re being paid to tweet. And so they tweet it out, but it’s just kind of garbage. Should the U.S. pay people to tweet out positive things about the U.S. image and tweet The Star-Spangled Banner in Russia?

STENGEL: (Pause.) You’re still lookin’ at me. (Laughter.) Well, I—

THOMPSON: We got a definitive answer of no.

STENGEL: So Amanda and I—

BARRETT: But that’s a little different. I mean, the idea of someone—you know, tweeting the United States is great and its enemies are not great. And doesn’t the State Department set up projects and programs that essentially do that?

STENGEL: Look, once upon a time, the U.S. Information Agency, which was then folded into the State Department, did create what I would call positive propaganda about the U.S. I was dinged here on another panel a couple of years ago for saying that there’s such a thing as good propaganda as well as negative propaganda. I don’t think propaganda is automatically a terrible thing, and nations do practice it. So all those trolls will get upset again.

But we don’t really do what USIA used to do anymore, you know, in terms of Frank Capra’s Why We Fight and documentaries about great black athletes and things like that—all of which was true content; it just was used to give people a better picture of the United States. And I always argued when I was in government that we do that already. I would always want people around the world to be able to see U.S. media, and not only what we say about ourselves that’s good, but what we say about ourselves that’s critical, so people see that we have an open press and what that’s like.

I think that sends a great message, which is essentially the message that you send, Amanda.

BENNETT: Hmm. The word “message” is a very, very dirty word at the Voice of America because that implies that you are deliberately moving your content in order to achieve a particular end.

Yes, I say that we have an offensive weapon, and I do say that this whole argument has in fact won a little bit, because I’m going to tell you what I think our offensive weapon is and I’m going to see a collective eye-roll around the room, which is our most effective weapon is our First Amendment. And I say that we export the First Amendment and that people can tell the difference. Not completely. I stipulate that everything you guys are saying is true, that deep fakes and all this stuff in troll farms are bad and dangerous and hazardous. I’m glad that you guys are paying attention to it.

But I’m also saying that—and I’m so glad you brought up that F-35, because my personal budget at the Voice of America is less than two F-35s. If anybody out there listening would like to help fix that problem, that would be great, because I think that we reach, you know, hundreds of millions of people around the world for a very small amount of money. And so the First Amendment, neutral news, truthful news, not messaging, independent of a government. People can tell if it’s—if it’s being moved around.

Here’s my—here’s my question right now. You guys all read newspapers still, right? In paper? Any of you in this room? Somebody? Thank you. And sometimes you see these inserts like from the China Daily or from, you know, Abu Dhabi, the City of the Future that kind of stuff? How many of you read them? Have you ever read a single word of them? One word? OK, a couple words out there. And why don’t you read them? Because you know that they are moving something, they are trying to sell you something. You’ll read the newspaper that surrounds it, but you’re not going to read the thing inside.

That’s what I’m saying propaganda is like, and that you can tell the difference. Maybe not if you have a good deep fake that’s doing things, but—so I agree that you guys are right, but people can tell the difference and it’s a worthwhile thing to do. It’s a very worthwhile thing to do. (Applause.)

THOMPSON: All right, let’s move to the platforms, the social media platforms. That was a good answer.

BENNETT: That wasn’t the eyeroll I was expecting. (Laughter.)

THOMPSON: Standing up for truthful news? Journalists are going to—I’m certainly going to applaud that.

All right, let’s talk about the technology platforms. Paul, you’ve just published a report on what they’re doing, what they need to do. Last time when we talked about the 2016 election, we mostly complained about Facebook and Twitter. After 2020 when we’re all diagnosing what went horribly wrong on the social media platforms, which ones will we be looking at?

BARRETT: Well, I say in my report that Instagram, which is owned by Twitter, a photo- and video-based—

STENGEL: Owned by Facebook.

THOMPSON: Owned by Facebook.

BARRETT: Excuse me. Forgive me. By Facebook, I apologize—deserves more attention. And the main reason for that is because we already know that it is a disinformation magnet. The Russian Internet Research Agency, the main trolling operation that the Russians ran in 2016, had more engagement on Instagram than it did on either Facebook or Twitter. And experts in this area have pointed out to me that increasingly, disinformation is being conveyed visually, and that is Instagram’s specialty. And I think that’s the platform to focus one’s attention on, at least initially.

STENGEL: It’s also harder to find—

BARRETT: Harder to detect, that’s a very good point.

THOMPSON: Rick, would you agree?

STENGEL: I do agree. I mean, I—Paul’s report, by the way, is absolutely terrific, and it’s a great primer, I think, on disinformation, both what happened in 2016 and going forward. The Senate Intelligence Committee report that came out, I think two days ago—

THOMPSON: Yeah.

STENGEL: —you know, had a lot that Robert Mueller had, and the stuff in my book that Robert Mueller didn’t have—I’m just telling you that too. But one of the things that they did have is that the Russians have actually increased in terms of volume what they’ve been doing since 2016, and largely on Instagram and other platforms that we probably don’t even know about.

What Mueller didn’t have—and I want to get to the platform things in a second—is that what the Internet Research Agency was doing was completely integrated with what Russian mainstream media was doing, with Russia Today and Sputnik and TASS. And with the Russian, you know, foreign minister, who used to echo canards and misinformation that was created from the Internet Research Agency and start talking about it at a press conference, and then it was covered worldwide. So it had a much greater impact than just the audiences that the Internet Research Agency was going for.

But in terms of the platforms, I do think—and Paul also talks about this in his report—they need to have more responsibility and more liability for the content that they publish. They cannot keep hiding behind this idea that they’re not publishers. The gentleman from NewsGuard is here—a fantastic new organization that is fact-checking information on the web—and I actually stole some language from you about what the companies need to do. They can’t be liable the way Time magazine or Wired is for every word that they publish, but they have to make a good-faith effort to take down demonstrably false content, as Paul talks about. I would argue hate speech, speech that leads to violence, those—there’s no excuse for that, even if it’s framed as political speech. That should just be off, and they should be liable if they don’t take it off.

THOMPSON: So let’s do an example. Let’s talk about, I don’t know, the famous example that came up was the video of Nancy Pelosi slowed down so it looked like she was slurring her speech and drunk. So you can make the argument that’s demonstrably false or you can make the argument it was satire. Satire’s got to be a protected form of speech. What do you guys think? Would you take that down if you were Mark Zuckerberg, would you knock that off the internet?

STENGEL: I—

BARRETT: Well—I don’t want to—I say yes.

STENGEL: I say yes. I mean, and I think also one of the things that they did—they “slugged” it, which is a journalism word—they had a, you know, a chyron up saying this is not true content, or this is manipulated content.

One of the things that influences all of this, and I write a little bit about it in my book, are these cognitive biases. And there’s a terrific dissertation, and I forget the young woman’s name who wrote it, about belief echoes, she called it, which is that this idea that if you see something false, even if you then immediately are told that it’s false, and even are persuaded that it’s false, it creates a belief echo in your head that never gets erased. So to me, part of the problem of putting a caption under the Nancy Pelosi video is that you can’t un-ring the bell. You can’t un-see that. That stays in your brain. It should not—it should not have been on the platform at all.

THOMPSON: So you would knock Andy Borowitz off the platform too? I mean, political satire, making fun of things that—pretending that Trump said things that he didn’t say? Because there could be belief echoes with that, even though it’s slugged as humor.

STENGEL: You’re trying to trick me now, Nick. I am—(laughter)—

THOMPSON: I’m just trying to get some of the complexities here.

BENNETT: Do you—do you remember when the People’s Daily re-ran the story about Kim Jong-un being the world’s sexiest man, that was written as satire? And they were like, “world’s sexiest man declared by U.S. publication,” right?

THOMPSON: I ran traffic analytics at the New Yorker and sometimes Andy Borowitz’s post would be picked up as true in China, and the traffic spikes we got were killer. (Laughter.)

BENNETT: Yeah. Yeah.

THOMPSON: All right, so let’s—so we’re kind of ragging on the platforms right now and talking about some of the problems they have. 2016, obviously lots of problems. We had a 2018 election and, as far as I can tell, there wasn’t a whole lot of misinformation. The only thing that I read about was a bunch of Democrats running a test to try to take—to criticize Roy Moore in Alabama, right? We had very different disinformation problems. So maybe it’s under control. Maybe we’re over-indexing on 2016.

BARRETT: Maybe, but I don’t think we should take the risk that that’s the case. You’re absolutely right that the Russians’ level of interference was negligible immediately around the time of the election. We don’t know exactly why that is; maybe they’re keeping their powder dry for 2020, a much more important engagement. Perhaps the platforms deserve some credit for having gotten more on the stick and more in the business of taking down phony accounts, which they are now doing in some numbers, whereas in 2016 they were completely asleep to that.

The Cyber Command that we mentioned earlier reportedly ran an operation that shut down the IRA, at least for a few days, around the election itself so that they were taken off the internet temporarily. All those things may have played a role.

But the general problem continues. There is disinformation flowing from abroad, not just Russia but also Iran. And I just don’t think this is the kind of problem that you say, well, we had one good outing, so we’re done, all our problems are taken care of.

THOMPSON: But are the signs that you’re seeing, right—we’re a year out. Are you starting to pick up a sense that it’s going to be like 2016 or are you picking up a sense that’s going to be like 2018?

STENGEL: One of the things in the Senate Intelligence report that I found interesting was this idea that the Russians masquerading as Americans would seduce or entice actual Americans to do their bidding on the Web. You wrote about some examples that they did in 2016.

BARRETT: Right.

STENGEL: The one that still kills me, which actually wasn’t in the final Mueller report—it was in the first Mueller indictment, and I think you mentioned it in your report—is that from St. Petersburg the guys from the Internet Research Agency created a rally, a pro-Trump rally in Palm Beach, where they hired a flatbed truck and an actress to play Hillary Clinton in a prison cell on the back of the truck—and they did that from St. Petersburg. That was in the first Mueller indictment. I don’t know why he didn’t put it into the Mueller report.

But in terms of them using Americans to do their bidding, I would worry about that in 2020. That’s very hard to detect. Because if you persuade somebody in Palm Beach to do something like that again, then that’s an American person expressing their First Amendment rights to, you know, say Hillary Clinton should be in prison.

THOMPSON: All right. Let’s spend the last five minutes we have before we go to Q&A, coming up with an agenda for the United States of America, for citizens of America, for the government of America, to lessen the risk of disinformation. Because, as Paul said at the very beginning, democracy can’t function if nobody believes anything.

So we should have engineers looking for deep fakes. We should have true and faithful news. The platforms should be looking for this stuff much harder. What else do we need to do?

BARRETT: And cooperating with each other to a greater degree than they do, and cooperating with the government to a greater degree than they do, in order to exchange information and, you know, sort of suss out threats sooner than they otherwise might. And they need to do a lot more of what they’ve already been doing, hiring more people to review content and continuing to improve their artificial intelligence filters.

THOMPSON: Amanda, what else do we put on the agenda?

BENNETT: You know, I would go back to the same thing, which is keep your eye on the ball. What are you trying to push back disinformation for? What is—what is the thing you are trying to push it away from? That, I would definitely strengthen, and I would not roll our eyes at the 1999 concept that this stuff actually has value, and that it can be believed, that people can believe it.

STENGEL: I agree with all that we’ve said. I think vetting mechanisms like NewsGuard and others are valuable. I also think a long-term solution—I mean, one of the things I say in the book is we don’t have a fake news problem; we have a media literacy problem. Lots and lots of people—once I left journalism I realized wow, lots and lots of people can’t actually tell the provenance of information and where it comes from and what’s a reliable source and what’s not a reliable source.

It has to be taught in schools, starting like in elementary school. And that’s the reason that so much of this has purchase is that people can’t tell that it’s false and they’re more susceptible to believe it.

THOMPSON: All right, so let’s give a lesson to everybody in this room. We’re all going to—at some, point we’re going to see information that might be false. How should people evaluate it? How can we learn media literacy? Members of the Council on Foreign Relations, well educated, but they’re not going to go back to school for this.

STENGEL: Well, actually, one of the proposals I have is about—is about journalism, digital journalism being way, way, way more transparent, right? So in the day, when we did stories, we did interviews, we did research, we talked to people, it was fact-checked, we wrote an outline. I think you should be able to link to all of that—you write the story, and in the New York Times there’s a link to “here’s my interview with the national security adviser,” “here are the photographs we took that we didn’t use,” “here’s the research I did, this chapter from this great new book by Rick Stengel.” (Laughter.) Oh, sorry. And would every reader look at that? No, but it would show how the building is created and it would create more confidence in the result.

THOMPSON: How about changing the law? Should we make the social media companies liable if there’s an excessive amount of disinformation on their platforms?

STENGEL: I think so.

BENNETT: And I will say what I always say, which is: write the laws as if your adversaries are going to be the ones implementing them. Just make sure you know what’s going on. You can write them thinking of what you want, but think about—think about a law like that in the hands of somebody you don’t like.

BARRETT: And interestingly, Mark Zuckerberg has actually proposed something roughly along those lines, has talked about having some type of government body that would assess the prevalence of bad content on the sites and sort of superintend whether the sites were making progress. I doubt he would go for actually creating, you know, private liability and litigation to flow from that, but the idea is not as far out as you might think.

THOMPSON: But he might go for that, because the only company to be able to comply with those laws is his.

STENGEL: Is his. Exactly.

THOMPSON: And any start-up would be wrecked because it won’t be able to hire all the lawyers and lobbyists it needs, which is one of the problems with these laws: they lock in monopolies. But, Rick, you said yes, we should change the law. Which laws?

STENGEL: Section 230 of the Communications Decency Act, which basically gives all of these companies zero liability for the content that they publish, because it’s third-party content. Now, when it was written—when you write a law to incentivize some behavior, like you write a law saying hey, we need to have more people go to Staten Island, let’s—you know, I’m going to create a law where you can build a bridge, you can have a toll for it for ten years, but then you change the law.

The law from 1996 did incentivize this, in a massive way, in a way that unintentionally created all of this other stuff. It needs to be changed now. These platforms need to make a good-faith effort to do that. And one reason they don’t take content down is because if they took content down Congress would go, oh, you’re an editor after all, so you should have liability for the stuff on your platform. That’s one reason that Facebook is so loath to take things down, because they don’t want people to say, hey, you’re performing an editorial function.

THOMPSON: All right. It’s 1:30. I’d like to invite members to join our conversation with their questions. A reminder, the meeting is on the record. Second reminder, the Council on Foreign Relations is not liable for any defamatory statements that you put in your questions. (Laughter.) Please wait for the microphone. Speak directly into it. Stand. State your name and affiliation. Please ask just one question and keep it concise so we can get as many as possible.

All right. In the back, in the light blue.

Q: Hi. Kathryn Harrison, CEO of the Deep Trust Alliance.

You talked about media literacy. That’s like telling everyone who drives their car poorly that they need to go back to school.

STENGEL: I agree with that too. (Laughter.)

Q: An important—an important part of the solution for sure. But as with cars, as the technology for creating videos, images, text gets better, faster, stronger, cheaper, is there not an opportunity to build into the technology itself standards, labels, or other elements that would provide the guardrails, the seatbelts, or the airbags for consumers who are viewing that content?

STENGEL: What would that be?

Q: You could have a very simple labeling system, human-generated, computer-generated. You need to be able to track the provenance—what’s the source, how is it manipulated—but that would at least give you a signal, much like when you go to the movies, you know if you’re going into an R-rated movie that there’s going to be violence or sex or language, versus if you go into a G-rated movie. That’s the first place where we’ve shown kind of information that isn’t real.

How can we use some of the models that we already have in society to tackle some of these problems? Because it definitely needs technological as well as human remedies.

BENNETT: I’ve often thought that would be really interesting. You know, like, I’ve got friends who forward really stupid things like the one-cent tax on emails. How many of you have got friends that forward the one-cent tax on email thing? I think, oh, guys, get a grip, you know? (Laughter.) But on the other hand, I would really love to see something like: this thing was posted by an account that in the last thirty seconds posted ten thousand other things. I just think that would be a really useful thing to have and it wouldn’t be that hard to do. I mean, Facebook and Twitter can both do that right now.

STENGEL: So what—I’m a little wary about the content purveyor creating the definition. Now, one of the things that a lot of the bills out there propose, like the Honest Ads Act for political advertising, or almost any advertising, is to show the provenance of the advertising. Why were you selected to get this particular ad?

Well, it turns out that you bought a pair of Nikes last year and they’re looking for people who bought Nikes in Minnesota. I think all advertising that—and I actually think advertising has a role to play in the rise of disinformation, because automated advertising, when people started buying audience as opposed to brands, that allowed disinformation to rise. So I think the kind of transparency in terms of political advertising and other advertising insofar as that could be applied to content, without prejudging it, I would—I would welcome that.

THOMPSON: All right. In the back, who also might be in turquoise—slightly misleading my initial calling. Yes.

Q: My name is Aaron Mertz. I direct the Aspen Institute Science and Society Program.

A lot of the examples you gave came from very large entities, governments, major corporations, often for quite nefarious aims. I’m thinking about individuals who might have ostensibly good intentions, parents who want the best for their children, but then who are propagating false information about things like vaccines. How do you counteract that kind of disinformation that’s coming from individuals who then can band together, form these groups and then potentially even lobby governments to change policy?

BENNETT: I think you’ve just put your finger on one of the real—the real, you know, radioactive things about this whole discussion. How far do you go from vaccines which we don’t agree with to a form of religion we don’t agree with? Let’s talk about Christian Scientists. Would you like to ban that from the internet? I mean, that’s—you’ve just put your finger on the third rail.

THOMPSON: So how do we solve the third rail?

BARRETT: Well, I would encourage the platforms to diminish the distribution of or take down altogether phony life-threatening medical information. So, I mean, you have to do it carefully, you have to be very serious-minded about it, but I—

THOMPSON: Who determined—who gets to determine what’s phony?

BARRETT: Hmm?

THOMPSON: Who determines what’s phony?

BARRETT: I would go with doctors and scientists. (Laughter.)

BENNETT: Me.

BARRETT: You?

BENNETT: I’m going to do it, yeah.

BARRETT: Well, I’m less impressed by you. (Laughter.)

STENGEL: But to say something that will also be unpopular: when I went into government, having been a journalist, I was as close to being a First Amendment absolutist as you could be, you know? Justice Holmes—the First Amendment doesn’t just protect ideas that we love, it protects ideas that we hate. And traveling around the world, particularly in the Middle East, people would say, why did you allow that reverend in Florida to burn a Quran? Well, the First Amendment.

There’s no understanding of the First Amendment around the world. It’s a gigantic outlier. All of these societies don’t understand the idea that we protect thought that we hate. I actually think that, particularly the platforms, the platforms have their own constitutions; they’re called terms of service agreements. They are not—they don’t have to abide by the First Amendment as private companies. Those need to be much stricter about content closer to what the—what the EU regards as hate speech and other countries do. There’s a phrase called dangerous speech, which is speech that indirectly leads to violence. I think we have to be stricter about that, and I—and the platforms can do that because they are private entities.

THOMPSON: All right. I’ve got so many follow-ups. We’ve got a lot of questions. George Schwab in the front center here.

Q: Thank you. George Schwab, National Committee on American Foreign Policy.

From the perspective of international law, does state-sponsored misinformation constitute aggression?

BENNETT: Not my thing.

STENGEL: Well, one of the things I’ve been saying for a long time is that the Russians didn’t meddle in our election; they did an act of cyber warfare against the foundation of our democracy. That’s not meddling. I think when there’s state-sponsored disinformation, I think there should be repercussions for it. And part of the reason there’s more and more is that no country pays any consequences for it. I mean, yes, we sanctioned the Russians, or a few Russians, but it’s not a disincentive for them to do more.

THOMPSON: So what should we have done?

STENGEL: I’m sorry?

THOMPSON: What should we have done to the Russians after 2016? We’re not going to nuke them, right? (Laughter.) Like, where’s the line that we’re going to—

STENGEL: Well, I think we should have declared something akin to a kind of information national emergency—that our election is being interfered with by a hostile foreign power in ways that we still don’t know, and people have to be wary.

THOMPSON: OK. Far right here.

Q: Peter Varis, from TechPolis.

Richard, you mentioned two cases that you actually worked on, the ISIS misinformation and the Russians after Crimea. It’s obvious that we have a lot more misinformation because the cost has declined. But what’s the difference between a terrorist group, or an insurrection like ISIS, and a state-sponsored campaign of misinformation—both of which are linked to actual kinetic warfare?

STENGEL: Yeah.

Q: But what’s the difference? Because that helps us to understand the budget difference. With $50 you can have a lot of impact with targeting on the internet, but what did you feel, hands-on, on those two experiences?

STENGEL: So I write about both trying to counter ISIS messaging and Russian disinformation. And the former is easier in the sense that with the ISIS disinformation, they weren’t masquerading. They weren’t pretending to be other people or Americans. They were digital jihadis, and when they advocated violence, right there was stuff that you could take off. In the book I talk about the great things that Facebook and Google and YouTube did in taking down violent extremist content. In fact, someone at Facebook likened it to child pornography, where the image itself is the crime; you’re under arrest. Promotion of violence, you’re out.

The problem with the Russians is they pretended to be Americans. They pretended to be other people. They were hidden in plain sight, and that is—that’s a lot more difficult, and it’s still more difficult.

THOMPSON: All right, let’s get some questions on the left. As far left as we can go. Right here.

Q: Speaking of far left. (Laughter.) Peter Osnos with Public Affairs Books.

So some of us grew up with Russian propaganda. Then it was called Soviet propaganda. And what we all agreed was that it was incredibly clumsy. So in 2016 and beyond, suddenly those same Russians, now a new generation, managed to create vast amounts of bits and pieces that were considered effective. And you referred to the stuff up in St. Petersburg, and there are people who say it was in Moldavia or some other places. Who was doing all that stuff? Who—low-paid trolls? Who created tens and tens of millions of these bits and pieces, many of which were, I’m sorry to say, very effective?

BARRETT: Well, the main engine for the information operation side of it—as opposed to the cyberattack against the DNC computers, which was carried out by the GRU, the intelligence wing of the Russian military—was the IRA, which was run like a company, owned by a crony of Putin’s, and which, according to Robert Mueller and U.S. intelligence agencies, was allegedly something that Putin himself approved of. So—

Q: That’s not the answer.

BARRETT: Not the answer?

STENGEL: But Peter, I’d make a distinction between effective and sophisticated. What they did was effective; it wasn’t sophisticated. I was a recipient of all the stuff from trolls. I can’t even—I can’t say the words that they said. They couldn’t even spell them. The grammar was atrocious; they had terrible English. We looked at the handbook that the trolls would get when they went to the Internet Research Agency; it’s laughable.

But as someone said to me, a marketing guy said to me, you know the emails you get from the Nigerian prince who needs $20,000 to get out of prison and you’re going to get $10 million? I said, yeah. He said, and you know they’re like filled with spelling errors and grammatical mistakes? And I said, yeah. He said, that’s deliberate. Why? Because if you respond to it, they know they’ve got a live wire.

So the stuff that the Russians did was for people, as I said before, who will believe these strange conspiracies—people who don’t really know about the Oxford comma. (Laughter.) So they don’t really care about it, and that’s why it’s effective.

THOMPSON: All right. Let’s go to the back. The very, very back.

Q: Steve Hellman, Mobility Impact Partners.

Do you expect more vectors of interference in the 2020 election, particularly Chinese, for example? Do we expect foreign adversaries to weigh in on both sides of the election at this stage? What do you think?

BARRETT: Possibly. I mean, I think the Chinese are a possibility. We’ve just seen them active in Hong Kong, where they used Facebook and Twitter accounts, some of them English language, to try to undermine the democracy protestors in Hong Kong. I see shifting the attention over to the United States as only a minor potential adjustment. I think the Russians could be back and the Iranians have already test-driven their information operation. So I think there’s every possibility that there could be more vectors, as you put it, coming from abroad.

And in terms of volume, we should remember that the vast majority of dis- and misinformation comes from right here at home where we’re doing this to ourselves, in a sense. So there’ll be that aspect of it as well.

THOMPSON: But isn’t one of the interesting questions when you try to think about what countries will try to influence our election is which country has a clear goal in the outcome, right? So who—will China want Trump or his Democratic opponent to win? Like, Russia had a clear goal in ’16—

BARRETT: In promoting Trump, and presumably China would have the opposite goal.

THOMPSON: Perhaps, unless they think that the backlash Trump has created is beneficial to them. I mean, I’m not a China foreign policy expert, but—

BARRETT: Me either.

THOMPSON: Who is going to—who has a clear interest in the outcome?

STENGEL: One of the things that we saw about Chinese disinformation and propaganda operations was that it wasn’t directed outward. It was much more directed inward, both for the Chinese audience itself and also for marketing the Chinese miracle around the world. They weren’t trying to effect particular political outcomes. I mean, that may have changed, and what’s going on in Hong Kong is evidence that they’re getting more sophisticated about it. But they were not nearly as aggressive as the Russians, of course, and the Iranians, who do also have an interest.

But I also would quibble a little bit with—the Russians did end up of course helping Trump, but in the beginning, I mean, their whole goal, and has been—

THOMPSON: Helping Bernie first.

STENGEL: Well, but their whole goal was sowing disunity, discord, grievance. That’s what they’ve been doing since the ’40s and ’50s and ’60s. It was only when they saw Trump starting to lead the pack and praising Putin to the skies that they turned and started marshaling resources behind him. I mean, one of the things I write about is that in the beginning, the first six weeks, you know, Trump was made fun of by the Russians just like people here were doing.

THOMPSON: All right. Do we have a Chinese foreign policy expert who wants to raise their hand?

BENNETT: This poor lady’s been right in front waving her hand. It’s driving me crazy. (Laughter.)

Q: I’m Lucy Komisar. I’m a journalist.

In the New York Times yesterday there was a story with the headline Ukrainian President Says ‘No Blackmail’ In Phone Call With Trump by Michael Schwirtz. He said Mr. Zelensky also said “he ‘didn’t care what happens’ in the case of Burisma, the Ukrainian gas company that once employed” a son of former Vice President Joe Biden. “In the phone call, President Trump had asked Mr. Zelensky to do him a ‘favor’ and investigate the debunked theory that Mr. Biden had directed Ukraine to fire” an anti-corruption “prosecutor who had set his sights on the company.” “Debunked” was the word of the author, not of Trump.

Well, go back to January 23, 2018. In this room, Joe Biden, speaking to the Council, on the record. “And I went over, I guess, the twelfth or thirteenth time to Kyiv and I was supposed to announce that there was another billion-dollar loan guarantee, and I’d gotten a commission from Poroshenko and from Yatsenyuk that they would take action against the state prosecutor, and they didn’t.” I’m eliminating a couple of paragraphs just for time, just to get to the nut-graph. “I looked at them and said, I’m leaving in six hours. If the prosecutor is not fired, you’re not getting the money. Well, son of a bitch—(laughter)—he got fired.” Now what would you say about this disinformation in the New York Times yesterday? And do you think that they should take down this demonstrably false information?

STENGEL: What are you saying is false about it?

Q: Well, the writer says that it was a “debunked theory” that Biden directed the Ukraine to fire an anti-corruption prosecutor who had his sights on the company. In this—in the Council here, Biden says exactly that he said we would not give the billion-dollar loan guarantee unless you fired this prosecutor. It seems to me that Biden in one place is telling the truth and in another place he’s not. Maybe we have to figure out that, but I don’t think he lied to the Council. It’s all online; anybody can see it. Therefore, it seems to me the Times wrote fake news and they should be asked to take it down.

BENNETT: I think the point that you’re actually making—the larger point I think people would be interested in—is that a reputable organization that does this looks at errors, researches them, and corrects them when they make them. If it in fact is an error, then people should correct it. But that’s a generalized principle, and I don’t know anything about the truth or falsehood of what you just said. I’m just saying that’s one of the things you want that Rick’s talked about, is transparency and correction.

THOMPSON: Let’s not—I don’t think we want to litigate this, because we don’t—

BENNETT: Yeah, we—

THOMPSON: We’re not experts on that particular statement.

BENNETT: We’re not expert on that. We don’t—

STENGEL: If I could just go into the weeds for a second, having gone to Ukraine several times at the same time that Vice President Biden was there—he was there twelve or thirteen times; I went three times. That prosecutor was a corrupt prosecutor who was shaking down the people he would potentially prosecute, and who had already exonerated Burisma, the company that Biden’s son worked for. So Biden was saying the prosecutor that exonerated Burisma needed to be fired. And you know who else was saying it? The IMF, the World Bank, the EU, everybody else. It was a corrupt prosecutor.

Q: He now says he—(off mic).

THOMPSON: All right. Woman at the table behind. Right there. Yes, you. Yes.

Q: Going back to the question of whether there was disinformation—

THOMPSON: Oh, and your name and affiliation.

Q: Oh, sorry. Absolutely. Ann Nelson, Columbia University.

The question of disinformation in the 2018 campaign—I wonder whether you were looking at U.S. intermediaries in state-level campaigns. Specifically the National Rifle Association, which has its own apps and its own dedicated social media platforms and has repurposed Russian memes. And as the Senate Commerce Committee minority report pointed out last week, the NRA and Maria Butina were very heavily involved with the Russian campaigns over a few years, including supporting her attendance at the Council for International Policy. So looking at campaigns such as Heidi Heitkamp’s and Claire McCaskill’s, where the NRA was extremely involved both online and on the ground, do you still think they weren’t very involved in 2018?

BARRETT: Not sure exactly how to answer that. The NRA was active in—I mean, the Russians had certain contact with the NRA. I’m not sure that that is—fits in exactly the same frame as the information operations that we’ve been talking about, but certainly you’re right that the NRA is reputed, certainly by its foes, to stretch the truth on a regular basis and they have that intertwining with certain Russian agents, namely that woman. Beyond that, I don’t really have the—know what else to say.

THOMPSON: OK. Gentleman in the far back, in the blue jacket.

Q: Hi. Jamaal Glenn, Alumni Ventures Group.

What’s your prescription for how to deal with information that doesn’t fall in the demonstrably false category? I want to challenge this notion that some of the Russian operations weren’t sophisticated. I would argue—maybe not technically sophisticated, but incredibly sophisticated if you look at their ability to identify American political fault lines and play to those. Things like race.

I have friends exceptionally well educated who played right into the hands of some of these actors. And many of these things weren’t technically false. So I’m curious. What’s your prescription for these things that sort of fit in this non-demonstrably false gray area?

BARRETT: Well, I was going to say the platforms, but mainly Facebook already has a mechanism for what they end up calling false news, which would be broader, in my thinking, than demonstrably false information, and they down-rank it and label it. If their fact-checkers have found it to be false, they label it so that when you go to share it, you’re told with a little pop-up that what you’re trying to share is false, so, you know, think twice before you do it.

I think that mechanism, for something that’s determined to be false, but where there’d be some difficulty in calling it demonstrably false, might be the way to deal with that. A certain amount of misleading information, you’re not going to be able to do anything with because you’re not going to be able to know in the first instance where it came from or who’s manipulating it.

THOMPSON: But what if it’s true?

Q: But what if it’s true?

BARRETT: OK, well—

THOMPSON: So what if the Russian government is spending money to promote stories that are irrefutably true? Say they’re about—

BARRETT: Yeah, then you’re looking for categories of behavior that indicate that there’s some inauthenticity to the accounts that are sending it. The platforms have been moving more in that direction, taking down accounts on that basis. But all of this points to the fact that you’re not going to be able to get everything. No matter how aggressive you are, and not everyone wants to be that aggressive, this environment is going to be shot through with material of questionable provenance.

THOMPSON: OK. Right here on the right, gentleman in the orange tie.

Q: Michael Skol of Skol and Serna.

Isn’t this partially a generational problem? I am one of those who does read the morning papers on—in paper: the Times, the Journal, the Post when there’s a funny headline. But I don’t—I don’t think there are a lot of people a lot younger than I am who follow this, and what are the implications of that? Is this problem only going to get worse because the younger people who don’t pay attention, who don’t prioritize demonstrably true media outlets, are growing up, and possibly there will be a population that’s worse than it is now?

BENNETT: Again, let me—let me be the cheerful, non-cynical person in the room. Because we are able to look at digital behavior around the world, and let’s just stipulate that based on what you said, paper is for our generation; digital’s for everybody else. One thing we are finding that is fascinating is that people are coming to look for news and coverage from other countries, and I’ll give you one specifically.

In China, what we found in the last six months or so is that the volume of traffic coming out and looking for news on Venezuela has just gone through the roof. Now, why would that be, and who is it? I think it’s because they’re trying to find out things that they’re not being told at home. I think that is a really interesting thing. It says to me that these things are true that we’re saying here.

 It is also true that people want to know what’s really going on and they have a search for truth. I know this is, like, 1990s, 1980s, but I still believe that that is true. And we’re watching our digital behavior. When there were the street protests in Iran, our traffic went crazy. Our Instagram traffic went crazy. This is all people coming off of cell phones, so it’s young people carrying their cell phones. They were looking for stuff. So we saw this happening. And so I’m saying that I’m not sure you can say that everybody under the age of 65 is kind of undiscerning and stupid. I don’t actually believe that. Well, sometimes I do, but—

BARRETT: Some of us are. (Laughter.)

BENNETT: But not often. Anyway—

THOMPSON: I would just add that the data from 2016 shows that there is a real generational problem with fake news. But it’s the older people. (Laughter.)

BARRETT: Yeah.

BENNETT: Yeah.

THOMPSON: On the left. (Laughter.)

Q: Jove Oliver, Oliver Global.

My question is, with your journalist hats on, when you see, say, a public figure, maybe the president of the U.S., breaking the terms of service on a certain platform, whether that’s by spreading, you know, disinformation on maybe Twitter or something, what’s the—what’s the remedy for that with your journalist hat on? It’s a public figure. Arguably, what they’re saying is in the public interest. At the same time, they could be causing violence against people or certainly spreading disinformation, which is against the terms of service of these platforms. Thank you.

THOMPSON: Or we could even make it more specific. Rick, you sit on the board of Snapchat. Should you kick Trump off?

STENGEL: Well, I’ll—(laughter)—I’ll answer that in a second, but I’m going to—the previous question. It’s a well-known fact that stories on paper are more factual than stories on telephones. Wasn’t that the implication of your question? That’s a joke.

Q: Depending on which paper. (Laughter.)

STENGEL: OK. I think the highest order of magnitude—and again, one of the things that’s been great about this panel, Nick, is you’ve actually caused us to have to think while we’re up here, which is usually not allowed on panels.

But to me, the highest value is whether something is demonstrably true or false, rather than the news value of a certain story, or the news value of a certain news figure making that statement, or the higher protections that political speech has over regular speech. So that was the story about Facebook and not taking off that ad. They were privileging political speech over regular speech, and basically what they were saying, to me, was that political speech, even if it’s false, is protected, whereas regular speech, if it’s false, is not protected.

I would say the highest order is the falseness or trueness and even if it’s a public figure, then that content should be taken off.

THOMPSON: Banning Trump from Snapchat?

STENGEL: You know, not everything he says is false. And there is a—he is a newsmaker, I believe. And, as Nick mentioned, I’m an adviser to Snapchat. Snapchat does more of a traditional curation of news, where the news is linked to a brand rather than a topic or audience. And in fact, one of the things that I also say in the book is that the rise of automated advertising, where you buy an audience as opposed to buying an ad in Time magazine or the Economist or Wired, is one of the reasons that all of this disinformation gets out there.

And I’m going to say something very unpopular now among my news brethren, which is that I actually think the movement toward subscriptions also creates a greater volume of disinformation, because the true content is now behind a paywall that relatively fewer people can get to, whereas the bad content is open and free. So talking about this age discrepancy, young people are now going to think, well, I’ve got to pay $68 a month to subscribe to the New York Times but I can get all this other stuff for free—and free is a very powerful word in our society. And in fact, what I used to say in the early days, when people used to say information wants to be free, was that people want free information, and we gave it to them, and that’s why they are biased in favor of it. So I think the subscription paywall model is also a recipe for the increase of disinformation.

THOMPSON: Well, there’s only one way to solve that problem and that’s for everybody in this room to subscribe to Wired. (Laughter.) All right. It’s 2:00. We’re done. Thank you very much to this panel. Please turn on your phones and spread some true information. (Applause.)

(END)
