Panelists discuss the roles of the government and private sector in combating online misinformation and safeguarding freedom of expression.
CALVIN SIMS: Thank you very much and welcome to Council members. Today we are going to take a deep dive into combating online misinformation. And we're pleased to welcome for this meeting three experts in this field: Jamal Greene, Joan Donovan, and David Kaye. I want to say at the outset that today we will look at this misinformation effort both globally and locally.
And so I thought we might start with Professor Greene and report a little bit about what has happened in the last couple of days with Facebook. There have been two monumental shifts at Facebook. One: attorneys general from around the country — I think there were twenty state attorneys general — put out a statement basically asking Facebook to prevent bias and disinformation from spreading around hate crimes.
In addition to that, yesterday Facebook took an extraordinary step, actually taking down misinformation that was posted by President Trump regarding the coronavirus. So Jamal, you're one of the co-chairs of this new committee that's been set up by Facebook to monitor content. Could you start out by giving us your reaction to what has happened with the attorneys general, as well as Facebook taking this extraordinary step to take down misinformation by the President?
JAMAL GREENE: Sure, so first, thanks for having me and happy to address this particular audience. Just to clarify the nature of the oversight board: so this is— it was created by Facebook to be an independent body that reviews content on Facebook and Instagram. It's independent in the sense that the board members — and there are twenty of us at the moment — we don't work for Facebook, we work for the oversight board. And the board is administered by an independent trust. And it's funded through an irrevocable trust, ultimately by Facebook, but Facebook has no control over the funds for the board or over the actions of board members. So the effort is, I think, an effort to inject some trust into what Facebook does.
So you mentioned two instances, one involving misinformation around coronavirus. So I guess I'll step back a little bit. Facebook has come under lots of fire in relation to its handling of misinformation. In particular, it doesn't, in general, fact-check political speech, on the theory that the people of a particular country should be the ones to police speech by politicians, but it has made some essentially ad hoc exceptions for coronavirus-related misinformation and for certain kinds of voter suppression information. What I'll say is — and I'm reluctant to comment in interviews on specific cases; there are lots of issues that Facebook faces that could come before the board when it's fully up and running in the next couple of months — Facebook has struggled, I think, to combat this issue for lots of reasons. Some of those reasons have to do with the inherent problems of fact-checking and other approaches to misinformation, which Facebook has struggled with just as other platforms have. And part of it has to do with Facebook's own financial and reputational incentives to act in particular ways. What the oversight board is designed to do is to take out that piece of it.
The difficulty in controlling misinformation is going to remain. The oversight board is not a complete, or even a significant, solution to it, but it does represent an effort to remove those financial, political, and reputational incentives — so that the issues of trust that for good reason attach to things that Facebook does don't attach to what the board does.
SIMS: Thank you. Joan, any reaction from you to these twenty state attorneys general and their call, you know, for hate speech to be removed?
JOAN DONOVAN: I think it— you know, it's really indicative of the time that we're in, as well as the long struggle that activists and even journalists have been calling attention to, really for quite a while, but became much more pointed, I think, after Charlottesville, when most folks started to realize that these are not just content platforms, they're not just libraries or warehouses or whatever kind of metaphor you want to use. They are spaces of coordination and when you do circulate an immense amount of information and people are motivated to act based on that information, we do get into these situations and a lot of times, you know, in the U.S., we will cast that as a foreign problem and say, well, in Myanmar, there's a problem with how the government or how bad actors are using Facebook. But if you actually look at the U.S. case, hate speech has become a prolific issue on Facebook because of their lack of action for so long and their permissiveness for so long. And so I feel like at this moment, the AGs are really just catching up to what other folks, particularly activists and researchers and journalists have already been sounding the alarm on.
And the other piece of this — I just want to comment briefly on why they removed the post that was up on Trump's page: the video gave the impression that children were somehow immune to coronavirus. And this is something that we've researched at Shorenstein. Brandi Collins-Dexter has a report out about how COVID misinformation is targeted at Black communities online. But this notion that certain types of people, certain populations, are immune is something that we've seen quite a bit before Trump's, you know, entrance into the issue. But we haven't seen a lot of traction on getting rid of those posts en masse. There's really a complicated system of flaggers and fact-checkers and folks that are not necessarily employed by Facebook to fix the problem. And so my reaction to this is we do need some kind of policies and sanctions at scale that incentivize Facebook to clean up the platform.
And if we don't do that, what happens? That's the other side of the coin is if nothing is done, does it get worse? And I think that if we look over time, it's been getting worse. So we can say that yes, the trend is going to continue. And it isn't the case that people are just leaving the platform and deciding that it's, you know, dispensable to them in their lives and that they can live without Facebook. The pandemic has actually brought renewed value to network platforms. And so we're in a really difficult spot where most of us are getting our news and our social lives and our entertainment from these places. And so, it's not as simple as saying, “Well, if they won't do it, and government won't regulate, then the people will just leave,” because I don't think we're in a position right now where we can.
SIMS: David, would you like to weigh in on this as well? I mean, especially given Section 230 — which sort of gives—
DONOVAN: What’s that? What's Section 230? (Laughs.)
SIMS: (Laughs.) Some degree of immunity to Facebook and social media sites.
DAVID KAYE: Yeah, so maybe if I could just step back also and make the problem even more, you know— to problematize it even further. The way I see this issue is, you know, you've asked the question in the context of online hate in the United States. And I think it's really important for us to recognize that the nature of this problem is truly global. So we get exercised about it when we're talking about hate speech and racism and white supremacy in the United States or vaccine disinformation in the United States. But this is, you know, indeed a global problem. And it's a problem that is not just one of content. You know, it's a problem of law. You know, it's a jurisdictional problem: who's responsible for making decisions around, you know, what content is problematic, what should be taken down? Should it be the companies that do this? Should it be governments? So it's kind of a legal, it's a global, it's a jurisdictional problem.
And then, on top of that — and this is the amazing part to me, and this gets to the Section 230 issue — after years of knowing this problem of disinformation, or hate speech, online, after knowing that it's out there and that it's global, we're still struggling with the basic question of who should decide these questions. Should it be the companies, making decisions on grounds that are really related to their business models, or to their own terms of service and rules? Should it be a body like the Facebook Oversight Board that Jamal was describing — some kind of external but still self-regulatory mechanism, so that the companies regulate themselves? Should the companies act in tandem across the industry, regulating themselves through some kind of shared mechanism? Or should we have public authorities doing this — actual public regulation, with all of the issues that arise when we start thinking about whether we want governments to engage in content regulation, which is always going to be dangerous, or whether we want them involved in some other way, perhaps through transparency and disclosure regulation, which might be valuable? So, you know, we have a significant global problem.
Section 230 is really just an American issue. You know, jurisdictions around the world are thinking through how to maintain these platforms for robust speech, for sharing of information, for debate and so forth, while protecting the public and our democracies from, frankly, the problems that arise here. Section 230 is just one part of that. And I don't want to speak too much more on that — I'm sure we'll get into it — but it's really just one element in a global question mark at the moment.
SIMS: I'm just gonna go back to Professor Greene. Jamal, this oversight committee has been described almost like a Supreme Court, though it has many more members, and you're one of the co-chairs of it. How is the committee progressing? And when are we likely to see it take its own action?
GREENE: So I think Facebook has tended to resist the Supreme Court analogy. I myself resist it in some ways. I think it is actually accurate in some ways, in the sense that we will be a kind of last line of review for content decisions in the way that a court might be, and we will issue decisions and written opinions, and they'll be public in the way that a court's decisions are. But we also have some advisory capacity that's a bit different from a court, so we can make advisory recommendations to the company.
In terms of what's happening now and when we'll be up and running, I think it's fair to say we'll be up and running sometime in the fall. It's hard for me to be super specific about that because there are a lot of different issues going on. The board is twenty people around the world. We all have full-time jobs and this is a part-time job. We often have families or others to care for during this pandemic. And we're trying to build this institution from the point at which Facebook stepped back.
So Facebook stepped back from it in early May, and now we're trying to train ourselves. We're not all lawyers, we're not all human rights experts, we're not all technologists. So there's substantive training happening on the ins and outs of content moderation, on Facebook's own community standards and its values, and on international human rights law and the international human rights system. We're developing procedures, developing our deliberative culture, figuring out case selection, how we receive information from outside sources or from Facebook as appropriate, how we deliberate, and how we draft opinions. And we're hiring staff — we're gonna have about forty staff members, and that hiring has to be independent of Facebook. So there's a lot going on, and we're quite busy.
But we're not hearing cases yet. So, again, I think sometime this fall. I'll say I'll be disappointed if we're not up and running a couple of months from now, but it's hard to say with specificity.
SIMS: And, Joan, you've written and spoken out about the role of news media in these disinformation campaigns — how much attention they put on the information that's being posted around. Could you talk a little bit about that role? In addition to the role that Jamal's going to be playing on the oversight committee, traditional media itself sometimes, you know, pushes disinformation by how much attention it gives to it. And you've talked about the role of local media, especially in places where there's a news desert — where there are no local journalists on the ground who can report and debunk a lot of this stuff.
DONOVAN: Yeah, so there's so many different nuances to what's going on in journalism, but by and large when we think about how disinformation circulates, my team uses a framework where we think about it in a lifecycle model.
So we look at, first, where is the manipulation campaign being planned? Second, how are they littering the web — blogs and then social media — with this content? But the third stage is what's really important, which is: who responds?
So if a journalist picks up the disinformation and is like, “Look, there's you know, eighty-seven people on Facebook claiming that Black Lives Matter is calling for white people to shave their head in solidarity.” You know, this was a troll campaign that we saw: Bald for BLM. Right? It was just these trolls trying to get people to shave their heads for Black Lives Matter. And the whole influence op was kind of jokey-hokey. But nevertheless, they were pretending to be activists and were trying to push this hashtag. But if journalists pick it up at that point, right, if they say this is a big deal, it becomes a big deal, right?
It’s— and there are other folks too though that can make it a big deal. If a politician picks up a piece of disinformation and circulates it, the mere fact that the politician is tweeting or posting about it makes it newsworthy, which is why when we see in the pipeline where, you know, someone like Trump is spreading disinformation, we get into this position where journalists have to cover it because it's newsworthy, but at the same time, they're introducing new audiences to that piece of disinformation. And we see this cycle time and time again.
And then we also look at — and this is where Jamal's work and David's work make the lifecycle model even more important — the interventions by platform companies to mitigate these kinds of attacks. We have seen significant new policies over the last several years, where platform companies are trying to deal with the weaponization of their platforms. And then we look at how manipulators adapt to those changes, right? How do they keep it moving? Do they just get rid of that campaign and start up a new one? If they've lost a network of, you know, two hundred fifty accounts, do they start from scratch again? Or do they just move to another platform?
But by and large, with journalists, the question of whether to cover a disinformation campaign is a really difficult position to be in, especially as a lot of journalists aren't, you know, investigative journalists — maybe they've got to do two or three stories a week, and this one is easy to see and would get some quick hits. We're in this position now, though, where as researchers we've called for journalists time and time again to step back and do more of a threat assessment, to make sense of: Is it only existing on one platform? Has it broken out of echo chambers?
So for instance, with vaccine misinformation, there are communities on every platform that discuss vaccines and children's health and natural cures. And only recently, though, have platforms decided that these particular communities are trouble. And so they then have to go back and decide how do we make sense of this? And how do we as a platform company deal with it?
But the reaction or the takedown by the platform company itself becomes news. Right? And so we're kind of stuck circling the drain, to use a metaphor, where we know that bringing too much attention to these issues can have the opposite effect — it can spread the disinformation — especially with conspiracies, because conspiracy communities often thrive in environments where they get media attention, but they also want to believe that the media is adversarial to them. And so the media calling attention to them, and then them getting shut down or sanctioned, ends up strengthening the resolve of some of these communities. And so it's all very complicated. But I think that ultimately, especially through the work of First Draft, journalists are at least attuned to the trade-offs and can then make assessments.
And when it comes to the local news piece of it, we're going to see a lot of this when it comes to local elections in 2020, where people are now attuned to the tactics of media manipulation and will deploy them, whether they be political operatives, or just, you know, people who are big, big fans of certain candidates. And so the problem really is going to get worse as people innovate and learn those tactics.
SIMS: David, can you talk a little bit about the international scale, where you've focused a good portion of your effort? I'm thinking of Russia and China, and their misinformation campaigns and how they try to weigh in on elections. Talk a little bit about that, and how we can counter it. And also, I want to throw in: does the United States engage overseas in some of the same tactics that are being deployed against us now?
KAYE: Yeah, I mean, your question, Calvin, is a really important one. And first and foremost, for me, what it highlights is that the disinformation problem isn't one problem only. It's a domestic problem everywhere — wherever the platforms have markets, there is a kind of localized disinformation problem. You almost might call it an indigenous problem that's related to politics and media and so forth. Some of it is very much intersecting with things like hate speech and incitement to violence and other kinds of political and related issues.
But beyond that — and also intersecting with it — is the problem of state manipulation, of foreign interference. And that foreign interference, which has been a subject of news reporting that participants are probably familiar with from the last four years, is both a kind of structural hacking problem — hacking into another state's information system — and a problem for the platforms, in the sense that it involves foreign actors, oftentimes using inauthentic behavior to mask who they are, to drum up and perhaps polarize people online in ways that they might not naturally be polarized.
And this was, you know, something we saw regularly in the 2016 presidential campaign. And certainly Facebook saw this. It became more evident after the election of Donald Trump and was exposed more afterwards: Russian disinformation operatives, essentially, would be playing both sides of the game — creating user groups on Facebook that were completely inauthentic and going all the way toward having offline impacts by creating, you know, demonstrations and public gatherings and so forth. So there is a very real problem out there that is related to government interference.
Part of this, when we start thinking about responses, may involve our own criminal justice system. And, you know, we have seen indictments in the United States against members of the Russian intelligence services who engaged in exactly this kind of behavior. And we'll see it in the context of other states as well. It's not just Russia. It's not just China. It's also Iran, and many other states that have a very strong interest — if we're thinking about this just from the U.S. perspective — in, you know, messing with our information system.
And that goes again to Joan's point — to answer your question — that this also requires, to a certain extent, a media environment that will pick up what's happening in the social space and amplify it on, you know, local news, Fox News, you name it. So that's a real problem.
And you know, there are different possible solutions. The platforms, just as one example, really try to identify what they call coordinated inauthentic behavior. Right? They try to identify who's creating this disinformation, where it's coming from, which accounts are inauthentic and are coordinating amongst themselves. And they try to tackle those. That's one possibility. That is one thing that the platforms have to do, and they do this, I think, 24/7.
I think the big question is whether that's enough, considering the scale of the platforms — you know, the billions of users that they have — and also, frankly, the limited presence they have in the United States. Only about ten percent of Facebook's user base is in the United States. So the big question, or a big question, is to what extent the platforms can actually deal with this as a global problem, when we're talking about, you know, real bad actors using sophisticated techniques to mess with domestic information environments.
SIMS: Before we go to questions, just quickly, Jamal and Joan: what can be done by the average citizen to be more, shall we say, media literate? We talked in a past meeting about, you know, what media literacy is. And what can the average person do to become more educated about all this information that has been brought to the forefront — most of it, you know, bad? (Laughs.)
GREENE: Well, you know, I think one of the problems that the rise of misinformation has piggybacked on — really a significant problem — is the decline of trusted intermediaries. Traditional media, for all its problems — and there are problems associated with having a bottleneck with certain intermediaries — taught us that there are some sources that are trustworthy and some that are not.
You know, I think as we go forward, one of the things that platforms try to do — and this is consistent with what David just said about coordinated inauthenticity, but it can go beyond this — is to focus on particular users, rather than on particular posts or content. Which is to say, there are some institutions or organizations that are more trustworthy than others, and there are actors out there who are trying to sow doubt about that. As users, we should be asking which are the sources we trust and which are not: "I've never heard of this content. It's got some typos in it. It says something that's really hard to believe, and I don't see it anywhere else." That's a pretty good sign that someone is trying to manipulate you. And I think we just need to be more aware of that.
Obviously, the problem goes far beyond simple literacy and education, and there are other tools that need to be brought to bear. But at a minimum, it does require some more media literacy.
DONOVAN: So, from our research, I just want to highlight a couple of features of disinformation. One is that it's really meant to trick you, right? And so it's meant to look and feel like news, down to URL spoofing — so instead of ABCnews.com, it's ABCnews.go, and you think you're on ABC News. Or it uses the logos of news organizations; maybe it pulls a really nice photo from, you know, some stock imagery or from another article. Sometimes we see fake news operations where there will be a Facebook page that serves a lot of local news, and every once in a while it'll pepper in a little bit of misinformation or disinformation. Right?
And so what's hard about a media manipulation campaign designed to trick you is that it often works. And the reason it works is part of the second issue we come to, which is that it uses the affordances of platforms. It's actually very low-tech, in the sense that it uses features built into Facebook or Twitter to make it harder for you to even discern if it is, in fact, you know, ABC or NBC that you're going to. The link to the article is often much lower down, and if you look at a URL, sometimes you can't really decipher what's going on with it, or it's hidden behind a shortened bit.ly link or a Google shortener link. And so it's not always easy to look for all these signals of credibility in order to know what it is that you're reading.
And then it also leverages human psychology. Disinformers tend to leverage our attraction to things that are novel and outrageous, right? That's why we would share something. People don't generally share average things. And so the truth is always pretty boring — always a lot less interesting than the conspiracy around it. And so sharing the truth is not even something that people are entirely motivated to do, right? You don't see an incredible number of people sharing that it's gonna be sunny and seventy-eight [degrees] today, right? They'll share that there's a storm coming, or they'll share that it's an extremely hot day.
And so the features of social media have actually been optimized for folks to pay attention to those kinds of information. And then enter these disinformers, who are really entering an information zone where all of the tools and technologies have been built to fit the circulation of novel and outrageous information.
And as you add in social media — right, this idea that you serve media to your social networks — you as the individual become the news distributor. And we often think of ourselves as atomized: my account, my wall, my feed, my timeline. But I think we also have to start to broaden the horizon and think about how we are distributors now, right, and how the choices we make have consequences when they start to add up en masse across networked environments.
And I'll say one other thing about the disinformers, which is that there are really two classes of them. One is what David has pointed out: state-backed ops. And, you know, these are folks that are meeting in a boardroom, deciding on a communication strategy, and building it, you know, from the top down. Whereas other kinds of networks — networked harassment groups or troll groups or the white supremacists that we study — build their campaigns from the ground up, and those are the things you can actually study. We tend to find evidence everywhere of coordinated campaigns that come from groups like this, which actually have to mobilize large groups of folks to game algorithms. That's what was interesting about the Russian IRA case: they actually had to pay for it. Right? And there's this funny meme online where it's a kid and he looks really puzzled and underneath it says, "Wait, you guys are getting paid to do this?" And the fact that Russia had to rent out office space and sort of subcontract this kind of work really shows that those campaigns were not organic or mimetic in any way, or even memorable. And that's why some of the research is tilted towards the fact that these campaigns had low to no impact on voter turnout.
But if you switch the lens and think about these groups that are coordinating campaigns by mobilizing people to share information, but are also using techniques like cloaking websites or URL spoofing, they do tend to have an impact on our culture. And they do tend to really influence the media agenda and the kind of public discussions that we have. Those are the kinds of campaigns that my team is most interested in, and those are the ones that I think individuals need to be more accountable about, and need to be thinking about more: "Am I, you know, kind of a useful idiot in this campaign?" — to use a different metaphor. "Am I sharing it because it confirms what I believe to be true, and therefore I'm a participant in a misinformation campaign?" Or is it the case that, you know, people are just following along — click, like, and share — which is exactly what these platforms want you to be doing.
SIMS: Very well said. I'm gonna now open the conversation to the several hundred Council on Foreign Relations members who are tuning in virtually and we're going to switch it over and hear questions from them that they will address to the panelists.
(Gives queueing instructions.)
Our first question will be from Razi Hashmi.
Q: Hi there, can you hear me okay? So I am a term member; I work in the Department of State's Office of International Religious Freedom. We've seen a high degree of misinformation in South and Southeast Asia, primarily along the lines of religious and ethnic divisions, which has used WhatsApp and Facebook and has led in many cases to communal violence and even some deaths. You talked about the role that citizens play in social media literacy, but what is the role that governments — not just the U.S. government, but foreign governments — and also social media companies operating abroad have in curbing misinformation? And obviously, I mean, I think one thing we're seeing now is the rise of deepfakes being used on these platforms as well. Thank you.
SIMS: David, would you like to…?
KAYE: Yeah, sure. I'll jump in on that one. Razi, thanks for your question. You know, I think that you— what you're describing is a problem that we have definitely seen. I mean, I totally agree with you. We've seen this in many, many places in South Asia and Southeast Asia and really everywhere around the world. And there's a couple of responses that I think the platforms should be taking on.
So one is, I think it's very clear that the platforms have very, very little insight — you know, when you think about the scale of their work — into the nature of local information, local dynamics, how people share information, what the code might be, what language might surface. I think either Jamal or Joan mentioned flagging or other mechanisms that the companies have in order to flag, say, hate speech or incitement. Well, oftentimes that language is coded in such a way that the platforms aren't going to be able to get to it. They just don't have the kind of access to local communities that allows them to have that kind of insight, and they clearly need to build that up. It's one thing to have general rules like community standards for Facebook, or the Twitter rules, or YouTube's community guidelines — and that's very important; it's important for those to be alive to the kinds of manipulation that we're talking about and to incitement to violence. But it's another thing to be able to implement them at the local level.
And I think one lesson we've had from the last several years is that, even though I think there's some goodwill within the companies to make a change, they're still very far from being able to do that. There's not the local kind of ownership of these public spaces that really is important for developing effective responses.
And the problem, often, since you ask about governments, is that governments are either the bad actors or the negligent actors. They are not the ones we can necessarily look to. It's even hard for the companies to have real engagement with some governments, because the governments themselves want to manipulate that space, sometimes to incite violence, sometimes to tamp down criticism or minority views.
So it's very complicated at that level, but I do think, at a very fundamental level, this is something that the platforms really need to be ramping up. And that means ramping up their accessibility to people in local communities around the world. It's a huge challenge.
STAFF: Excellent. Our next question will be from Yael Eisenstat.
Q: Hi, this is Yael Eisenstat. I'm a visiting fellow at Cornell Tech's Digital Life Initiative. I would just love to hear from any or all of you, whoever wants to take this question. One of the things I focus on a lot is misinformation as it relates to the upcoming election and to voting. And I'm curious what you think of Facebook's policy to label all posts right now. My understanding is that they will be putting a label on each post that directs you to another center to get more information about voting. And I know there's some concern that labeling all posts makes people not know what they should and shouldn't trust at all. So I would love to hear anybody's thoughts on that particular choice of policy.
DONOVAN: I can take some of that, which is to say there are a couple of things going on. The politicization of the fact-checking initiative by Facebook has led to different understandings of what it means to get a fact-check label, right? So this has compelled more research in the field trying to understand whether labeling some posts, but not all posts, makes people believe that the posts that are unlabeled are somehow true. So it's sort of the reverse: if you fact-check the misinformation, people assume the platform has gone through everything else that isn't fact-checked.
And even Francesca Tripodi's work on scriptural inference and the ways in which people read a Google search return is instructive here, which is to say that people will assume that the first few results on Google must have been checked by Google to be true.
And so we're in this environment right now where I think the general public does not understand the ailing infrastructure that drives content moderation, and how nascent that technology is at this point, counterweighted with this need and desire to know things in the world. And as a result, when you show up on Facebook and you use the search bar, you do make a bit of an assumption that what you're getting served has somehow been ranked in order by Facebook in some systematic way.
But when it comes to the voting issues, we know that any disruption to the voting process opens opportunities for disinformation and media manipulators. So prior to the pandemic, if we think about the confusion over counting in Iowa, it was a wild night online at 2:30, 3:00 a.m., where people were just throwing out any old theory about what must have happened during the Iowa caucus, because there was this technology that was introduced that people didn't really understand very well.
And so when it comes to voting, the research definitely confirms both sides of the problem: some labeling is going to help, but too much labeling might give different impressions. But ultimately, the big challenge is going to be understanding which pieces of the democratic process are being misunderstood and how that creates a new opportunity for disinformers to step into the breach and really push that wedge.
And I think we know right now, and I'll wrap on this point, that the nuance between absentee voting and mail-in ballots has been made a thousand miles wide by current discussions this week about how you should understand that problem. And depending upon which outlets you're getting your information from, and how you even search for absentee ballots, or vote-in ballots, or mail-in ballots, you may get different information entirely. That, to me, is what at least the Facebook initiative to send everybody to the voter center is trying to hedge against, but I don't know how effective it's going to be. Because really any disruption to the process, even absent a pandemic, is a real challenge and a political opportunity for media manipulators.
GREENE: Can I just add something? I completely agree with everything that Joan said. One of the challenges of fact-checking in this context, as in so many other contexts, is that it's just too slow, right? Those are really the two problems: the problem of trust and the problem of speed.
So even if you can generate trust, and in a politicized environment, especially around elections, it's very hard to generate that trust, things can go viral much more quickly than they can be fact-checked.
And so the strategy that's being deployed is essentially not to fact-check too much, but when you see certain kinds of content, to flood the zone with information that you believe is true. I think this is especially complicated in this particular election environment.
Because as someone who teaches election law, it's actually quite easy to get things wrong about elections. It's also quite easy to get things wrong about mail-in voting and about absentee voting. And so people can be acting in good faith and completely get something wrong. People can obviously be acting in bad faith as well.
And so the strategy, again, is to say: here is a trusted source of information, we're just going to send you there and make sure that it's easy for you to access that information.
But I completely agree with Joan that it's not clear how effective that's going to be in this environment. It's not clear how much the platforms can really do here.
STAFF: Our next question will be from Sonia Stokes.
Q: Sonia Stokes, Icahn School of Medicine at Mount Sinai. Thank you to the panel, and thank you to Joan Donovan for bringing up the issue of vaccination, because when we talk about the next two months, the first thought that comes to my mind as a physician is the fast-approaching flu season and the need for equitable access to flu vaccines. And misinformation about vaccinations is a major detriment to health equity. The problem I see as a physician is that by the time a post is up, the damage is already done. And in the middle of a pandemic, that damage is even deadlier. So I do respect the problems that come with content takedowns in terms of paradoxical amplification of bad information or viewpoint discrimination. But are there preventive measures that can be used to discourage the circulation of medical misinformation on social media that anyone on the panel has seen as effective?
KAYE: Maybe I'll just say, very quickly on that, that's an incredibly important question. You've highlighted both the importance of action in the face of vaccine disinformation, and public health disinformation generally, and the countervailing pressures and policies around viewpoint discrimination and public debate. Those are absolutely in tension with one another. And you know, the World Health Organization a year or two ago came out with a report on how to deal with disease pandemics and what it called the infodemic, which has now become a kind of popularized term for disinformation related to pandemics and public health threats.
And I think this is one area where public education clearly matters. That means not just public health authorities, but also elected officials and other public actors, need to take on an extraordinary, high-profile role in talking about the public health importance of actions like vaccination. That's not going to be enough, but it does have to be a very big part of answering disinformation around vaccines.
Of course, there also has to be a kind of rapid-reaction force that the platforms need to have as well. But as you say, once something is posted, it will circulate even after it's taken down; it's already been the subject of a screenshot and it gets circulated and re-circulated. It's going to be an ongoing problem that needs not just a platform response, and not just a regulatory response, but a public education and a responsible public authority response as well.
DONOVAN: And I'll just add that we will have a paper come out, hopefully later this month, that we wrote in consultation with the WHO about how public health officials and other health communicators can enroll different coalitions and allies to combat health-related misinformation. But the challenge is remarkable right now. Even the naming of the vaccine effort in the U.S., Operation Warp Speed, has presented an opportunity for lots of misinformation and speculation about the safety of the vaccine, because if you prioritize swiftness over safety, we get into a problem where public trust is obviously at issue. But I will pledge to you that that paper will come out. We're in the back-and-forth phases of what feels possible in terms of building coalitions and broad support to make sure that people have access to timely, local, and relevant information on demand.
STAFF: Thank you. Our next question will be from Alan Kassof. Mr. Kassof, can you unmute your mic? Looks like we're having some technical difficulties. So we'll go to Jordan Reimer.
Q: Hi, my name is Jordan Reimer. I work at the RAND Corporation. You talked a lot about Russian disinformation efforts, but no one has said much about Chinese or Iranian disinformation, or any other state sponsor, particularly with regard to the upcoming election. I was wondering if somebody could please say whether those are threats, and whether there are any other states that might pose a threat to the upcoming elections? Thank you.
KAYE: Yeah, Jordan, it's an important point, and I alluded to this before. It's clear that we're not just talking about Russia. I think Iran has been a real active participant in disinformation, and China, perhaps to a lesser extent. There may be other states out there. You might remember back in 2016 there was this talk of Macedonian teenagers who were just making a buck. That's quaint and long gone.
I mean, this is a problem of state interference. At least when we're talking about interference with the U.S. election, those are the three that we're probably most worried about, and that the companies are probably most worried about.
But remember, again, these are global platforms, so we're not the only target, "we" meaning the United States and our electorate. So too is, say, India, or Ukraine in the Ukraine-Russia relationship, and we could go down the list of places where there are clear bilateral problems and rivalries, particularly in the Middle East, where states are interfering in others' information environments.
And as far as I can tell, while the platforms are very much alive to this problem as it affects the United States, and to a certain extent other markets or domestic environments where politicians have a certain amount of sway and voice with respect to the companies, in many parts of the world this is happening and the platforms are basically uninvolved.
So, you know, I think this is clearly a problem that involves many, many states. Calvin asked much earlier whether the U.S. is involved in any kind of offensive disinformation. Clearly the President of the United States is involved in disinformation; that's not really up for debate these days. Whether the U.S. is involved in disinformation outside of the United States, I don't have any particular insight into that.
DONOVAN: I'll just add really quickly that if we think about the ways in which the tactics of media manipulators have evolved over the last several years, we actually haven't seen mitigation strategies evolve at that pace. And so it is an open information environment, and on our team we often refer to it as a networked terrain, which is to say that platforms tend to treat themselves like walled gardens. And as a result, the tactics tend to conform to the terms of service on each platform.
But if you're running most of your misinformation content through, say, a blog or a website, and then simply attaching it to and sharing the links through a set of accounts on Facebook, those things can be really hard to detect if platform companies don't look at the content, don't start to understand what's at stake, or don't start to look off-platform for other evidence of coordination.
And so we've really tried to bring a framework to the field so that platform companies can understand that they're just part of a pipeline of disinformation, and that different features of their platform will get leveraged to sow disinformation on another one.
For instance, YouTube tends to act like an infrastructure or repository for these things. But it isn't necessarily the place where we see a ton of networked coordination. We just tend to see that, you know, videos will get planted on YouTube, and then the affordances of Facebook will kind of carry the day, which is to say that the tactics are available, literally, to anyone.
And if you also study online marketing, you can almost get ahead of some of the problems because digital advertisers are often adept at trying to figure out how to get what we call free reach, like how to get into new networks without having to pay a toll, or having to pay the platforms themselves.
And so there are different places where we're looking for understanding about media manipulation tactics, and then how to get platform companies to build mitigation strategies that scale.
STAFF: Our next question will be from David Broniatowski.
Q: Yes. Hi, this is David Broniatowski at the George Washington University Department of Engineering Management and Systems Engineering. I appreciate that the panelists all brought up the COVID-19 infodemic, as it's been called by the World Health Organization, and I was curious if they could speak to whether this constituted a qualitatively different dynamic around COVID-19, as opposed to the sorts of misinformation and disinformation content and scope that we had seen prior to the infodemic, but also around other topics not related to COVID-19. Thank you.
DONOVAN: I'm going to share in the chat an article that I wrote for Nature about the infodemic and flattening the curve of misinformation. But what we're seeing is a remarkable increase in grift and scams and hoaxes, not just misinformation, at this stage. Because everybody is online and has their attention trained on COVID-19 and coronavirus, the uniqueness of those keywords alone gave an enormous grab bag of actors new ways of reaching the public. Everybody was looking for COVID information, so we saw mass scams: hand sanitizer scams and toilet paper scams.
I mean, if there was a shortage of something, there was a scam for it, including supplements, and this is why the issue around potential treatments for coronavirus became such a problem. Take hydroxychloroquine, which nobody had really given much thought to until Fox News covered it, whereas remdesivir was definitely much better known amongst doctors and scientists online. When hydroxychloroquine started to blow up as a potential treatment, people started to flock to any website or place that was claiming to sell supplements for it, right? And that's before we even get into any of the issues related to the politicization of the cure itself.
We do know that people are listening really intently to what's happening, and are changing their search and their health behaviors accordingly. And as a result, we're starting to see all kinds of other misinformation attach itself to coronavirus, particularly some of the stuff that we're looking at related to when the borders will get reopened. This is an old issue for white nationalists, white supremacists, and ethno-nationalists in the United States. But we're starting to see alliances grow across different countries, where folks see keeping the borders closed as a way of furthering their own political ends, and coronavirus becomes a kind of tactical rationale for that. And so we're starting to see all of these different things ebb and flow and mesh together. Once you pay attention to it, it's almost like you can't unsee it; you start to see it everywhere.
GREENE: And I'll just add, and Joan is really a bigger expert on this than I am, certainly, but my sense is that the tools are not different. The features of this epidemic are such that those tools are more powerful. People are paying attention. It links up with politics in lots of ways; it links up with national security. It feeds into a need for people to have information about things. So it's the magnitude of the crisis rather than the tools themselves. But it tells you that the tools are quite powerful when the opportunity presents itself.
SIMS: Well said. And so Jamal, you've had the last word on this, and we're at the end of the hour, but I want to thank everybody, all the members of the Council for joining in today to this virtual meeting. And also especially to the panelists: Jamal Greene, Joan Donovan, and also David Kaye, thank you very much.
(Gives closing announcements.)
And with that, thank you for joining us. Thank you to the panelists again, and have a good afternoon.