Meeting

Confronting Disinformation in the Digital Age

Thursday, November 9, 2023
Speakers

Sam Gregory
Executive Director, WITNESS

Tiffany Hsu
Reporter, New York Times

Layla Mashkoor
Associate Editor, Digital Forensic Research Lab, Atlantic Council

Presider

Kat Duffy
Senior Fellow for Digital and Cyberspace Policy, Council on Foreign Relations; @Rightsduff

Panelists discuss the challenge of discerning accurate information on online platforms, the rise of disinformation and how news agencies verify the truth in their reporting, and the repercussions of misinformation in the current conflict between Israel and Hamas.

DUFFY: (In progress)—policy, and I will be presiding over today’s discussion. My goal with this discussion and with this meeting is to take this moment to examine how complicated it currently is to map and interpret information flow during a crisis, to determine what information is credible and what is not, and how difficult it is to prepare adequately for how synthetic media’s rise may add to the particular information chaos that we are all increasingly experiencing. 

And so, with that, I am extremely happy and grateful to introduce three incredible guests today who are joining us. 

The first person I will introduce is Layla Mashkoor. Layla is the associate editor at the Atlantic Council’s Digital Forensic Research Lab. Her research interests include disinformation, content moderation, and digital repression in the Middle East. And she has reported on these issues from Abu Dhabi. She has also reported from Hong Kong. And she has trained hundreds—probably at this point, Layla, thousands—of journalists around the world on open-source intelligence-gathering techniques, on fact checking, and on verifying information. 

And so I’m going to ask you, Layla, before I move on to our other guests: What is the most important thing you hope our audience takes away from the conversation today? 

MASHKOOR: Hi, everybody, and thank you for that introduction. 

I think it’s important to know that social media can and is influencing and shaping perceptions of this conflict. And I hope we can come away from today’s session understanding that the digital space is an integral component of conflict, and the most important thing is ensuring fact-based information prevails online and offline. 

DUFFY: Thank you, Layla. Fantastic. 

Sam Gregory is the executive director of WITNESS, where he has served for more than twenty years, I think, at this point, Sam, in various capacities. WITNESS is a nonprofit organization that is comprised of a global team of activists and partners who support millions of people using video and technology globally to defend and protect human rights. And over the past, I would say, five years, really, WITNESS has been very ahead of the curve in looking at the rise of AI-generated audiovisual information, and the impact that that will have in information dynamics and in human rights. 

And so with Sam, turning it over to you, quickly, with the same question: What do you hope our guests—our audience will take away today? 

GREGORY: So AI-generated media has often been overhyped over the last five years, but we’re now starting to see the impact of it in real-world situations. So we need to act on some key tactics, resourcing, and infrastructure choices that will better prepare us for a greater volume and variation of synthetic media; and empower some key stakeholders, including journalists and human-rights defenders. So we need to act, but not panic, at this point. 

DUFFY: Fantastic. Thank you. 

And next up, last but certainly not least, I’d like to introduce Tiffany Hsu. Tiffany is a technology reporter for the New York Times, where she covers misinformation and disinformation and their origins, movements, and consequences. She writes about the quality of information and the threats against shared facts. And she follows false narratives, conspiracy theories, influence campaigns, deepfake fora, and other sources of misleading or inaccurate content, as well as the defenses against those materials—so fact checking, content moderation, media literacy, AI-detection tools, regulation. And so Tiffany has been a reporter for many, many years, and was covering media before she came onto the—what we call the information beat. I also discovered in prepping for this that her great passion is fried chicken. And as I am a native Kentuckian, I have thoughts and feelings that we will go into at the end of this session, probably offline, off-record.

I would like to remind everyone that today’s session is on the record and for attribution, and will be available for streaming on CFR’s website as well as on YouTube after we conclude today. 

OK. And so, with that, thank you all so much. 

Oh, and Tiffany, I’m sorry; what do you want our audience to take away today? 

HSU: Oh, it’s all good. Very briefly, I want everyone to just take a breath whenever you’re dealing with information that seems highly emotional, that seems very extreme, and really just any information, especially coming out of this particular conflict. Take a breath, evaluate where the information is coming from, consult multiple sources, and just generally be careful. Don’t move as quickly as you think you need to. 

DUFFY: Fantastic. Thank you. 

All right. So, Layla, I want to start with you. And what—for our—for our audience, just to let you know, what I’ll be doing is asking each of our panelists to weigh in on a sort of specific question that is directed to their particular lane of expertise, at least for this session, and then from there we’ll move into more of a dialogue among the panelists. 

So, Layla, you have monitored and analyzed open-source information across multiple environments, contexts, crises. Can you walk us through the trends you have seen emerging in this—in the particular conflict that we’re grappling with right now in the Mideast and explain how what you’re seeing either tracks with or diverges from other crises or conflicts that you and DFRLab have tracked? 

MASHKOOR: Yeah, definitely. So the degree of disinformation that we are seeing in the online environment right now is greater than anything I have seen in previous conflicts, and I think this is because of several factors. 

First, this is an incredibly emotional time for many people, and disinformation seeks to exploit that emotion and exploit that tragedy. 

Second, wars are incredibly fast moving, and social media aids in that rapid and never-ceasing flow of information. But verification and determining the validity of information take time, and unfortunately the desire for up-to-date information is far outpacing the ability to verify content. Time is simply a luxury we don’t have in conflict settings. And on top of that, disinformation is primarily spreading in three languages—English, Arabic, and Hebrew—so the scale of verification efforts is greater and requires diverse language expertise to address the various falsehoods spreading in various languages.

And lastly, this conflict has proven that social media platform design is inherently tied to perception of the war. So X, Meta, TikTok, they’re all taking different approaches to how they moderate content, and that can be felt by the general user seeking information. So how you view this situation might depend on which platform you are using to receive your information. 

But the types of disinformation that we are seeing are not necessarily unusual. It’s the scale that feels unprecedented. 

So just to provide some examples of common forms of disinformation that we’ve seen circulating, we’re seeing old and outdated footage from previous wars circulating. Often, a lot of it is from Syria. 

We’re seeing out-of-context footage circulating, so that’s real footage that is being misrepresented. 

Another source of false footage is video games or movies and films. 

We’ve also seen impersonation accounts. So these are people who pretend to be journalists or media organizations, and they’re aided by Twitter’s new verification model. And these accounts seek to appear legitimate as they spread falsehoods. 

We’ve also seen forged government documents spread online. 

We’ve seen misidentification of hostages or victims. 

And another major form of disinformation spreading in this conflict is claims of false flags or crisis actors. 

And of course, there are also AI-generated images that are a factor here as well.

But I think the big question is, why is this happening? Why do people spread disinformation? And there are a number of reasons and motivations. So this isn’t comprehensive, but three key motivating factors that we have observed are: They might be trying to further a political goal; they might be trying to sow confusion and distrust so people get overwhelmed or frustrated and they disengage—there is a saying: If you can’t persuade them, confuse them—or people might be trying to serve their own personal interests. They might be trying to get more followers, gain engagement, and latch onto viral topics to serve themselves. And then, on top of all of that, there are people who are just struggling to navigate this polluted online space, and without trying to manipulate the conversation they will share misinformation unknowingly and without malicious intent.

But all of this contributes to an information disorder that makes it very difficult right now to discern truth from fiction. There are counterclaims and counternarratives circulating for all types of information that are being shared. 

But what we do know for certain is that disinformation is dangerous because it inflames an already tense and hostile situation. It fuels hate speech. It furthers violence. It validates those who seek to incite more violence. And all of that compounds all of the harms that we are seeing. So information integrity is vital in times of war, and unfortunately the flood of disinformation can drown out the important and authentic information that is spreading. So, as I said earlier, we have to ensure that evidence-based information prevails online. 

DUFFY: Thank you so much for that. 

Sam, I’m going to turn it over to you now. You know, when this—when this particular crisis began in the Mideast, I was—I was immediately worried that this would be the first conflict where we were seeing AI-generated audiovisual information, because of generative AI tools, just get pumped out. And for those who are joining us today, when we talk about AI-generated information, we’ll also be talking—it’s also referred to as “synthetic media.” So if we throw around the phrase “synthetic media,” we’re talking about information that was created by AI, right, or, to phrase it better, created by a person using AI.

And so that’s one of the things that I was really examining; like, are we—now that we’re in the land of generative AI tools being publicly accessible, are—is this going to be the first time that we truly see synthetic media in a real-time conflict just pumping out? Layla and Tiffany, we can get more into whether or not that’s actually happening in a bit. But, Sam, I wanted to ask you to think a little—to take it up a notch for us and say, like—you know, WITNESS has been leading on synthetic media issues now for years. Your reports are on President Obama’s, like, AI reading list. Congratulations. That was an exciting little, like, post. You’ve been testifying before Congress, I think, like, every month—(laughs)—for the past few months. Can you—can you walk us through why synthetic media—why AI-generated media—is top of mind for so many policymakers at this moment, and where it’s creating new challenges and new complexities? 

GREGORY: Thanks, Kat. Yes. Just yesterday, I was in front of the House Oversight Committee’s meeting on deepfakes, and I think you can really see there what’s happening in terms of policymakers really starting to worry and panic, and it’s across a spectrum of issues. And I think it reflects what’s happened in the last year, which is that we’ve gone through a previous hype cycle where there were definitely harms and possibilities coming out of the synthetic media field, mainly the targeting of women with nonconsensual sexual images. Huge problem that continues to expand. But in real-world politics and conflict situations, they really were not happening.

And what you’re seeing now is that the technologies are allowing, you know, a much easier, much more commercialized creation of this; it doesn’t require computing skills, doesn’t require you to do anything more than put in a language prompt for an image generator or put in, you know, a text prompt for a voice generator. And of course, this has really exploded in the last year based on the research before. And so I think policymakers are worried because it suddenly feels very visible.

You know, and the topics of conversation, of course, go across the range between the fog of war—which I’ll focus on in a moment—but also these very rooted concerns about how these are going to impact particularly on women in public life and women in private life targeted with nonconsensual sexual images. And I don’t want to separate that from the U.S.—from other contexts, because in WITNESS’s global work we know that’s a threat everywhere, and it targets journalists and human rights defenders, and pushes them out of the public space. 

So you see those discussions, and obviously there’s a focus on elections. But even in the hearing yesterday, the question of the fog of war and distinguishing true and false came up in the opening comments from Representative Mace. And I think this also reflects our uncertainty about how to be able to handle this, right? And there’s a broad uncertainty about the technologies available to handle this and the skills available. So that’s the starting point. 

The second thing I’d say is this has, obviously, layered onto existing problems. And in WITNESS’s work, we’re always historicizing and contextualizing, right, to some of the other issues that Layla is mentioning, that I’m sure Tiffany will also touch on around, you know, existing massive problems with decontextualized media that occur in conflicts and that we’ve seen for many years in our own work; an under-resourcing of the media, including in their capacity to do this; and then some newer dynamics, right, the kind of OSINT for clout, open-source intelligence where there’s a rush to kind of prove and disprove stuff in public, often when people don’t actually really have the skills to do this, right?

So what we’ve seen in our work—and this relates both to analyzing what’s happening in the current conflict, but more broadly—is that you’re starting to see a growing number of AI claims. I want to name that I think they’re a very small percentage compared to other types of mis-contextualized media. We still have to gauge that. But with these trends in access and commoditization, you’re seeing it particularly with images and also, less so in the Middle East conflict currently but in other contexts, with audio. You’re not seeing video, and I think it’s important for people to remember that video is pretty complex to do in complex real-world war situations. It’s much easier to mis-contextualize something. But to make an image—and, for example, the images we’re seeing in the context of the current conflict—some of them I would describe as propaganda images; they’re someone creating a kind of inspirational participatory propaganda moment. Others are attempts to show, you know, an AI image that represents reality.

And then the other thing that’s happening—and this is something we see a lot and it’s really important for people to understand—is the plausible deniability around real-world content in the absence of detection tools and measures, right? So you see people saying, well, that’s being made with AI, or claiming something’s being made with AI. In fact, we see that much more frequently than actual AI in the global context, right? People say this recording of a politician was actually faked with AI when it was, in fact, real; this recording that people are using for OSINT is faked with AI when it’s, in fact, real. And so I think there the key question is we have to work out how to understand that plausible deniability—it’s sometimes called the liar’s dividend—alongside an escalating use of AI images and audio. 

And both of these problems tap into two fundamental gaps we have. One is around access to detection tools and, indeed, even the way those detection tools work. We don’t have good generalizable tools that work across contexts, work with compressed media, work in social-media settings, work across multiple languages. And we don’t have access to them for journalists, and human rights defenders, and frontline people who have been given the responsibility of doing this verification. So in the conflict you’ve seen people using public tools that give false positives, false negatives. And they try to do it fast, which contributes to the problem. 

The second thing is we don’t have an infrastructure to show authenticity and provenance of what is true or false—or authentic, perhaps better said—to explain how something is created. So we’re really dealing with some technical gaps that are reinforcing resourcing and skills gaps, and you’re starting to see that escalate in the current conflict in a way that is likely to increase in the coming months and then in future similar types of conflict situations.

DUFFY: Thank you so much, Sam. 

And, Tiffany, I want to now turn it over to you. You know, as we’ve heard from Layla, as we’ve heard from Sam, as we all know—(laughs)—sort of deeply at this point after years of working in these areas, information flow has become so complicated that at this point multiple media outlets, like, including the Times, have not only built entire teams focused on vetting and verifying this information that—I realize we’re throwing around this term open-source information as if it’s like a normal term in the normal world and not just jargon in our world, so I should clarify. When we talk about open-source information, everyone, we’re just talking about information that’s publicly available, right? That could be satellite images. It could be social media posts. It’s just things that are out there that don’t exist as a classified thing in the intel space, but instead that are out and available and open for the public to look at. 

So you now have whole teams in media that are dedicated to looking at this flood of open-source information that’s coming out trying to figure out what’s credible, what’s not, what is actionable for reporting. And now, in addition to that, you have entire information beats that are being created in those outlets that have the resources to do it to really report on what is happening with information dynamics and what’s feeding into those. And that’s really where I think you and your journalism live, because in the past six weeks you’ve covered everything from the impact of synthetic media in this crisis, to the impact of Elon Musk taking over Twitter, to the access that children now have to graphic images of war. 

How are you seeing your area of reporting evolving? And as you’re watching this crisis play out, what feels specific to you about this particular moment and the trends and the dynamics that you’re seeing versus, again, what you’ve seen in other situations, where this is sort of an iteration on a larger theme?

HSU: Yeah, definitely. I always say that this is a beat that gives me—(laughs)—the most job security that I’m the least happy to have because I’m always busy, it’s always distressing, and it’s always fascinating. 

So, like you mentioned, there is a team of reporters at the Times. It’s myself, Stuart Thompson, and Steve Lee Myers. We also do a lot of work with Sheera Frenkel. We’re working frequently with the tech team, with the politics team, with the climate team. You know, I’m working on a COP-28 story. We’re working with the foreign desk. I have a story coming out about China and Taiwanese disinformation. We’re working across every section. And the Times and many other outlets have realized the value in having groups of people with expertise in this subject really drilling down into this, because at the end of the day disinformation and misinformation, false narratives, conspiracy theories can exist in any story about any news subject. 

As a result of the increased interest in this field, there’s also been a backlash, as you might expect, and it’s been mirrored around the world. You know, people are saying that disinformation reporters are agents or affiliates of the government, that they’re politically motivated. And so we’re often in the same boat as researchers and fact checkers and other disinformation watchdogs in that many reporters on this beat are dealing with harassment, and online abuse, and threats. And so it can be a really tricky environment to navigate. We’ve all done very uncomfortable stories. We’ve talked a lot about, you know, the balance between when we need to inform readers versus when we’re maybe unnecessarily amplifying false narratives when we attempt to inform readers. 

So, generally speaking, my peers and I think a lot about speed. Information flows are so fast these days that the temptation to catch a news cycle is always there. But our beat especially requires really intense consideration. It requires, like, layers of confirmation and a fair amount of hedging. You know, I sometimes find that it takes me longer to fact check a story than it does to actually report the story because I’m taking so much—I’m making so much of an effort to triple check everything, to talk to sources multiple times, to talk to a lot of sources that don’t end up in my stories, just because we need to have that buffer, because we’re talking about fact. 

So this sort of thing has come up a lot with this conflict in particular. There was just such an immediate flood of content. You know, a lot of it was manipulated. It was fake. It was polarizing. So that volume has just really been stunning. It just caught everyone off guard.

So there are a couple things about what’s happening now in Israel and in Gaza. You know, there’s so many different news elements all at once and they’re constantly shifting, so many different angles to take on many different stories, which doesn’t always happen with a news event. 

There are hordes of different players. You know, online, my colleagues have reported that this is—this is a digital world war, right? China’s involved; Russia. There are Israeli and Palestinian expats. There are people weighing in who have no discernible personal connection to what’s happening. But what that means is there’s just been this outpouring of content. And it’s mixed with intense emotion, which is often a key ingredient in making misinformation or disinformation very effective. 

Local media is, as you might expect, stretched incredibly thin, so it’s been very difficult for many outlets to confirm and contextualize on the ground. And we always say that face to face is often much more valuable than reporting from behind a screen, so the fact that people have been struggling to do that makes it so much more important to take just incredible care when you’re dealing with the information. A lot of the parties that are involved in this particular conflict are well-versed and sophisticated in their ability to work with information. 

And so, in conclusion, I’d just say that because of all of these factors that really are quite unique to what’s happening in Gaza and in Israel, reporters and really anyone that is working with information coming out of that region just needs to pause when they’re dealing with it. You need to back up what you know as much as you can with as many sources as you can, because the information that’s coming out of this is consequential in a way that I don’t often see. 

DUFFY: I’d like to follow up, sort of. I’d love all of your thoughts on—I mean, I have so many questions, but I’m going to start with this one. One of the things that has really struck me is—and, Layla, you spoke to this a little bit—your perception of what is happening, right? Your perception of reality is based very much on where you are getting your information about what is happening. That’s not new, right? Like, that’s the history of human existence, right? Information has always been sort of curated in some form or fashion. I think what we’re dealing with at this moment is that it feels like no information is being curated at all, but in fact each of these platforms has different approaches to how it is putting information in front of people, right, and how people are operating with that information. You are going to have a very different lived experience of this particular conflict if you are in a closed WhatsApp group of friends and families who are local to the region and who are exchanging information than you are if you are primarily understanding what’s happening, let’s say, by reading the New York Times, right? 

I’d love to hear your all’s thoughts on both how different platforms might be creating different realities for individuals based on their particular approach to the information that they will show, right, and amplify, and the information that they won’t, because I suspect many of our members rely on major media outlets for curated information. And then, also, many of our other members, colleagues, children are getting their information directly from the social media outlets, and so it’s hard to align on facts. 

So I guess, Layla, could I start with you on that? But I would love—Tiffany, Sam, like, please, please weigh in. 

MASHKOOR: Yeah. I think it’s a very important question, and I’ve been looking quite closely at each different platform’s moderation efforts and how they are responding to this. So I might just quickly go through a few platforms one by one, try to be quick here. 

But starting with Telegram, which is sort of where the inception of everything happened on October 7, it’s the platform of choice for Hamas, and that’s because it lacks significant content moderation. And the CEO has defended the platform’s hands-off approach to moderation, but that system of light-touch moderation actually serves a utility to extremist groups and terrorist groups. And we’ve seen Telegram remove Hamas channels at the request of the Apple and Google Play stores, but if you go to Telegram’s website and download the app from there the content is not banned. So there is sort of this contrast in their moderation efforts in trying to appease requests that come to them from their partners and their app stores, but then also trying to maintain this sort of—their policy on lack of moderation. But we’ve also seen Hamas preparing for disrupted communications, so they are promoting right now an app to their supporters and trying to sort of centralize in a space where they can control things. 

To move on quickly to Twitter, this has been pretty widely discussed, but there is a pipeline from Telegram to Twitter. And one way that disinformation emerges is in that transition where people might try to add analysis or they might be using machine translation, but in an effort to share something on a different platform things get a little bit distorted. And we’ve also seen some of the guardrails that existed at Twitter and some of the changes made under Elon Musk contribute to the spread of disinformation. 

So the revamped verification system, as I mentioned, makes it more difficult to find trustworthy voices and it prioritizes those who pay for verification. So it’s much harder to find credible voices now. And the monetization policies also encourage people to be extremely active, to post frequently, and that goes against what Tiffany was saying about moving cautiously, moving slowly. But there is a monetary incentive to pollute the information space. 

And then another important change was the changes to the API system, where it’s now prohibitively expensive to access. And that makes it much harder to understand the contours of the conversation and who might be manipulating the online space. 

And perhaps most damaging is the dismantling of the trust and safety team, the people tasked with those really, really hard questions about what is permissible online and what is not.

I’ll just quickly touch on Meta as well. It has more than 200 million users in the Middle East, so it is a very popular platform here, and it also has a long history of engaging with this conflict. Since the 2014 war, Facebook has been a player in the Israel-Palestine conflict. It has met with leadership on both sides over the years. So this isn’t something that they were unprepared for or that is new to them. But what we’ve seen is Meta has primarily been a source for journalists in Gaza who are using Instagram to share information from the ground, and there have been numerous reports coming up about content being removed, accounts being restricted, shadow bans. And Meta has been among the more communicative platforms, so it is trying to engage in a conversation about changes that it’s making, trying to address these claims, and it is trying to offer a level of transparency into how it is navigating this space.

But what is evident is that there is a lot of confusion about what is permissible, what will result in account restrictions. And this shows that it’s not just a matter of platform design but also user perception of platform design, because that will inform how people act. So we’ve seen things emerge like algospeak, which is intentionally misspelling words to avoid takedowns. And so even if people perceive that their content is being removed or that they might be acted against, it will influence how they use that platform. 

And just to quickly touch on TikTok, lastly, they have a very strict policy against graphic content. So what we’re seeing is TikTok is far more used for conversation and discussion as opposed to firsthand accounts of information or things from the ground.

DUFFY: Thank you, Layla. For those—for those who may not be following—just quickly—when we say Meta, I think most people know this, but Meta is the company that owns not only Facebook but also Instagram and WhatsApp. So those are three major platforms that are all operating in that region in different ways. 

And when we talk about Twitter’s API, basically, what we’re talking about is that because Twitter was a public platform for many years, researchers could, for free or at very low cost, actually look at a lot of the information analytics that Twitter had, and it made it much easier to do analysis and investigation about how outside actors were using Twitter. And so now that sort of transparency for researchers is gone. Because people who are striving to influence the information environment operate across many platforms, Twitter was a really good starting point for folks to understand how an actor might be using the information environment to manipulate information. And so the lack of that research tool is now impacting not only our collective understanding of how information is flowing over Twitter, but also information dynamics in the background across many platforms. So I just wanted to give some context on those things for those of us who are listening who are lucky enough not—(laughs)—to do this all day, every day.

I want to say to our audience, we’re going to—I’m going to open up for questions in about three minutes or so. So if you have questions, please start raising your hands. And then Tiffany, Sam, would love to hear any thoughts from you all on the previous question. 

HSU: Yeah, if I can jump in, I have a couple of thoughts on this. The first is, it’s important to note that while it’s true that the platforms could often do more on the moderation front, I think in this particular conflict they’ve just been caught entirely off guard. I mean, they’re seeing things that they would have never expected to see, that they don’t know how to handle. I mean, if you talk about synthetic media, a lot of these platforms are using the detection services that Sam talked about earlier, which are unreliable at best, right? They’re not capturing a lot of content that is probably AI-generated.

And conversely, they don’t have the tools to be able to evaluate when someone is maybe claiming that something is AI-generated but it’s authentic, right? My colleague Sheera Frenkel reported that Hamas had hijacked some of the accounts of hostages in order to harass their family members. I mean, Meta had no idea that this sort of thing was going to happen because it had never happened before. So I think at a lot of these platforms there’s been kind of a rush to figure out what to do as they’ve been inundated with a flood of very tricky content to parse through.

The other thing I think is of note is that we tend to have a very U.S.-centric view of the platforms, when in fact different demographics in different countries use platforms very differently, right? In Taiwan, for example, Line is dominant. The Taiwan FactCheck Center has a really interesting report up about how disinformation about Israel and Hamas is spreading online there in particular, right? It’s not really Instagram, or Facebook, or X in a place like Taiwan. It’s a platform like Line. And you have to look at, like Layla mentioned, you know, the black-box platforms that researchers and journalists can’t get into because they’re private chat rooms. And, you know, we’ve spoken to a lot of experts who’ve said that over time, especially as platforms have faced political pressure to moderate more, a lot of the really concerning content is moving into these closed rooms that are difficult to gain any sunlight into. So I think that’s happening more in this particular conflict.

DUFFY: And I think you raise a great point on Line. We also—if you think about this from a geopolitical context, right, many of us are foreign affairs thinkers, your perception of this if you’re relying on WeChat as your primary platform is going to be completely different than your perception of this will be, you know, in the West, or if you’re in a—you know, if you’re in the West and in a diaspora community that uses WeChat, that’s one thing. But overwhelmingly, I think many in the West don’t have much insight at all into how WeChat operates or, frankly, what information is flowing there. 

Sam, did you have anything you wanted to add to that? 

GREGORY: I’ll just make two observations in relation to synthetic media. I think the first, which Tiffany just touched on, is the platforms are not yet well equipped nor are the fact checkers who work with them to do the detection, nor are platform policies all that clear. Meta has been sitting on a platform policy update since January 2020 on how it handles synthetic media and has not released it. So that’s one side of it. The other is to remember this sits in a bigger generative AI discussion about the rapid release of tools by the platforms and others into the world without many safeguards. So to give an illustrative example there, in WhatsApp we’ve had an ability where you can create AI stickers, basically AI images.  

And one of the things that journalists discovered there was that when you put in a word like “Palestinian,” it would generate an image of a person with a gun or a child with a gun, and it wouldn’t do that if you put in the word “Israeli,” right? And so that’s the kind of gap in safeguards and in preparedness for releasing these kinds of generative tools into real-world environments that we see across the board. So this is a platform responsibility. It’s an AI industry responsibility that’s being shirked. And the commercial race is to release these tools without thinking about their impact and their discriminatory impact.

DUFFY: Thank you. 

With that, you guys, I’m going to open it up to questions. I’ll remind our audience that if you would like to raise a question, please raise your hand in the chat. And though I know we frequently at CFR take questions individually, we have so many people joining us that I’m actually going to batch questions. So I’ll take three questions at once and then we’ll turn them over to the panelists, and then we’ll try to get in another round of questions. My hope is that that allows us to actually get more people’s questions and hear from more audience members. And so with that, if I can turn it over to my CFR colleagues. 

OPERATOR: (Gives queuing instructions.) 

We’ll start with Dan Caldwell. 

Q: I’m Dan Caldwell, professor emeritus of Pepperdine University in Los Angeles. 

I’d like to focus on AI deepfakes and both the public and the private AI deepfakes. A month ago, Tom Hanks and Gayle King, the CBS anchor, called attention to some AI deepfakes that had been produced of them endorsing certain products. And those are relatively easy to disavow on the part of Tom Hanks and Gayle King. But I’d like to focus on private AI deepfakes, particularly in the intelligence community. Because you’ve focused on propaganda and disinformation, and I’m wondering in particular what individuals can do to identify AI deepfakes of foreign intelligence agencies trying to get information using both visual as well as audio AI deepfakes. 

DUFFY: And, Dan, sorry, if I could just ask a clarifying—when you say private versus public deepfakes, are you speaking about deepfakes that are of public figures versus private individuals? Or how are you distinguishing private deepfakes versus public deepfakes? 

Q: The public AI deepfakes would be readily visible on the internet, whereas private deepfakes would be targeted for particular individuals to get information from those individuals. And, Kat, you referred specifically to limited resources. If a private deepfake is staged against somebody in the U.S. government, they have the U.S. government resources to rely on. Whereas individuals who are not in a governmental agency or a large corporation don’t have those resources. 

DUFFY: Got it. Thank you, Dan. I appreciate the clarification. 

OPERATOR: We will take our next question from James Galbraith. 

Q: Thank you very much. 

And I would just like to get a little bit more concrete guidance on the question of the kind of visual information that’s coming out, in particular in the last few weeks from Gaza. I’ve seen a lot of it. You see bombs going off. You see the wreckage of buildings and entire neighborhoods. You see the craters. You see people being brought into hospitals. You see children who are badly wounded. You see all kinds of horrific images. The only moderation that I can see, that I’ve noticed, is that sometimes the images are pixelated, with a notice saying it’s graphic, and you have to tap on it before you can see vividly what’s going on.

But it’s entirely consistent, though more detailed, with what you might see on CNN, for example. And I’m just wondering, exactly how am I supposed to change, or alter, or condition the way I react to these images on the thought that they—that they might be exaggerated? They don’t leave any—they don’t leave anything to the imagination, and very little to a reasonable person’s view that something is being distorted in most cases. In fact, I haven’t noticed anything that looked to me like it was, you know, a deliberately manipulated image, although obviously the commentary, you know, is what it is. But what you’re seeing, so far as I can tell, am I supposed to be suspicious of it? And if so, why? Thank you. 

DUFFY: Thank you. And our next question? 

OPERATOR: And we’ll take our third question from Valentina Barbacci. 

Q: Yes. Hello. Thank you very much. 

I would be interested to hear a bit more about the specific tools that the various teams within news agencies and other entities are using to distinguish, you know, truth from falsehoods. Not only because, you know, it’s a useful thing for us to know—(laughs)—quite frankly, as an individual. But it’s interesting that, you know, these teams are being built and are a big focus with these agencies, because I think there’s a lot of deterioration of confidence in these entities over the last years because some things that were deemed initially conspiracy theories turned out to be true. So then that erodes—it makes it, frankly, tough for, you know, the people like Tiffany to do their job, when they’re obviously very earnest about what they’re doing and trying to actually do a really good job at distinguishing that. 

So I’d love to hear a bit more about the tools that are employed, to the extent that you can share them and they’re not proprietary. So both as an exercise to understand the lengths to which entities go, but also so that we, as individuals, can perhaps employ some of those same methods in distinguishing fact from fiction, because I don’t know—I’m personally not so confident anymore in using Snopes.com, for example, or other fact-checking entities, because of what I mentioned—that some of the things they claimed were conspiracy theories turned out not to be a year or two later. And it becomes difficult to navigate what is truth anymore. Who do we go to? There’s a lack of trust in these once trusted, you know, establishments and institutions, and therefore it erodes that very ability to trust in entities that claim to know fact from fiction. So I’d be grateful to understand a bit more from your perspectives on how you distinguish, what tools you employ, and what the agencies are doing to improve and bolster their efforts in this. Thank you.

DUFFY: Fantastic. Thank you so much. All right, so I actually—you know, I see a real sort of nexus between these questions, and I appreciate all of your asking them. I think we’ll start, Valentina, with your question. We’ll really talk about what is out there? Like, how can you get a sense of, you know, what is real, what is not, who’s good at that, what resources are out there? And then from there, I think for our panelists, I’d like to move on to James’ question about how you think about what you’re going to consume, how you react to it, what boundaries you might want to put on yourself, how you might want to change your media diet given these questions around verification. And then I think from there, Dan, we’ll close with your question, which is both around, you know, we’ve been talking a lot about the public dissemination of this. But what have people seen, or what are people’s thoughts on the use of deepfakes in—and synthetic media in particular—more for intelligence targeting and for specific targeting of individuals?  

And so let’s start with just the tools, the resources. What do you all look to? What do you all use? And where do we have—where do we have clear gaps where, just from a technical standpoint, we are still really lacking, and everyone’s kind of just winging it? Any of you? Honestly, just jump in. All of you will have things to say. 

HSU: Why don’t I jump in, just because Valentina mentioned news outlets specifically. To start, I need to say that the difficulty of my beat is that truth is just generally often not cut and dry. It’s full of subtlety that shifts, it’s based on perspective. I mean, experts will often tell me that conspiracy theories are the strongest when they’re based on a kernel of truth. And then they’re just spun out into, you know, often bizarre directions, but often in directions that seem like they would pan out. And, you know, researchers generally will say that a piece of evidence can be interpreted in different ways. And so if you read the way my team writes stories generally, we tend to be very careful about determining that something is fact. Like, black and white, this is this fact, right? Especially if we can’t back it up 100 percent. 

And I think that strategy was born out of years of practice of realizing that there are a lot of different ways you can cut a certain story, right? And so the best service that we as journalists can provide, especially if we’re not, say, experts in health but we’re writing about COVID-19—the best thing we can do is say: This is what the preponderance of evidence seems to show. This is how we arrived at that decision. These are the tools that were consulted to help arrive at that conclusion. You know, but at the end of the day we’re trying not to say this is definitely what has happened, or this is—you know, since we’re talking a lot about AI and synthetic media broadly, we’re very careful about saying this video is authentic, this video is AI. Because, frankly, we can’t tell a lot of the time.

This is actually—sorry. I should—I should back up and say, this is generally more the case with, like, audio or images. Deepfakes, like Sam mentioned, videos are a little bit harder at this point in time to make just very realistic, especially on the fly. But the way—you know, I wrote a story with Stuart Thompson recently about how AI is a specter in the Israel and Hamas conflict, right? It’s not so much that there have been—that we’ve been inundated with, you know, convincing examples of AI fakes. It’s more that the threat of AI, the availability of AI technology, is making it so that people can question what’s real and what’s not. And so for that story what we did is we talked to a lot of AI experts, we leaned on, you know, academics, people who work at detection companies, people like Sam who have dealt with this sort of thing a lot more than we have. And we just asked them to kind of evaluate the specific signs of certain pieces of content, right? Like, does an image show the blurry fingers that tend to show up, you know, in particularly crude examples of AI-generated content? Are letters difficult to read? 

DUFFY: And, Tiffany, I’m going to—I’m going to jump in there and actually ask, like, for Layla and Sam in particular, because you all are really doing this analysis of the OSINT on the reg: What are you looking for? What tools are you using? What fact-checking networks are you using? And where is there an actual technology gap that needs to be fixed? Layla, I’ll start with you.

MASHKOOR: Sure. I think, Valentina, the question you posed is one I’ve heard multiple times. A lot of people are having a very hard time trusting what they’re seeing online right now. And it’s important to maintain skepticism, but it’s also important to trust credible sources. And so one of the first things is anyone who claims to immediately know what happened often doesn’t know what happened. As we’ve said, it takes time to establish facts. And often you are leaning towards the most—the most evidence-based approach. But, as Tiffany said, it’s hard to be 100 percent in times of war where we can’t do independent investigations, we can’t be on the ground collecting evidence. So we are relying on secondary sources of information. 

But for me personally, I always try and find that first originating source of the claim, the video, the photo, to see where it first came from. Is that a credible person? Because things will proliferate online, and it’s sometimes hard to trace back where exactly it came from. But if you can find where it first emerged, that’s often a really good indicator of the credibility of that information. And just to sort of quickly touch on James’ point as well, a lot of the information we’re seeing right now, at least from Gaza, is coming from journalists on the ground, and it is authentic video that’s coming from people who are witnessing this and documenting it. But then there’s a proliferation of accounts that are supporter accounts just reposting things. And sometimes that’s where you see false video and out-of-context video get mixed in with the authentic video. So I encourage people to try and get as close to the firsthand sources of information as they can, and rely less on the aggregator-style accounts, as that leads to a few more mistakes, I think.

DUFFY: Thank you, Layla. And if I could take the moderator’s privilege to add on to what you said, James, in response to your question, I also think, especially in the foreign affairs community, there can be this pressure to be taking in the information in real time because the information is available in real time. If it’s not your job to take in the information in real time, and it is not your job to expose yourself to that type of imagery, I think there’s truly a hard question to be asked.

Which is to say, do I have—what is it serving for me to be sort of engaging in this in a sort of real-time OSINT flow, as opposed to not engaging in that particular space and really—and relying on those whose job it is to vet and verify what information I should see. And I feel that could be counterintuitive in our space a little bit, because we’re used to wanting to be in the know right away. But secondary trauma, based on looking at these images, is very real. And so I also think we all have to ask ourselves at this moment what our media diet is and what the—and exactly what it is serving with those choices. 

So, Sam, I want to turn over to you to ask both on the technicalities, but also going specifically to Dan’s question, right, because this is an area where you all have focused so much. Where are you seeing technical gaps in the current capacity to do things like verification? And also, how are you thinking about very targeted deepfake use for intelligence purposes, national security purposes, economic gain, whatever it might be? 

GREGORY: Yes. I think, you know, the general observation on this is, at the moment, we’re encouraging people to try and spot these with their eyes or their ears. And that’s a terrible move in even the medium term, right? So it’s going to be hard to distinguish certainly images and audio in a human way. The problem is that the tools are not adequate to do it in machine-based way. So we’re a bit stuck in a bind at the moment if we want to rely on technology broadly, right? The detection tools for image, audio, and video need to be used as a suite of tools and you need to have the ability to interpret them.  

And we neither have that suite of tools nor do we have it in the hands of a range of journalists, right, which we want as well, right, in order to be able to have a range of different ways of analyzing and have a shared collective understanding of what’s happening. And the reasons for that are technical, right? That it’s challenging to do detection. It’s because of the range of ways people create these media. And it’s because of the range of ways in which they’re shared, including in compressed ways on social media and edited with other content. So it’s a rather depressing story, because our eyes and ears deceive us, but also the tools are not yet there. So how do we do that? 

To the question of the private deepfakes, I think there’s two observations to make here. This is where a lot of this is happening now, in very everyday terms, right? We’re seeing a lot more audio scams with voice cloning, and you’re seeing some deceptive sort of real-time deepfakes. I think there’s two things to observe there. One is that the detection gap applies there as well, and it’s even more likely to apply to an individual, right? It’s very unlikely you’re going to be able to sort of sit down and run an audio scanner on a fake voice sent to you as an individual. And I wouldn’t trust, like, an online image detector to analyze an image you’re sent that might be deceptive.

The other thing to ask is when it doesn’t matter whether you can detect it or not. And so in the case of nonconsensual sexual images, the point is not usually that it is faked. The point is that it is harassing or can create harm when it’s exposed, or creates harm just by its existence and its deployment towards you. And detection doesn’t matter. So I think we have a detection gap, but we also have places where detection doesn’t matter at all. 

DUFFY: Really helpful. Thank you, Sam. And, OK, I think—everyone, I think we have time for one more question, because I definitely want to give our panelists a minute to wrap up as well. And so, Dina, can I turn it over to you for one more question? 

OPERATOR: We’ll take the next question from Gustave Lipman. 

Q: Thank you very much for your expertise. This has been very informative. 

Understanding that there’s all kinds of misinformation and disinformation regarding the war, we’re also in an election year coming up in 2024. Understanding that there has been significant efforts to provide misinformation and disinformation coming from overseas, with the Hunter Biden investigation getting heavier and with the Trump investigation continuing, what do you see as things that the media has learned from the mistakes around the Hunter Biden laptop, the failure to identify that as an actual story? What has changed moving forward that provides better protections for media outlets to get to the real truth, as opposed to being manipulated? 

DUFFY: I certainly open it up to any of our panelists to speak to that. I think I might add, though—if you don’t mind—I might add another aspect to that question. Which is that there are currently a number of lawsuits and subpoenas that have been wielded against many of the sort of leading information analysis organizations and universities in the space. And so one of the things as we look to elections, right, we’re not just talking about the 2024 elections but, you know, at CFR we think—you know, thinking globally. You know, in the coming year we’re actually going to have the largest election field globally in a generation. We won’t have another electoral field this big until 2048. We have, I think at this point, seventy-eight countries with eighty-three national level elections affecting 3.9 billion people. So 49 percent of the world’s population are going to the polls next year.  

And I think one thing that your—Gustave, your question really leads to is also this relationship between media and platforms, right, and between political dynamics and platforms, which is now influencing how people can do information research, not only around the domestic elections but also globally, because a lot of those researchers serve as a central hub to connect global researchers to the American platforms. So, anyway, I wanted to, like, add that nuance because I think sometimes it’s a little in the weeds, but it’s actually going to be very important in this coming electoral cycle. And so, Layla, Tiffany, Sam, any of you want to speak to that?

HSU: Yeah, I can. I’ll keep it short this time. We’re thinking a lot about the election mega-cycle globally next year. And I think one of the things we’re thinking a lot about is how to consult sources who are familiar with these sorts of situations. You know, like you mentioned, Kat, a lot of researchers are under great political pressure nowadays. So that’s been a little bit difficult, just in terms of lawsuits and harassment and that sort of thing acting, like, as a deterrent. But generally, we’re relying on the expertise of sources. Layla mentioned the API issue. That’s going to be a big deal going forward. Researchers are really pushing to get better access to the platforms, because that’s where a lot of the disinformation and misinformation is going to arise.

DUFFY: Layla, Sam, anything else you want to add? Because I also want to give everyone a second to wrap up, so. 

GREGORY: I’ll give just thirty seconds, which is we run a rapid response task force that gets cases of deepfakes and suspected deepfakes. And a lot of what we’re seeing are the kind of dismissing of real claims using the cover of AI. And so I suspect in this coming electoral cycle, there are going to be a lot of politicians dismissing real footage with claims of AI, and putting the pressure on the public and the media to do the debunks on that. 

DUFFY: Yeah. The other—and I think another thing that is really up in the air right now in terms of electoral security and information manipulation is that we currently have, I think, at least five cases in front of the Supreme Court in the United States that could fundamentally impact the way that platforms make decisions around content. But one of them in particular is a case that is now, I think, Murthy v. Missouri, but was previously, I think, Missouri v. Biden. At the moment, there is—there is an injunction that has been stayed that prevents the Department of Homeland Security, the FBI, CISA, the sort of leading electoral security agencies in the United States, from engaging with leading social media platforms on questions around electoral misinformation or disinformation, as well as public health misinformation or disinformation.

And although there is an injunction, the realpolitik impact of that has been a very strong chilling effect on the U.S. government’s ability to engage with social media companies where they are seeing information operations occurring around the elections. And what’s notable about that is that the United States is now the only country in the world that can’t speak to these social media companies. Every other country in the world can engage with the U.S.-based social media platforms. But at the moment, the executive branch is sort of being enjoined from that. So—or is under threat of injunction, I should say more accurately. So that’s another wrinkle that I think is helpful to consider.

With that, I know we’re at time. And so if I could just ask each of you just, you know, thirty seconds, one minute, to just wrap up and leave our audience sort of with your final thoughts. 

MASHKOOR: Sure. I’ll go ahead. I think this also ties into the previous question a little bit, but I think the question of disinformation, how it impacts conflicts, elections, is closely tied to the question of content moderation and how platforms navigate that issue. And it’s a very difficult challenge. The efforts to moderate and remove disinformation must also protect legitimate speech. And so the difference between under-moderation, over-moderation, and effective moderation is very significant. And it’s more pronounced in extraordinary circumstances, like war, like elections. And threading that needle is difficult, but it must be done and it must be done precisely. 

And the one other just quick closing comment I want to make is I think another important part of this conversation is the preservation of content. There are a lot of allegations of war crimes on both sides. And we need to be discussing whether platforms bear a duty to facilitate archiving content so that when it comes time to examine these claims, we have the documentation and the footage required to make assessments.

DUFFY: Thank you.  

Sam. 

GREGORY: Yeah. I think we’re seeing the early lines of how synthetic media is going to play out. And we need to be investing in the responses. But I also want us to put this in the bigger picture of AI governance. A lot of the issues we’re talking about in synthetic media require us to think about this bigger pipeline that goes back to the AI models, questions of transparency and disclosure, that actually don’t just sit with the platforms. They sit all the way back in the pipeline of responsibility, to the people building the models and the tools. So I think that’s really important to put in that bigger frame.

DUFFY: Thank you. 

And Tiffany. 

HSU: So my cohort of misinformation/disinformation reporters and I are thinking a lot nowadays about motivation and momentum. Why is someone spreading a particular piece of information? How much is popularity or groupthink contributing to that? And I would just encourage everyone to also think along those lines when evaluating the authenticity or really the value of a piece of content that they come across that they’re suspicious about. 

DUFFY: Fantastic. Well, thank you all very much for joining us today. I’m grateful for your participation. I’m grateful for your many years of work in this space. I would like to thank all of our members and guests for joining today’s virtual meeting. Please note that the video and transcript of today’s meeting will be posted on CFR’s website in short order. And everyone who is a member of the Council, if you have any follow ups please feel free to reach out directly to me. You know where to find me. 

Thank you all so much for joining. Have a great day. Take care everyone. Bye. 

(END) 
