Webinar

Reporting on AI and the Future of Journalism

Tuesday, June 20, 2023
Speakers

Dex Hunter-Torricke
Head of Global Communications & Marketing, Google DeepMind

Benjamin Pimentel
Senior Technology Reporter, San Francisco Examiner

Introductory Remarks

Irina Faskianos
Vice President for National Program and Outreach, Council on Foreign Relations

Host

Carla Anne Robbins
Senior Fellow, Council on Foreign Relations

Dex Hunter-Torricke, head of global communications & marketing at Google DeepMind, discusses how AI technology could shape news reporting and the role of journalists, and Benjamin Pimentel, senior technology reporter at the San Francisco Examiner, discusses framing local stories on AI in the media. The webinar is hosted by Carla Anne Robbins, senior fellow at CFR and former deputy editorial page editor at the New York Times.

TRANSCRIPT

FASKIANOS: Thank you. Welcome to the Council on Foreign Relations Local Journalists Webinar. I am Irina Faskianos, vice president for the National Program and Outreach here at CFR.

CFR is an independent and nonpartisan membership organization, think tank, publisher, and educational institution focusing on U.S. foreign policy. CFR is also the publisher of Foreign Affairs magazine. As always, CFR takes no institutional positions on matters of policy.

This webinar is part of CFR’s Local Journalists Initiative, created to help you draw connections between the local issues you cover and national and international dynamics. Our program aims to put you in touch with CFR resources and expertise on international issues and provides a forum for sharing best practices.

Again, today’s discussion is on the record. The video and transcript will be posted on our website after the fact at CFR.org/localjournalists, and we will share the content after this webinar.

We are pleased to have Dex Hunter-Torricke, Benjamin Pimentel, and host Carla Anne Robbins to lead today’s discussion on “Reporting on AI and the Future of Journalism.” We’ve shared their bios with you, but I will highlight their credentials here.
Dex Hunter-Torricke is the head of global communications and marketing at Google DeepMind. He previously worked in communications for SpaceX, Meta, and the United Nations. He’s a New York Times bestselling ghostwriter and frequent public commentator on the social, political, and organizational challenges of technology.

Benjamin Pimentel is a senior technology reporter for the San Francisco Examiner covering Silicon Valley and the tech industry. He has previously written on technology for other outlets, including Protocol, Dow Jones Marketwatch, and Business Insider. He was also a metro news and technology reporter at the San Francisco Chronicle for fourteen years. And in 2022, he was named by Muck Rack as one of the top ten crypto journalists.

And finally, Carla Anne Robbins, our host, is a senior fellow for CFR—at CFR, excuse me. She is the faculty director of the Master of International Affairs Program and clinical professor of national security studies at Baruch College’s Marxe School of Public and International Affairs. Previously, she was deputy editorial page editor at the New York Times and chief diplomatic correspondent at the Wall Street Journal.

Welcome, all. Thank you for this timely discussion. I’m going to turn it now to Carla to start the conversation, and then we will turn to all of you for your questions and comments. So, Carla, take it away.

ROBBINS: Thank you so much, Irina. And thank you so much to you and your staff for setting this up, and to Dex and to Ben for joining us today.

You know, I am absolutely fascinated by this topic—fascinated as a journalist, fascinated as an academic. Yes, I spend a lot of time worrying whether my students are using AI to write their papers. So far, I don’t know.

So, as Irina said, Dex, Ben, and I will chat for about twenty-five minutes and then throw it open to you all for questions. But if you have something that occurs along the way, don’t hold back, and post it, and you know, we will get to you. And we really do want this to be a conversation.

So I’d like to start with Ben. I’m sure everyone here has already played with ChatGPT or Bard if they get off the waitlist. I’ve already needled Dex about this. You know, I asked ChatGPT, you know, what questions I should be asking you all today, and I found it sort of thin gruel but not a bad start. But, Ben, can you give us a quick summary of what’s new about this technology, generative AI, and why we need to be having this conversation today?

PIMENTEL: Yes. And thank you for having me.

AI has been around for a long time—since after the war, actually—but it’s only—you know, November 30, 2022, is a big day, an important date for this technology. That’s when ChatGPT was introduced. And it just exploded in terms of opening up new possibilities for the use of artificial intelligence and also a lot of business interest in it.

For journalists, of course, there has quickly been a debate on the use of ChatGPT for reporting and for running a news organization. And that’s become a more important debate given the revelations and disclosures from organizations like the AP and CNET, and recently even insiders saying that they’re going to be using AI for managing their paywall—in terms of deciding whether to offer a subscription to a reader or not.

For me personally, I think the technology has a lot of important uses in terms of making newsgathering and reporting more efficient and faster. For instance—I’m going to date myself—when I started my career in the U.S.—I’m from the Philippines—it was in June 1993. That was two months after the World Wide Web became public domain. That’s when websites started appearing. And around that time, whenever I was working nights—you know, that was before websites and before Twitter—to get a sense of what was going on in San Francisco, especially at night, I would have to call every police station, fire department, and hospital from Mendocino down to Santa Cruz. It’s boring. It’s a thankless job. But it actually helped me. But now you can do that with technology. I mean, you now have sites that can pull from the Twitter feed of the San Francisco Police Department or the San Francisco Fire Department to report, right, on what’s going on. And AI now creates the possibility of actually pulling that information and creating a news report that in the past I would have had to do myself—like a short 300-word report on, hey, Highway 80 is closed because of an accident. Now you can automate that.

The problem that’s become more prominent recently is the use of AI without disclosing it. I was recently on a panel where an editor who was very high on the technology, when we asked him, are you disclosing it on your site, said: Well, frankly, our readers don’t care. I disagree vehemently. If you’re going to use it, you have to disclose it. Like, if you are pulling information and creating reports on, you know, road conditions or a police action, you have to say that AI created it. And that goes even more so for bigger stories like features or, you know, New Yorker-type articles. I wouldn’t want to read a New Yorker article and not know that it was done by an AI or by a chatbot.

And then for me personally, I worry about what it means for young reporters, younger journalists, because they’re not going to go through what I went through, which in many ways is a good thing, right? You don’t have to call every police station in a region to get the information. You can pull that. You can use AI to do that. But when editors and writers talk about, oh, I can now write a headline better with AI, or write my lede and nut graf with AI, that’s worrisome because, for me, that’s not a problem for a journalist, right? Usually you go through that over and over again, and that’s how you get better. That’s how you become more critically minded. That’s how you become faster; I mean, even develop your own voice in writing a story. I’ll stop there.

ROBBINS: I think you’ve raised a lot of important questions which we will delve into some more.

But I want to go over to Dex. So, Dex, can you talk a little bit more about this technology and what makes it different from other artificial intelligence? I mean, it’s not like this is something that suddenly just we woke up one day, it was there. What makes generative AI different?

HUNTER-TORRICKE: Yeah. I mean, I think the thing about generative AI which, you know, has really, you know, wowed people has been the ability to generate content that seems new. And, obviously, how generative AI works—and we can talk much more about that—a lot of what it’s creating is, obviously, based on things that exist out there in the world already. And you know, the knowledge that it’s presenting, the content that it’s creating is something that can seem very new and unique, but, obviously, you know, is built on training from a lot of previous data.

I think when you experience a generative AI tool, you’re interacting with it in a very human kind of way—in a way that previous generations of technology haven’t necessarily—(audio break). You’re able to type in natural language prompts; and then you see on many generative AI tools, you know, the system thinking about how to answer that question; and then producing something very, very quickly. And it feels magical in a way that, you know, certainly—maybe I’m just very cynical having spent so long in the tech industry, but you know, certainly I don’t think lots of us feel about a lot of the tools that we take for granted. This feels qualitatively different from many of the current systems that we have. So I think because of that, you know, over the last year, as generative AI—(audio break)—starts to impact on a lot of different knowledge-type industries and professions.

And of course, you know, the media industry is, you know, one of those professions. I think, you know, lots of reporters and media organizations are obviously thinking not just how can I use generative AI and other AI tools as part of my work today, but what does this really mean for the profession? What does this mean for the industry? What does this mean for the economics over the long term? And those are questions that, you know, I think we’re all still trying to figure out, to an extent.

ROBBINS: So I want to ask you—you know, let’s talk about the good for a while, and then we’ll get into the bad. So, you know, I just read a piece in Nieman Reports, which we’ll share with everybody, that described how the Finnish broadcaster Yle is using AI to translate stories into Ukrainian, because the country now has tens of thousands of people displaced by the war. The bad news, at least for me, is BuzzFeed started out using AI to write its quizzes, which I personally didn’t care much about, and said that’s all we’re going to use it for. But then it took a nanosecond and it moved on to travel stories. Now, as a journalist, I’m worried—I mean, as it is, the business is really tight—worried about displacement. And also about—you know, we hear all sorts of things. But we can get into the bad in a minute.

You know, if you were going to make a list of things that didn’t make you nervous, that, you know, Bard could do, that ChatGPT could do, that makes it—you know, that you look at generative AI and you say, well, it’s a calculator. You know, we all used to say, oh my God, you know, nobody’s ever going to be able to do a square root again. And now everybody uses a calculator, and nobody sits around worrying about that. So I—just a very quick list. You know, Ben, you’ve already talked about, you know, pulling the feed on traffic and all of that. You know, give us a few things that you really think—as long as we disclose—that you think that this would really be good, particularly for, you know, cash-strapped newsrooms, so that we could free people up to do better work? And then, Dex, I’m going to ask you the same question.

PIMENTEL: City council meetings. I mean, I started my career—

ROBBINS: You’re going for the boring first.

PIMENTEL: Right, right. School board meetings. Yeah, it’s boring, right? That’s where you start out. That’s where I started out. And, if—I mean, I’m sort of torn on this, because you can use ChatGPT or generative AI to maybe present the agenda, right? The agenda for the week’s meeting in a readable, more easily digestible manner, instead of having people go to the website and try to make sense of it. And even the minutes of the meeting, right, to present it in a way that here’s what happened. Here’s what they decided.

I actually look back—you know, like you said, and like I said, it’s boring. But it’s valuable. For me, the experience of going through that process and figuring out, OK, what did they decide? Trying to reach out to the councilman—OK, what did you mean?—I mean, to go deeper, right? But at the same time, given the budget cuts, I would accept a newsroom that decides, OK, we’re going to use ChatGPT to do summaries of these things, but we’re going to disclose it. I think that’s perfectly acceptable—especially for local news, which has been battered since the rise of the web.

I mean, I know this because I worked for the Chronicle and I worked in bureaus in the past. So that’s one positive thing, aside from, you know, traffic hazard warnings—things that may take a human reporter more time. If you automate them, maybe it’s better. It’s a good service to the community.

ROBBINS: Dex, you have additions to the positive list? Because we’re going to go to the negative next. 

HUNTER-TORRICKE: Yeah, absolutely. I mean, look, I think that category of stuff which, you know, Ben might talk about as boring, you know, but certainly, I would say, is useful data that just takes a bunch of time to analyze and to go through, that’s where AI could be really, really valuable. You know, providing, you know, analysis, surfacing that data. Providing much broader context for the kinds of stories that reporters are producing. Like, that’s where I see systems that are able to parse through a lot of data very quickly being incredibly valuable. You know, that’s going to be something that’s incredibly useful for identifying local patterns, trends of interest that you can then explore further in more stories.

So I think that’s all a really positive piece. You know, the other piece is just around, you know, exposing the content that local media is producing to a much wider audience. And there, you know, I could see potential applications where, you know, AI is, you know, able to better transcribe and translate local news. You know, you mentioned the Ukrainian example, but certainly I think there’s a lot of, you know, other examples where outlets are already using translation technology to expose their content to a much broader and global audience. I think that’s one piece. You know, also thinking about how do you make information more easily accessible so that, you know, this content then has higher online visibility. You know, every outlet is, you know, desperately trying to, you know, engage its readers and expose, you know, a new set of readers to their content. So I think there’s a bunch of, you know, angles there as well.

ROBBINS: So let’s go on to the negative, and then we’re going to pass it over because I’m sure there are lots of questions from the group. So, you know, we’ve all read about the concerns about AI and disinformation. There have been two recent reports, one by NewsGuard and another by ShadowDragon, that found AI-created sites and AI-created content filled with fabricated events, hoaxes, dangerous medical advice. So you’ve got that on one hand. And there was already, you know, an enormous amount of disinformation and bias out there. You know, how does AI make this worse? And do we have any sense of how much worse? Is it just because it can shovel a lot more manure faster? Or is there something about it that makes this different? Ben?

PIMENTEL: I mean, as Dex said, generative AI allows you to create content that looks real, like it was created by humans. That’s sort of the main thing that really changes everything. We’ve been living with AI for a number of years—Siri, and Cortana, and all that. But when you listen to them, you know that it’s not human, right? Eventually you will have technologies that will sound human, and you can be deceived by it. And that’s where the concern about disinformation comes up. 

I mean, hallucinations is what they call it—in terms of what they’re going to present to you. I don’t know if you’ve ever searched yourself on ChatGPT and it spits out a profile that’s really inaccurate, right? You went to this university, or whatever. So that’s a problem. And the thing about that, though, is the more data it consumes, the better it’ll get. That’s sort of the worrisome, but at the same time positive, thing. Eventually all these things will be fixed. But at the same time, you don’t know what kind of data they’re using for these different models. And that’s going to be a major concern.

In terms of the negative—I mean, like I said, the training of journalists is a concern to me. I mentioned certain things that are boring, but I also wonder, what happens to journalists if they don’t go through that? If they skip straight to a certain level because, hey, ChatGPT can take care of that, so you don’t have to cover a city council meeting? Which, for me, was a positive experience. I mean, I hated it while I was doing it, but looking back it was good. I learned how to talk to a city politician. I learned to pick up on whether he’s lying to me or not. And that enabled me to create stories later on in my career that are more analytical, you know, more nuanced, more sensitive to the needs of my readership.

Another thing is, in journalism we know there is no such thing as absolute neutrality, right? Even, and especially, in analytical stories, your point of view will come up. And that brings up the question, OK, what point of view are we presenting if you have ChatGPT write those stories? Especially the more analytical ones, like features—a longer piece that delves into a certain problem in the community and tries to explore it. I worry that you can’t let ChatGPT or an AI program do that without questioning, OK, what’s the data that is the basis of this analysis, of this perspective? I’ll stop there.

ROBBINS: So, Dex, jump in anywhere on this, but I do have a very specific technical question. Not that I want to get into this business, but, you know, I’ve written a lot in the past about disinformation. And it’s one thing for hallucinations, where they’re just working with garbage in so you get garbage out—and you certainly saw that in the beginning with Wikipedia, which has gotten better with crowdsourcing over time. But from my understanding of these reports from NewsGuard and ShadowDragon, there were people who were malevolently using AI to push out bad information. So how is generative AI making that easier than what we just had before?

HUNTER-TORRICKE: I mean, I think the main challenge here is around how compelling a lot of this content seems, compared to what came before, right? So, you know—you know, I think Ben spoke to this—you know, a lot of this stuff isn’t exactly news. AI itself has been around for a long time. And we then had manifestations of these challenges for quite a long time with the entire generation of social media technology. So like deepfakes, like that’s something we’ve been talking about for years. The thing about deepfakes which made it such an interesting debate is that for years every time we talked about deepfakes, everyone knew exactly what a deepfake was because they were so unconvincing. You know—(audio break)—exactly what was a deepfake and what wasn’t. Now, it’s very different because of the quality of the experience. 

So, you know, a few weeks ago you may have seen there was a picture that was trending on Twitter of the pope wearing a Balenciaga jacket. And for about twenty-four hours, the internet was absolutely convinced that the pope was rocking this $5,000 jacket that was, like, perfectly color coordinated. And, you know, it was a sort of—you know, it was a funny moment. And of course, it was revealed that it had been generated using an AI. So no harm done, I guess. But, like, it was representative of how—(audio break)—are being shared. Potentially it could have very serious implications, you know, when they are used by bad actors, you know, as you described, you know, to do things that are much more nefarious than simply, you know, sharing a funny meme.

One piece of research I saw recently which I thought was interesting, and it spoke to what some of these challenges might look like over time—I believe this was from Lancaster University—compared how trustworthy AI-generated faces of people seemed relative to the faces of real humans. And it found that, amongst the folks they surveyed as part of this research, the faces of AI-generated humans were rated 8 percent more trustworthy than actual humans. And, you know, I think, again, it’s a number, right, that, you know, I think a lot of people laugh at because, you know, we think, oh, well, you know, that’s kind of funny and—(audio break)—of course, I can tell the difference between humans and AI-generated people. You know, I’m—(audio break)—were proved wrong when they actually tried to detect the differences themselves.

So I do think there’s going to be an enormous number of challenges that we will face over the coming years. These are issues that, you know, certainly on the industry side, you know, I think lots of us are taking very seriously, certainly governments and regulators are looking at. Part of the solution will have to be other technologies that can help us parse the difference between AI-generated content and stuff that isn’t. And then part of that, I think, will be human solutions. And in fact, that may actually be the largest piece, because, of course, what is driving disinformation are a bunch of societal issues. And it’s not always going to be as simple as saying, oh, another piece of technology will fix that.

ROBBINS: So I want to turn this over to the group. And I’ve got lots more questions, but I’m sure the group has—they’re journalists. They’ve got lots of questions. So the first question is from Phoebe Petrovic. Phoebe, can—would you like to ask your question yourself? Or I can read it, but I always love it when people ask their own questions.

Q: Oh, OK. Hey, everyone.

So, I was curious about how we might—just given all the reporting that’s been done about ChatGPT and other AI models hallucinating information, faking citations to Washington Post articles that don’t exist, making up research article citations that do not exist—how can we ethically or seriously recommend that we use generative AI for newsgathering purposes? It seems like you would just have to fact-check everything really closely, and then you might as well have done the job to begin with and not gotten into all these ethical implications of, like, using software that is potentially going to put a lot of us out of business.

ROBBINS: And Phoebe, are you—you’re at Wisconsin Watch, right?

Q: Mmm hmm. And we have a policy that we do not—at this point, that none of us are going to be using AI for any of our newsgathering purposes. And so that’s where we are right now. But I just wonder about the considerable hallucination aspect for newsgathering, when you’re supposed to be gathering the truth.

ROBBINS: Dex, do you want to talk a little bit about hallucinations?

HUNTER-TORRICKE: Yeah, absolutely. So I think, you know, Phoebe has hit the nail on the head, right? Like, that there are a bunch of, you know, issues right now with existing generative AI technology. You do have to fact-check and proof absolutely everything. So it is—it is something that—you know, it won’t necessarily save you lots of time if you’re looking to just generate, you know, content. I think there are two pieces here which, you know, I think I would focus on. 

One is, obviously, the technology is advancing rapidly. So these are the kinds of issues which I expect we will see addressed in future iterations of the technology by more sophisticated models and tools. So absolutely today you’ve got all those challenges. That won’t necessarily be the case over the coming years. I think the second piece really is around thinking, what’s the value of me experimenting with this technology now as a journalist and as an organization? It isn’t necessarily to think, oh, I can go and, you know, replace a bunch of the heavy lifting I have to do right now as a reporter. I think it’s more about becoming fluent with the things that generative AI might conceivably be able to do, and how they can integrate into the kind of work that you’re doing.

And I expect a lot of what reporters and organizations generally will use generative AI for over the coming years will actually be doing some of the things that I talked about, and that Ben talked about. You know, it’s corralling data. It’s doing analysis. It’s being more of a researcher rather than a co-writer, or entirely taking over that writing. I really see it as something that’s additive and will really augment the kind of work that reporters and writers are doing, rather than replacing it. So if you look at it from that context and, you know, obviously, you know, it does depend on you experimenting to see what are all the different applications in your work, then I think that might lead to very different outcomes.

ROBBINS: So we have another question, and we’ll just move on to that. And of course, Ben, you can answer any question you want at any time. So—

PIMENTEL: Can I add something on that? It’s almost like the way the web has changed reporting. In the past, like, I covered business. To find out how many employees a company has or when it was founded, I would have to call the PR department or the media rep. Now I can just go quickly to the website, where they have all the facts about the company. But even so, I still double-check whether that’s updated information. I even go to the SEC filings to make sure. So I see it as that kind of a tool, the way the web is—or, like, when you see something on Wikipedia, you do not use that as a source, right? You use that as a starting point to find other sources.

ROBBINS: So Charles Robinson from Maryland Public Television. Charles, do you want to ask your question?

Q: Sure. First of all, gentlemen, appreciate this.

I’m working on a radio show on ChatGPT and AI. And one of the questions that I’ve been watching in this process is the inability of AI and ChatGPT to get the local nuances of a subject matter, specifically reporting on minority communities. And, Ben, I know you being out in San Francisco, there’s certain colloquialisms in Filipino culture that I wouldn’t get if I didn’t know it. Whereas, like, to give you an example, there’s been a move to kind of, like, homogenize everybody as opposed to getting the colloquialisms, the gestures, and all of that. And I can tell you, as a Black reporter, you know, it’s the reason why I go into the field because you can’t get it if all I do is read whatever someone has generated out there. Help me understand. Because, I’m going to tell you, I write a specific blog on Black politics. And I’m going to tell you, I’m hoping that ChatGPT is not watching me to try and figure out what Black politics is.

ROBBINS: Ben.

PIMENTEL: I mean, I agree. I mean, when I started my career, the best—and I still believe this—the best interviews are face-to-face interviews, for me. We get more information on how people react, how people talk, how they interact with their surroundings. Usually it’s harder to do that if you’re, you know, doing a lot of things. But whenever I have the opportunity to report on—I mean, I used to cover Asian American affairs in San Francisco. You can’t do that from a phone or a website. You have to go out into the community. And I cover business now, which is more—you know, I can do a lot of it by Zoom. But still, if I’m profiling a CEO, I’d rather—it’d be great if I could meet the person so that I can read his body language, he can react to me, and all that.

In terms of the nuances, I agree totally. I mean, it’s possible that ChatGPT can—I mean, as we talked about—what’s impressive and troubling about this technology is it can evolve to a point where it can mimic a lot of these things. And for journalism, that’s an issue for us to think about because, again, how do you deal with a program that’s able to pretend that it’s, you know, writing as a Black person, or as a Filipino, or as an Asian American? Which, based on the technology, eventually it can. But do we want that kind of reporting and journalism that’s not based on more human interactions?

ROBBINS: So thank you for that. So Justin Kerr who’s the publisher of the McKinley Park News—Justin, do you want to ask your question?

Q: Yes. Yes. Thank you. Can folks hear me OK?

ROBBINS: Absolutely.

Q: OK. Great. So I publish the McKinley Park News, which is, I call it, a micro-local news outlet, focusing on a single neighborhood in Chicago. And it’s every beat in the neighborhood—crime, education, events, everything else. And it’s all original content. I mean, it’s really all stuff that you won’t find anywhere else on the internet, because it’s so local and, you know, there’s news deserts everywhere. A handful of weeks ago, I discovered through a third party that seemingly the entirety of my website had been scraped and included in these large language models that are used to power ChatGPT, all of these AI services, et cetera. 

Now, this is in spite of the fact that I have a terms of service clearly linked up on every page of my website that expressly says: Here are the conditions that anyone is allowed to access and use this website—which is, you know, for news consumers, and no other purpose. And I also list a bunch of expressly prohibited things that, you know, you cannot access or use our website for. One of those things is to inform any large language model, algorithm, machine learning process, et cetera, et cetera, et cetera. 

Despite this, everything that I have done has been taken from me and put into these large language models that are then used in interfaces that I see absolutely no benefit from—interfaces and services. So when someone interacts with the AI chat, they’re going to get—you know, maybe they ask something about the McKinley Park neighborhood of Chicago. They’re not—you know, we’re going to be the only source that they have for any sort of realistic or accurate answer. You know, and when someone interacts with a chat, I don’t get a link, I don’t get any attention, I don’t get a reference. I don’t get anything from that. 

Not only that, these companies are licensing that capability to third parties. So any third party could go and use my expertise and content to create whatever they wanted, you know, leveraging what I do. As a local small news publisher, I have absolutely no motivation or reason to try to publish local news, because everything will be stolen from me and used in competing interfaces and services that I will never get a piece of. Not only that, this—

ROBBINS: Justin, we get—we get the—we get the point.

Q: I guess I’m mad because you guys sit up here and you’re using products and services, recommending products and services without the—without a single talk about provenance, where the information comes from. ChatGPT doesn’t have a license to my stuff. Neither do you.

ROBBINS: OK.

Q: So please stop stealing from me and other local news outlets. That’s—and how am I supposed to—my question is, how am I supposed to operate if everything is being stolen from me? Thank you very much.

ROBBINS: And this is a—it’s an important question. And it’s an important question, obviously, for a very small publisher. But it’s also an important question for a big publisher. I mean, Robert Thomson from News Corp is raising this question as well. And we saw what the internet did to the news business and how devastating it’s been. So, you know, it’s life and death—life and death for a very small publisher, but it’s very much life and death for big publishers as well. So, Dex, this goes over to you.

HUNTER-TORRICKE: Yeah, sure. I mean, I think—you know, obviously I can’t comment on any, you know, specific website or, you know, terms and conditions on a website. You know, I think, you know, from the DeepMind perspective, I think we would say that, you know, we believe that training large language models using open web content, you know, creates huge value for users and the media industry. You know, it leads to the creation of more innovative technologies that will then end up getting used by the media, by users, you know, to connect with, you know, stories and content. So actually, I think I would sort of disagree with that premise.

I think the other piece, right, is there is obviously a lot of debate, you know, between different, you know, interests and, you know, between different industries over what has been the impact of the internet, you know, on, you know, the news industry, on the economics of it. You know, I think, you know, we would say that, you know, access to things like Google News and Google Search has actually been incredibly powerful for, you know, the media industry. You know, there’s twenty-four, you know, billion visits to, you know, local news outlets happening every month through Google Search and Google News. You know, there’s billions of dollars in ad revenue being generated by the media industry, you know, through having access to those platforms.

You know, I think access to AI technologies will create similar opportunities for growth and innovation, but it’s certainly something which I think, you know, we’re very, very sensitive to, you know, what will be the impacts on the industry. Google has been working very, very closely with a lot of local news outlets and news associations, you know, over the years. We really want to have a strong, sustainable news ecosystem. That’s in all of our interest. So it’s something that we’re going to be keeping a very close eye on as AI technology continues to evolve.

ROBBINS: So, other than setting up a paywall, how do news organizations, you know, protect themselves? And I say this as someone who sat on the digital strategy committee at the New York Times that made the decision to put up a paywall, because that was the only way the paper was going to survive. So, you know, yes, Justin, I understand that paywalls or logins kill your advertising revenue potential. But I am—yes, and we had that debate as well. And I understand the difference between your life and the life of the New York Times. Nevertheless, Justin raises a very basic question there. Is there any other way to opt out of the system? I mean, that’s the question that he’s asking, Dex. Is there?

HUNTER-TORRICKE: Well, you know, I think what that system is, right, is still being determined. Generative AI is, you know, in its infancy. We obviously think it’s, you know, incredibly exciting, and it’s something that, you know, all of us—(audio break)—today to talk about it. But the technology is still evolving. What these models will look like, including what the regulatory model will look like in different jurisdictions, that is something that is shifting very, very quickly. And, you know, these are exactly the sorts of questions, you know, that we as an industry—(audio break)—is a piece which, you know, I’m sure the media industry will also have a point of view on these things. 

But, in a way, it’s sort of a difficult one to answer. And I’m not deliberately trying to be evasive here with a whole set of reporters. You know, we don’t yet know what the full impacts really will be, with some of the AI technologies yet to be invented, for example. So this is something where it’s hard to say definitively, like, this is the model that is going to produce the greatest value either for publishers or for the industry or for society, because we need to actually figure out how that technology is going to evolve, and then have a conversation about this. And different, you know, communities, different markets around the world, will also have very different views on what’s the right way, you know, to protect the media industry while also ensuring that we do continue to innovate. So that’s really how I’d answer at this stage.

ROBBINS: So let’s move on to Amy Maxmen, who is the CFR Murrow fellow. Amy, would you like to ask your question?

Q: Yeah. Hi. Can you hear me?

ROBBINS: Yes.

Q: OK, great.

So I guess my question actually builds on, you know, the discussion so far. And part of my thought about a lot of the discussion here and everywhere else is about, like, how AI could be helpful or hurtful in journalism. And I kind of worry how much that discussion is a bit of a distraction. Because, I guess, I have to feel like the big use of AI for publishers is to save money. And that could be by cutting salaries further for journalists, and cutting full-time jobs that have benefits with them. Something that kind of stuck with me was that I heard another talk where they said the main use of AI in health care is in hospital billing departments, to deny claims. At least, that’s what I heard. So it kind of reminds me, you know, where is this going? This is going to be a way for administrators and publishers to further cut costs.

So I guess my point is, we would lose a lot if we cut journalists and, you know, cut editors, who really are needed to make sure that the AI writing isn’t just super vague and unclear. So I would think the conversation might need to shift away from the good and the bad of AI, to actually, like, can we figure out how to fund journalists still, so that they use AI like a tool, and then also to make sure that publishers aren’t just using it to cut costs, which would be short-sighted. Can you figure out ways to make sure that, you know, journalists are actually maybe paid for their work, which actually is providing the raw material for AI? Basically, it’s more around kind of labor issues than around, like, is AI good or bad?

HUNTER-TORRICKE: I think Amy actually raises, you know, a really important, you know, question about how we think conceptually about solving these issues, right? I actually really agree that it’s not really about whether AI is good or bad. That’s part of the conversation and, like, what are the impacts? But this is a conversation that’s about the future of journalism. You know, when social media came along, right, there were a lot of people who said, oh, obviously media organizations need to adapt to the arrival of social media platforms and algorithms by converting all of their content into stuff that’s really short form and designed to go viral. 

And, you know, that’s where you had—I mean, without naming any outlets—you had a bunch of stuff that was kind of clickbaity. And what we actually saw is that, yeah, that engaged people to a certain extent, but actually people got sick of that stuff, like, pretty quickly. And the pendulum swung enormously, and actually you saw there was a huge surge in people looking for quality, long-form, investigative reporting. And, you know, I think quality journalism has never been in so much demand. So actually, you know, even though you might have thought the technology incentivized and would guide the industry to one path, actually it was a very different set of outcomes that really succeeded in that world.

And so I think when we look at the possibilities presented by technology, it’s not as clear-cut as saying, like, this is the way the ecosystem’s going to go, or even that we want it to go that way. I think we need to talk about what exactly are the principles of good journalism at this stage, what kind of environment do we want to have, and then figure out how to make the technology support that.

ROBBINS: So, Ben, what do you think in your newsroom? I mean, are the bosses, you know, threatening to replace a third of the—you know, a third of the staff with our robot overlords? I promised Dex I would only say that once. Do you have a guild that’s, you know, negotiating terms? Or you guys are—no guild? What’s the conversation like? And what are you—you know, what are the owners saying?

PIMENTEL: I mean, we are so small. You know, the Examiner is more than 150 years old, but it’s being rebuilt. It’s essentially just a two-year-old organization. But I think the point is—what’s striking is that the use of ChatGPT and generative AI has emerged at a time when the media is still figuring out the business model. Like I said, I lived through the shift from the pre-web world to the World Wide Web world and after, which devastated the newspaper industry. I mean, I started in ’93, the year that websites started to emerge. Within a decade, my newspaper back then was in trouble. And we’re still figuring it out. Dex mentioned the use of social media. That’s what led to the rise of BuzzFeed News, which is having problems now.

And there are still efforts to figure out, OK, how do we—how do we make this a viable business model? The New York Times and more established newspapers have already figured out, OK, a paywall works. And that works for them because they’re established, they’re credible, and there are people who are willing to pay to get that information. So that’s an important point. But for others, the nonprofit model is also becoming a viable alternative in many cases. Like, in San Francisco there’s an outlet called Mission Local, actually founded by a professor of mine at Berkeley. Started out as a school project, and now it’s a nonprofit model, covering the Mission in a very good way.

And you have other experiments. And what’s interesting is, of course, ChatGPT will definitely be used—you know, as you said—at a time when there are massive cuts in newsrooms; they’re already signaling that they’re going to use it. And I hope that they use it in a responsible way, the way I explained earlier. There are important uses for it, for information that’s very beneficial to the community and that can be automated. But beyond that, that’s the problem. I think that’s the discussion that the industry is still having.

ROBBINS: So, thank you. And we have a lot of questions. So I’m going to ask—I’m going to go through them quickly. Dan MacLeod from the Bangor Daily News—Dan, do you want to ask your question? And I think I want to turn it on you, which is why would you use it, you know, given how committed you are and your value proposition, indeed, is local and, you know, having a direct relationship between reporters and local people?

Q: Hi. Yeah.

Yeah, I mean, that’s really my question. We have not started using it here. And the big kind of question for us is that the thing that, you know, we pride ourselves on, the thing our audience tells us that it values about us, is that we understand the communities we serve, we’re in them, you know, people recognize the reporters, they have, like, a pretty close connection with us. But this also seems to be, like, one of those technologies that is going to do to journalism what the internet did twenty-five years ago. And it’s sort of, like, either figure it out or, you know, get swept up. Is there anything that local newsrooms can do to leverage it in a way that maintains its—this is a big question—but sort of maintains its sort of core values with its audience? 

My second question is that a lot of what this seems to be able to do, from what I’ve seen so far, promises to cut time on minor tasks. But is there anything that it can do better than, like, what a reporter could do? You know, like a reporter can also back—like, you know, research background information. AI says, like, we can do it faster and it saves you that time. Is there anything it can do sort of better?

ROBBINS: Either of you? 

HUNTER-TORRICKE: Yeah, so—yeah, go ahead. Sorry, go ahead, Ben.

PIMENTEL: Go ahead. Go ahead, please.

HUNTER-TORRICKE: Sure. So one example, right? You know, I’ve seen—(audio break)—using AI to go and look through databases of sport league competitions. So, you know, one, you know, kind of simple example is looking at how sport teams have been doing in local communities, and then working out, by interpreting the data, what are interesting trends of sport team performance. So you find out there’s a local team that just, you know, won top of its league, and they’ve never won, you know, in thirty years. Suddenly, like, that’s an interesting nugget that can then be developed into a story. You’ve turned an AI into something that’s actually generating interesting angles for writing a story. It doesn’t replace the need for human reporters to go and do all of that work to turn it into something that actually is going to be interesting enough that people want to read it and share it, but it’s something where it is additive to the work of an existing human newsroom.

And I honestly think, like, that is the piece that I’m particularly excited about. You know, I think coming from the AI industry and looking at where the technology is going, I don’t see this as something that’s here to replace all of the work that human reporters are doing, or even a large part of it. Because being a journalist and, you know, delivering the kind of value that a media organization delivers is infinitely more complex, actually, than the stuff that AI can deliver today, and certainly for the foreseeable future. Journalists do something that’s really, really important, which is they build relationships with sources, they have a ton of expertise, and that local context and understanding of a community. Things that AI is, frankly, just not very good at doing right now. So I think the way to think about AI is as a tool to support and enhance the work that you’re doing, rather than, oh, this is something that can simply automate away a bunch of it.

ROBBINS: So let’s—Lici Beveridge. Lici is with the Hattiesburg American. Lici, do you want to ask your question?

Q: Sure. Hi. I am a full-time reporter and actually just started grad school. And the main focus of what I want to study is how to incorporate artificial intelligence into journalism and make it work for everybody, because it’s not going to go away. So we have to figure out how to use it responsibly.

And I was just—this question is more for Benjamin. Is there any sort of—I guess, like, a policy or kind of rules or something for how you guys approach the use of, like, ChatGPT, or whatever, in your reporting? I mean, do you have, like, a—we have to make sure we disclose that the information was gathered from this, or that sort of thing? Because I think, ethically, that’s how we’re going to get to use this in a way that will be accepted by not just journalists, but by the communities—our communities.

PIMENTEL: Yes. Definitely. I think that’s the basic policy that I would recommend and that’s been recommended by others. You disclose it. That if you’re using it in general, and maybe on specific stories. And just picking up on what Dex said, it can be useful for—we used to call it computer-assisted reporting, right? That’s what the web and computers made easier, right? Excel files, in terms of processing and crunching data, and all that, and looking for information. 

What I worry about, and what I hope doesn’t happen—to follow up on Dex’s example—is, you know, it’s a sports event, and you want to get some historical perspective, and maybe you get the former record holders for a specific school, or whatever. And that’s good. ChatGPT or the web helps you find that out. And then instead of finding those people and maybe doing an interview for profiles or your perspective, you could just ask ChatGPT, can you find their Instagram feed or Twitter feed, and see what they’ve said? And let the reporting end there. I mean, I can imagine young reporters will be tempted to do that because it’s easier, right? Instead, as Dex said, it’s a tool—a step towards getting more information. And the best information still comes from going face-to-face with sources, or people, or a community.

Q: Yeah. Because I know, like, I was actually the digital editor for about fifteen years, you know, when social media was just starting to come out. And everything was just, you know, dive into this, dive into that, without thinking of the impact later on. And as we quickly discovered—you know, we live in a place where there’s a lot of hurricanes and tornadoes—we had people creating fake pictures of hurricanes and tornadoes. And, you know, they were submitting them as, you know, user-generated content, which it wasn’t. It was all fake stuff. So, you know, we have to—I just kind of want to, like, be able to jump in, but do it with a lot of caution.

PIMENTEL: Definitely, yes.

ROBBINS: Well, you know, I thought Ben’s point about Wikipedia is a really interesting one, which is any reporter who would use Wikipedia as their sole source for a story, rather than using it as a lead source, you know, I’d fire them. But it is an interesting notion—do you use this as a lead source, knowing that it makes errors, knowing that it’s lazy, knowing that it’s just a start? And that’s not even ethics. That’s just the basic sort of rule that we also have inside the newsroom. Which then to me raises a question for Dex, which is, do we have any sense of how often—you know, this term of hallucinations—I mean, how often does it make mistakes right now? Do you have a sense of how often Bard makes mistakes? Certainly everybody has stories of fake sources that have shown up, errors that have shown up. Do we have a sense of how reliable this is? And, like, my Wikipedia page has errors in it, and I’ve never even fixed them because I find it faintly bemusing, because they’re really minor errors.

HUNTER-TORRICKE: Right, yeah. I mean, I don’t have any data points to hand. Absolutely it is something that we’re aware of. I expect that this is something that future iterations of the technology will continue to tackle and to, you know, diminish that problem. But, you know, going back to this bigger point, right, which is at what point can you trust this, I think you can trust a lot of things you find there. But you do have to verify them. And certainly, you know, as journalists, as media organizations, I mean, there’s a much larger responsibility to do that than for folks, you know, who may be looking at these experimental tools right now and using them, you know, just to share for, you know, fun and amusement. You know, the kinds of things that you’re sharing are going to really have a huge societal impact.

I do think when you look at the evolution of tools like Wikipedia, though, we will go through this trajectory where, you know, at the beginning a lot of folks will think, oh, this is really, like, not that reputable, because it’s something that’s been generated in a very novel way. And there are other more established, you know, formats where you would expect there to be a greater level of fact-checking, a greater level of verification. So, you know, obviously, like, the established incumbent to compare against Wikipedia back in the day was something like Encyclopedia Britannica. And then a moment was reached, you know, several years into the development of Wikipedia, where research was finding that on average Wikipedia had fewer errors in it than Encyclopedia Britannica.

So we will absolutely see a moment come when AI will get more sophisticated, and we will see the content generally being good enough and with more minor errors which, you know, again, technology will continue to diminish over time. And at that point, I think then it will be a very, very different proposition than what we have today, where absolutely, you know, all of these tools are generally labeled with massive caveats and disclaimers warning that they’re experimental and that they’re not, you know, at the stage where you can simply trust everything that’s been put through them.

ROBBINS: So Patrick McCloskey who is the editor-in-chief of the Dakota Digital Review—Patrick, would you like to ask your question? We only have a few minutes left. No, Patrick is—may not still be with us.

So we actually only have three minutes left. So do you guys want to sum up? Because we actually have other questions, but they look long and complicated. So would you like to share any thoughts? Or maybe I will just ask you a really scary question, which is: We’re talking about this like it is Wikipedia or like it is a calculator. And that, yes, it’s going to have to be fixed, and we have to be careful, and we have to disclose, and we’re being very ethical about it. But major leaders of the tech industry have put out a letter that said: Stop. Pause. Think about this before it destroys society. Is there some gap here that we need to be thinking about? I mean, they are raising some really, really frightening notions. And are we perhaps missing a point here if we’re really just talking about this as, well, it’ll perfect itself? Dex, do you want to go first, and then we’ll have Ben finish up?

HUNTER-TORRICKE: Yeah. So, I mean, the CEO of Google DeepMind signed a letter recently, I think this might be one of the several letters that you referenced, you know, which called on folks to take the potential extinction risks associated with AI as seriously as other major global existential risks. So, for example, the threat of nuclear war, or a global pandemic. And that doesn’t mean at all that we think that that is the most likely scenario. You know, we absolutely believe in the positive value of AI for society, or we wouldn’t be building it. 

If the technology continues to mature and evolve in the way that we expect it will, with our understanding of what is coming, it is something that we should certainly take seriously, even if it’s a very small possibility. With any technology that’s this powerful, we have to apply the proportionality principle and ensure that we’re mitigating that risk. If we only start preparing for those risks, you know, when they’re apparent, it will probably be too late at that point.

So absolutely I think it’s important to contextualize this, and not to induce panic or to say this is something that we think is likely to happen. But it’s something that we absolutely are keeping an eye on amongst very, very long-term challenges that we do need to take seriously.

ROBBINS: So, Ben, do you have a sense that—I mean, I have a sense, and I don’t cover this. I just read about it. But I have the sense that these industries are saying, yes, we’re conscious that the world could end, but, you know, we’d sort of like other people to make the decision for us. You know, regulate us, please. Tell us what to do while we continue to race and develop this technology. Is there something more? Are they—can we trust these industries to deal with this?

PIMENTEL: I mean, the fact that they used the phrase “extinction risk” is really, I think, very important. That tells me that even the CEOs of Google DeepMind, and OpenAI, and Microsoft don’t know what’s up ahead. They don’t know how this technology is going to evolve. And of course, yes, there will be people in these companies, including Dex, who will try to ensure that we have guardrails, and policies, and all that. My problem is, it’s now a competitive landscape. It becomes part of the new competition in tech. And when you have that kind of competition, things get missed, or shortcuts are taken. We’ve seen that over and over again. And that’s why you can’t leave this to these companies, not even to the regulators. I mean, the communities have to be involved in the conversations.

Like, one risk of AI—it goes beyond journalism—that I’ve heard of, which for me is one of the most troubling, is the use of AI for persuasion on people who don’t even know that they’re communicating with an AI system. The use of AI to, in real time, figure out how to sell you something or convince you about a political campaign. And, in real time, figure out how you’re reacting and adjust, because they have the data; they know that if you say something or respond in a certain way, or you have a certain kind of facial expression, they know how to respond. That, for me, is even scarier. That’s why the European Union just passed what could become law—the AI Act—which would ban that: the use of AI for emotion recognition and manipulation, in essence.

The problem, again, is this has become a big wave in tech. Companies are scrambling. VCs are scrambling to fund the startups or even existing companies with mature programs for AI. And on the other hand, you have the regulators and the concerns about the fears of what is the impact. Who’s going to win? I mean, which thread is going to prevail? That’s the big question.

ROBBINS: So this has been a fabulous conversation. And we will invite you back probably—you know, things are moving so fast—maybe in six months. Which is a lifetime in technology. I just really want to thank Dex Hunter-Torricke and Ben Pimentel. It’s a fabulous conversation. And everybody who asked questions. And sorry we didn’t get to all of them, but it shows you how fabulous it was. And we’ll do this again soon. I hope we can get you back. And over to Irina.

FASKIANOS: Thank you for that. Thank you, Carla, Dex, and Ben. Again, I’m sorry we couldn’t get to all your questions. We will send a link to this webinar. We will also send the link to the Nieman Reports piece that Carla referenced at the top of this.

You can follow Dex Hunter-Torricke on Twitter at @dexbarton, and Benjamin Pimentel at @benpimentel. As always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest developments and analysis on international trends and how they are affecting the United States. And of course, do email us to share suggestions for future webinars. You can reach us at [email protected].

So, again, thank you all for being with us and to our speakers and moderator. Have a good day.

ROBBINS: Thank you all so much.

(END)
