Artificial Intelligence in Journalism

Friday, May 10, 2024

Speakers:

Joan Donovan
Assistant Professor of Journalism and Emerging Media Studies, College of Communications, Boston University; Founder, Critical Internet Studies Institute

Mehtab Khan
Fellow, Berkman Klein Center for Internet and Society, Harvard University

Amy Webb
Founder and CEO, Future Today Institute

Presider:

Carla Robbins
Senior Fellow, Council on Foreign Relations

This event was part of the 2024 CFR Local Journalists Workshop, made possible through the generous support of the John S. and James L. Knight Foundation.


ROBBINS: Morning, everybody. Great to see you all. Met several people last night who were somewhat regular participants in our Local Journalists Webinar, and I hope more of you will be. One of the downsides of the Local Journalists Webinar is I don’t get to see people; I get to hear you all. So I hope that you will participate more in the future.

I love this conference. I am still a total journalist freak. I miss the business, the daily business of it. So it’s wonderful to be here and it’s wonderful to have this great panel.

So welcome to this morning’s on-the-record discussion on “Artificial Intelligence in Journalism.” I’m Carla Robbins. And we are very lucky to be joined today by three fantastic experts. You have their complete bios, so I’m just going to give you a few highlights.

Dr. Joan Donovan is an assistant professor of journalism and emerging media studies at BU, and a leader in the field of internet and technology studies, online extremism, media manipulations, and disinformation campaigns. She’s also the founder of the nonprofit Critical Internet Studies Institute, the coauthor of Meme Wars: The Untold Story of the Online Battles Upending Democracy in America, and perhaps most important she coinvented the beaver emoji.

DONOVAN: Thank you. (Laughter.) It’s true.

ROBBINS: So—(laughter)—

DONOVAN: Look inside your phones. It’s there. (Laughter.) It’s my Oprah moment. (Laughter.)

ROBBINS: Mehtab Khan is a fellow at the Berkman Klein Center for Internet and Society at Harvard University. Her work falls at the intersection of intellectual property, in particular copyright and trademark law, internet law, privacy, antidiscrimination, and the law and ethics of data-driven technologies. Her recent scholarship includes articles on developing an accountability framework for large-scale AI datasets, regulating generative AI speech tools, and the impact of AI on creative industries. Dr. Khan was previously the program director for the Yale/Wikimedia Initiative on Intermediaries and Information.

And Amy Webb is a quantitative futurist—and I actually understand what that means because she explained it to me—and CEO of the Future Today Institute, and a professor of strategic foresight at NYU’s Stern School of Business. She’s also a visiting fellow at Oxford University’s Saïd Business School, a fellow in the U.S.-Japan Leadership Program, a foresight fellow in the U.S. Government Accountability Office Center for Strategic Foresight, a former visiting Nieman Fellow where her research received a Sigma Delta Chi award, and a graduate of the Columbia School of Journalism. Her most recent book, The Genesis Machine, examines the future of gene editing and synthetic biology. And she was a—were you—what papers were you at?

WEBB: You’ll hear from me a lot today how much I dislike journalism. But for—(laughter)—for a couple of hot minutes I was at the Wall Street Journal in Hong Kong. I was at Newsweek in what I like to call the before time in Tokyo. Yeah.

ROBBINS: So all these people know of which they speak, OK. (Laughs.)

So, Joan, Mehtab, Amy, and I will chat up here for about thirty-five minutes, and then we’re going to open things up for questions, and we’ll go until 10:45.

So let’s talk about the bad, the ugly, and the—potentially the good of AI in journalism, and I’d like to start with how AI is changing the stories we are now covering. And, Joan, I want to talk with you, not about the beaver although we may get to that. We’ve heard a lot of concern about how AI is supercharging disinformation in the 2024 elections, and we all heard about the Joe Biden AI-generated robocall before the New Hampshire primary. How seriously should we take that threat? And are the tech companies doing anything to stop it? Yesterday, TikTok announced it’s going to be labeling AI-generated content. Meta—you know, Instagram, Threads, Facebook—said the same thing last month. YouTube now has rules requiring creators to disclose AI-generated videos, although, I mean, that’s self-regulation. X so far, predictably, has said nothing about this.


Is this really a big problem? And is it going to be much, much worse because of AI?

DONOVAN: Yeah. And I caution you to think about this, you know, as something much more banal, a kind of technology that twists how we should think about reality online. So if you’re familiar, the Met Gala happened the other—a couple nights ago, and people were sharing this beautiful dress that Katy Perry had been wearing. That was an AI-generated image, right? And so the banality of AI is actually where we need to be worrying, because ultimately it’s not going to be, you know, other places they use AI, like in Hollywood, right—it’s not going to be like a Will Smith Men in Black/Independence Day kind of thing where, like, the White House blows up. It’s going to be much more everyday, small changes over time.

There was a great story in the BBC about some pictures that had been circulating of Trump just hanging out with Black people. Sounds normal, maybe. (Laughter.) But the images are essentially him hanging out, you know, with a group of Black men, and the only way that you’d know that these are AI-generated images is that Trump only has three fingers. Other instances of Black people campaigning for Trump, for instance, were being circulated, except one of the campaigners had three arms. And so AI right now is like searching for a fake Louis Vuitton bag or something. You know, like, it’s like Gucci but it’s spelled with two O’s. And so what you are being taught to do is look at the images really closely and see if the teeth look as if they’re just white stripes or if they actually are, you know, human teeth, and look at the ears to see if they have two, or you know, look at the hands, because AI produces these artifacts.

But as the AI industry develops—and this is what I’m watching very closely—people are now involved in quality control of AI. So the AI-generated image might look a certain way, but they have people going in with Photoshop to make sure that those kinds of artifacts are not there anymore. And where I’m finding the most progress in AI is in the industry of pornography, right? And so when we’re trying to understand emerging technologies, we actually have to look in the industries where people are motivated and there are resources to innovate. And so the other thing that’s happening with AI that’s really important to understand is that this isn’t necessarily a question of, you know, what is Facebook or Google going to do next; the question is really what are politically motivated actors going to learn from the pornography industry about making cheap fakes and deepfakes, and then distributing them mostly through social media?

And I think that it’s really important on the local journalism level to realize that this is going to show up in your reporting. Whenever there’s a scandal or conflict, there’s going to be a question of, well, is that photo real? Is that photo or video genuine? Is that person really involved in that scandal? And Danielle Citron talks about this as the liar’s dividend, because liars can say of a true photo or true event that happened, well, this is just AI. This is crazy. But we have to now be really attentive to how this is going to develop, especially as local elections, district attorneys in particular, heat up, because we’re starting to see it in more, let’s say, germane areas: high schoolers are now deepfaking their classmates and sexually harassing them. We’ve also had the case of, I think, a mayor supposedly caught on a scandalous phone call that was actually AI-generated by a constituent who didn’t like him. And so it’s not just the case that—you don’t need a huge industry. You don’t need a ton of resources. And this is going to change the face of sourcing online as we know it.

ROBBINS: So, Amy, Mehtab, jump in here. Joan’s come up with a few examples. Why else do our readers need to understand AI? You know, how is it going to potentially affect their lives?

KHAN: I think it’ll affect your everyday work in a lot of different ways, not just the stories that you cover but what you write, and where that goes, who shares it, what they use it for. What AI is doing is changing the information landscape in a way that we are struggling to keep up with in terms of regulation and policy. So now, you know, when you write something, it’s not just going to be published, and then read by somebody, or shared, or tweeted, or liked, but it can be used to further train AI models. And that is a big concern because that raises a whole other set of issues.

We are struggling with finding attribution for who wrote what. We are struggling with properly, you know, compensating those who might have labored to produce some work and put it on the internet. We’re struggling with this, you know, balancing of rights and interests in the public’s ability to make use of, you know, an AI tool that is built upon other people’s work, but also give these people the incentive to create that in the first place. So the work that you create is not just going to be used or consumed in the same way anymore.

ROBBINS: So we’ll jump into the impact on the business of journalism a little further on. But in terms of stories that people might be covering: if I’m a journalist, why should I spend the time figuring out AI and spend the time explaining it to readers? I mean, how does it affect people’s lives?

KHAN: It’s already affecting people’s lives. I think the new part is that it’s now generative, and that is changing your ability to make use of it. So it was already being used to make hiring decisions or check if you’re eligible for health services or Social Security, so it was already in place. But now the nature of the predictive technology has changed slightly, in that it’s become generative. Now anybody can use it to create more content. And that’s how it’s going to change everybody’s lives in their own ways, whether professionally or personally or just with how they interact with each other.

ROBBINS: So if you were going to do stories, I mean, people, of course, suffer from the insurance companies, and now it’s just not somebody who is on the other side of the phone denying you benefits; it may be a generative AI that’s making decisions like that. So those are good stories—explainer stories that might have great value for people.


WEBB: First of all, let me thank you for doing the work that you’re doing. Nobody wants to do local news because there’s no glory—it’s all guts and no remuneration in it. So thank you. You are all doing very important work.

Second of all, I just want to acknowledge that I know that you are resource-constrained. So as I offer my insights today, I understand the situation that the vast majority of you are in. So if you hear me make any suggestions, the suggestions that I’m making I believe are plausible and achievable.

OK. With those disclaimers, two thoughts. Let me first address your last question, and then I’ll go back to a couple questions earlier. And that is: How much explaining should there be? Why bother explaining? What’s the point?

So, look, journalists became journalists and you went into news because you’re good at journalism. And this is a very complex operating environment for every single business, and that certainly includes journalism. Suddenly, we expect everybody to be an expert on artificial intelligence. And suddenly, we expect everybody to have a meaningful conversation about artificial intelligence. This is an incredibly challenging technology which, by the way, is not a singular technology. And when people today talk about generative AI, really that’s a shorthand for automation.

So part of the challenge is—I work in a field called strategic foresight, so my job is to—we build quantitative models and we use data to build out—to find trends, which for us are not trendy; they’re longitudinal indicators of change. That’s what we use to develop scenarios. And ultimately, all this goes back to strategy.

So what I’m seeing right now is widespread confusion among the people who are making decisions. And as journalists, you are the ones talking to the people making the decisions. And again, I recognize the kind of crappy position that this puts every journalist in, because expecting every journalist to suddenly know enough about AI to explain it is equivalent to expecting every journalist to know enough about quantum computing, right? I’m sure nobody in this room would raise their—I mean, maybe. Like, is anybody in this room like, I’m on it, quantum, I’m your gal? Probably not. (Laughter.) OK. But so how is that any different from AI? OK.

So to answer your first question, that is the position that just about everybody is in, and AI has become buzzy because of you. So OpenAI announced, right, that it had something it was releasing to the public, and that was the moment of critical mass. That is when artificial intelligence crept into the consciousness of the public. Did anything materially change? No. There was no breakthrough. And actually, there have been many breakthroughs. A huge breakthrough happened yesterday. DeepMind has cracked the code on molecules, so that the next thing that’s coming in AI—I’ll get to your part in just a moment—is the convergence between AI and biology. Nobody is going crazy about generative bio.

So part of what’s happening here is OpenAI made an announcement. Suddenly, there was a chatbot that anybody could experiment with for free. And then the usual suspects in journalism—sorry—went out and did the usual types of salacious stories: Well, I tried to trick it into telling me it loves me. I tried to trick it into making it be whatever. And now, suddenly, everybody’s talking about it.

The harsh truth here is that journalism—again, apologies; I’m on your side. But also, like, everybody in journalism had their eyes off the prize for unforgivably too long a period of time. Back in 2019, everybody forgets this now, but OpenAI sent out a press release saying we’ve made something that is an existential threat to humanity. Go back and look at the press—not now, because we’re very important and you should listen to us. (Laughter.) But at some point, like, go back and look at it. It was—it was so catastrophically dangerous they can’t release it, OK? That was 2019.

So my point is this. We’ve been talking about the idea of artificial intelligence for decades. It’s mostly been anthropomorphized and it’s mostly been a hellscape—dystopian, you know, terrible future. There are issues to be tackled, but—again, I recognize we’re all starting from a place of how could or would we possibly know anything about any of this stuff, and AI is something that I’ve studied for twenty years, and I don’t even work in the field. So I know barely enough to justify having this conversation with you. But you have to do better, and you have to know more so that you can tell the stories that are worth telling. The vast majority of coverage I’m seeing on AI is feeding directly into the people who are financially incentivized to have you be their mouthpiece, which is why you’ve seen a parade of people telling Congress please, please regulate us. There’s not going to be any real regulation or real enforcement. Why is this happening? So that the policy can be written in a way that financially incents these companies.

I’ll stop, but the—yeah.

ROBBINS: OK. But that—I mean, that’s a compelling argument criticizing people who actually cover technology, but most people are not going to be covering technology. And most small—

WEBB: No, but to your point about the—

ROBBINS: Most smaller newspapers can’t even afford to have a reporter to cover technology. That said, AI and generative AI is going to change our lives. I mean, the example of they’re going to use generative AI to decide who gets an insurance benefit and who doesn’t. That’s a good story to cover, and it’s a good story that a local newspaper should be covering. So—

WEBB: I agree. But if you don’t—

ROBBINS: Let’s not scare people off and say you can’t cover it because you can’t explain quantum computing.

WEBB: I’m not saying don’t—I’m not saying you can’t cover it. I’m saying every journalist—so forget newsrooms for a moment. If every one of the—if every one of you, if your goal and job is to gather information for the—quality information for the purpose of more informed people, then you yourself have to become more informed. And I know it’s like—again, that’s why I’m saying I totally get everybody in this room is resource-constrained. I get that. You’re going to have to carve out an extra 5 percent of the total pie of your time and energy resources so that you can get a little smarter on this. Reading everybody else’s stuff is just going to create an echo.

Let me—one last thing. So, to the deepfakes and the cheap fakes and everything else: every year we publish a giant trend report that launches at South by Southwest. Again, these are not trendy things; these are the long term. One of the things that we found in our research is the emergence of a deepfake event, OK? So right now you can have a singular photo—Katy Perry wasn’t at the Met, whatever. The speed with which we are all moving now, the incentives being used to publish quick, fast, go, go, go, go, the lack of accountability, and the lack of tools for traceability create a vulnerability in the information ecosystem. And what we see plausibly coming is a deepfake event. So imagine a bad actor using an automated system to generate not one image of a bombing somewhere or a break-in somewhere, but, like, 10,000, and flooding—

DONOVAN: That’s already happening, though. So, like, if you go online and you look at pictures of Gaza, there are AI-generated images meant to evoke emotion of children in the rubble hugging cats. There’s no gore, but they’re, like, obviously trapped underneath a building or something. That’s already out there. But I would challenge this idea because every disinformation campaign is itself an event. Like, we study all of these as events in time that can be historicized, and I think the tools that we should be talking about, then, for journalists to understand are some basic understanding of provenance, open-source intelligence that you yourself can start to track back and understand, well, where did this image come from. And if there’s no evidence that this image came from somewhere, you have to say, well, who is the source? Not the second source, not the third source, not the journalist that picked it up, but where is the first time we’ve seen this image?

And you know, to your point about understanding emerging technologies, the most basic way to understand AI and what’s happening here—remember a couple of years ago it was big data, and then it was algorithms, and then it was large language models, and every couple years they change the names of these things that they’re working on so as to create a hope and hype cycle around whatever the next thing is. I mean, if you look back at the history of Facebook and ten years of Zuckerberg going to Congress, he keeps saying AI content moderation is on its way. Where is it, you know? So a lot of times what technologists are doing is muddying the waters, especially the CEOs and the communications staff at these large tech companies, and they muddy the waters on purpose so that it becomes really hard for people to unpack what’s actually happening, exactly where the innovations are. In particular, like, if you follow genetics, it’s very important to understand the role of AI in protein folding and new genetic technologies, but it’s not the same AI that we’re talking about that is a commercial product by OpenAI/Microsoft or any of these other big companies.

WEBB: Could I—could I just finish my thought on the deepfake event briefly? It’s not just photos, and this has not happened yet. So it’s a constellation of different types of information that can all be generated at—sort of almost simultaneously by a single—

DONOVAN: But this is ahistorical. What you’re talking about is if you can plant information online people will—brains will explode.


WEBB: So I’ll just—you guys can chitchat with me later if you’d like to know the rest.


DONOVAN: I think you’re hyping an event or this, like—you’re playing into the exact same thing you’re criticizing journalists for doing by saying, you know, there’s a national security risk to this.

ROBBINS: Joan, let Amy finish her—and then—and then let me—I’m going to ask my next question, OK?

DONOVAN: All right.

WEBB: I’m cool. Let’s just move—if you want to know more, ask me later. Why don’t we move on?

ROBBINS: OK. Let’s go on to a very practical question which has been raised here, which is: How is AI changing the way we do our job as journalists for ill and potentially for good? Is this going to put us all out of business individually? I mean, we already see certain news organizations—although “news organizations” perhaps with air quotes—which are using generative AI to produce stories. Do we all have to worry that we’re going to lose our jobs to this? Can we use AI potentially to track down deepfakes or cheap fakes? You know, is this a tool that we can use for good, or is it all just a disastrous—who wants to jump in? Mehtab?

KHAN: I can jump in. You can certainly use it for good. I think there’s a lot of benefit in it being an aid for your everyday work, whether it’s like summarizing, doing search queries for you, or you know, editing. There is a, you know, benefit to using these tools.

But also, as far as the nature of your job is concerned, I think that there are two aspects to this. The first is what is already unique about the job, that is already, you know, given legal cover. And drawing from copyright law, you know, creativity is protected; facts are not. So if you—the way that you disseminate information on the internet, the factual aspects of it are not protected by copyright. So if your articles or your work is also, you know, forming the foundation or being the source of input to train other AI models without, you know, your permission or compensation, then there’s little that you can do about that in that you can’t really stop web scrapers from taking millions of articles off of news websites. And there are ongoing lawsuits right now instigated by news companies because of this very risk. So I think what is protected is going to change and what happens to your work is going to change.

ROBBINS: How many people here use generative AI to organize their research or in some way in their work?

WEBB: So here’s what I’m hearing from executives. There’s a tension between wanting to resolve the bottom line—which is to say improve the financial operations of an organization—through attrition, because typically the highest—I mean, you—a lot of you know this. The highest cost in any organization is typically the people, so the salary lines and all of the benefits that go along with them. So there is a clear incentive to try to automate as much as possible and reduce headcount through replacement.

Now, I think that’s actually a strategic advantage for you, because in a smaller organization headcount is always going to be a problem. So the question is, can you flip the script on this and instead think about your topline growth, which is new opportunities to bring in revenue, new—without—again, without crossing any lines. But are there different ways for you to improve your productivity, bring in revenue streams, and things like that? I think the answer is yes, but you need to look at the current transition that we’re in as analogous to the transition we were in at the dawn of the commercial internet. A lot of news organizations at that point should have immediately thought through what are future business models that will make sense—so where are we vulnerable, where is there a strategic opportunity, and you know, where can we start small? You don’t have to have a huge budget to do that. That’s something that you could do sitting on the subway just thinking, thinking about where is there a competitive advantage.

I see almost no—I mean, honestly, I don’t see a single news organization doing that for artificial intelligence as we make this transition right now, which is why I’ve seen either no movement or news organizations selling access to their archives. People magazine just did this. They were a client of ours sixteen years ago, you know, and at that point I remember Time Inc. was—had this idea, the whole organization, to produce 500 new pieces of content a day, and that was before AI. This was going to be all through—because they were going to flood the internet with content, and that was going to be part of the business model. You know, now that’s happening all over the place and we wind up with—it doesn’t help things.

So the point is that the thing that comes next beyond traditional advertising and search is going to be something else. There’s a different way that value will be created using those data, but there’s no plan, again. So if news organizations don’t have a plan, then you’re really at the end of a very long value chain where other people get to make decisions about how the business works. So what I would say is what is useful—like, in what ways can you automate or gain new productivity? If you’re in a situation where somebody approaches you about selling your archive, don’t. Once your data have been used for training, you lose the ability to derive revenue from them forever. So these are terrible deals that are being offered. Instead, sit on your archive, or if you’re a new local news organization, what could you be creating through the normal course of your business that could create value for others, and can that be commercialized in a way that doesn’t blur any lines? And if this is, like, not your thing, then find some folks in your ecosystem and just start these conversations with them. I guarantee you that step alone puts you so far ahead of what I’m seeing in many of the largest media organizations around the world.

ROBBINS: So, Joan, is there—are there useful tools here that can make our jobs easier, or is it all just kind of—are the robots going to put us out of work?

DONOVAN: Yeah. Yeah, I don’t—I definitely agree with Amy on the point about AI, at least that generative AI is code for automation. And we’ve been dealing with this since the birth of the factory, this idea. I mean, Musk’s major goal is a machine that builds machines, right? That’s what he wants the Tesla factory to do.

And with the internet, you know, when you have to start to live in a splintered world here where the internet is going to proceed in a certain direction, that doesn’t necessarily mean your coverage has to or your sources have to. But in terms of generative models, what’s happening here is math, OK? So when we talk about generative AI, what we’re talking about is large language models that predict the next word in a sentence. What does that mean? There’s no color. There’s no, you know, human, you know, ideas about language. And so where we see AI happening is usually in real estate reporting and in sports reporting because, you know, people just want to know, you know, the scores or whatnot. So there are places in which AI is going to, on certain beats, move faster than other places.
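[The next-word prediction Donovan describes can be sketched with a toy bigram model: no neural network, just counts over a tiny invented corpus. The corpus and function names below are illustrative assumptions, not anyone's production system.]

```python
from collections import Counter, defaultdict

# A generative language model, at its core, predicts the next word given the
# words so far. Same idea here, with simple bigram counts instead of a network.
corpus = (
    "the model predicts the next word "
    "the model learns the next word from data"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("next"))  # → word
```

[A real large language model replaces these counts with a learned neural network over vast amounts of text, but the interface is the same: given the context so far, emit the most probable next token.]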

But that, I think, behooves you, then, to learn more words and learn how to be a writer that, you know, is sought after, that people want to read your work because it’s interesting and it’s thought-provoking. I know when I’m looking at, you know, students who are using generative AI, they don’t understand that these models are using a lot of similar words like the word “realm.” I’ve never heard an undergraduate say the word “realm.” (Laughter.) Or “delve,” or you know, “circle back,” right, because these are, like, kinds of speak that happen in—you know, in these other online environments. And I’ve been playing with these tools for years thinking about, well, how are we going to be able to spot this.

But I think the most important thing you could do as a journalist is learn to use them as an expert system. So what made AI and large language models kind of interesting for, let’s say, lawyers is you’d have a case file with 10,000 documents; wouldn’t it be great to not just have to search by keyword, but search through AI? Say you’re defending someone charged over January 6 and you search for the word “riot.” Normal keyword systems are just going to pull back the word “riot.” AI might look for other words as well, like “insurrection,” “uprising,” you know. And so what’s cool about AI when you have a large trove of source material to sort through is it’s going to look through that material much more like a research assistant might and pull out relevant things.
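[The contrast Donovan draws between keyword and AI-style search can be sketched as follows. The hand-made `related_terms` table is a stand-in assumption for a real embedding model, and the documents are invented for illustration.]

```python
# Keyword search matches only the literal query string; semantic search also
# surfaces documents using terms the model considers related.
documents = [
    "Witnesses described the riot outside the building.",
    "Officials called the event an insurrection.",
    "The uprising lasted several hours.",
    "The committee hearing was uneventful.",
]

# Hypothetical stand-in for a learned embedding: a hand-made relatedness table.
related_terms = {
    "riot": {"riot", "insurrection", "uprising"},
}

def keyword_search(query, docs):
    """Exact substring match on the query only."""
    return [d for d in docs if query in d.lower()]

def semantic_search(query, docs):
    """Match the query or any term the model treats as related."""
    terms = related_terms.get(query, {query})
    return [d for d in docs if any(t in d.lower() for t in terms)]

print(len(keyword_search("riot", documents)))   # 1 document
print(len(semantic_search("riot", documents)))  # 3 documents
```

[In practice the synonym table would be replaced by vector embeddings, so that “insurrection” and “uprising” land near “riot” without anyone enumerating them by hand.]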

And so I think that there’s—you know, where I see this innovation happening is Ron Suskind is working on an AI system to query all of the data that he can find on January 6. And he’s got all of the—you know, those boring congressional reports and, you know, even audio from secret recordings that journalists had made of people in the Trump world, and you can search it in such a way that feels like, OK, I’m actually digging into this and getting the most relevant results. So in your journalism, you should think about creating an expert system that you can use to query all of the source materials and the notes that you’ve taken over the years on a certain beat, and it just helps you more quickly summarize and understand what’s in those documents. And so it can supercharge you in a way that a really great research assistant might.

And so those are the things that I think are useful for journalists to understand. But if AI is the thing that you’re reporting about, then at the end of the day it’s math. The deepfakes that make people’s faces do different things like face swap, it’s just math. Voice prints, it’s just math. And that’s why lawmakers really need to be focusing on biometric information privacy. The real threat of AI that journalists, and local journalists too, need to be thinking about is theft of identity—not theft of your financial identity, which we have all kinds of rules about, but the actual essence of who you are: your face, your voice, and even if you’ve written a lot online your style of writing. So you can ask AI to write me a song in the flavor of Kendrick or Drake—you can pick your fighter—(laughter)—and it will, right? And so that kind of essence of who you are is cooptable by these machines. And so it’s incumbent upon local journalists to be able to explain that and the real everyday threat of it to your local constituents.

ROBBINS: So I’m going to throw this open in a minute. I just have one more quick question. All this is very helpful. This is really, as Amy pointed out, an incredibly complicated topic. We do have to, as journalists, get smarter about it to—whatever it is that we’re explaining. What do we read? What sources do we have? We all want to get smart so that we can explain things. Where do we go for information that you trust? You guys are, obviously, three great sources, so we have a good beginning here. What do you recommend? Where do you go for information?

WEBB: You know, it’s no different from anything you already know. You want primary-source information and you need somebody to help you make sense out of it. And that has to be a non-politically-motivated—and not politics, but you need somebody who’s not on the side of anything. And that’s really challenging right now because artificial intelligence everybody has an opinion on, and a lot of the people that have the opinions don’t have informed opinions. So I think if you can—you know, Quanta, which is actually a publication focused on math, has very good, accessible information. MIT Tech Review has done a pretty good history of this. And Karen Hao, who’s left but she was there for a while, I thought the criticism that she had was balanced.

But ultimately, you—some of you are reporters and some of you are on the management side of things. For those of you who are reporters, it’s tricky because in the—we advise some of the world’s largest insurance companies, and I work directly with their CEOs. So, yes, there is some AI stuff happening in the underwriting process. But that has—that has been happening for a very, very long time. (Laughs.) And also, like, insurance companies are super behind when it comes to tech.

And the point about the large language models, those get trained through something called reinforcement learning from human feedback. And those humans are almost exclusively not in the United States. They tend to be in other countries where English is not their first language. So some of the weirdness that we’ve seen is because you’ve got—so if you want to screw around, ask an image generator to—I asked an image—for two years, image generator to show me a CEO of a—that’s a woman. Like, so, basically, I was trying to get it to show me a CEO who was a woman—Fortune 500 company, small company, bank. I came up with all these permutations. I finally typed in, exasperated, show me the CEO of a company that makes tampons, OK? And a year ago it just showed me a bunch of men again, but this time they were holding, like—(laughter)—one thing looked like a giant shower pouf and the other thing—(laughter)—so, like, this year when I did it again, it was like, a tampon growing out of a bush and, like, some other weird things. (Laughter.)

The reason is—has a high probability of being because the people tagging the images—there’s a whole process that I won’t bore you with that requires humans embedded in this entire process. Those people are probably teenage—like, young men in places like Nigeria or Rwanda, where a lot—or Pakistan—where a lot of that work is being done, and they’ve probably never encountered a tampon before, so they didn’t label that information correctly. That’s the type of, like, really nuanced information that I think helps everybody understand a little bit better the challenges with this enormous transition that we will all be in for the next, you know, decade or so.

So that’s what I would say. And if you can figure out ways to get past the, like, doomsday people, and the this, and the politics, and just sort of get to these layers of understanding versus the abstraction—which is where most people are now—that will help a lot.

ROBBINS: Mehtab, sources, expertise, just basic understanding, what do you recommend?

KHAN: I mean, I would also—you know, adding to what Amy said—look at primary sources, look at regulatory agencies that are coming up with reports and commentary on different aspects in different industries.

ROBBINS: Which regulatory agencies matter?

KHAN: The FTC and NIST right now are really active.

WEBB: But don’t try to read the NIST guidance. (Laughs.) It’s very long.

KHAN: It’s a good source to see who’s writing about it and who they’re citing.

Just for, like, summaries or just, like, to get a sense, I would recommend Stanford’s Human-Centered AI Institute has this, you know, newsletter that does a good roundup of what’s happening. The AI Now Institute also sends out periodic legislative updates that just break down the basics of what’s happening. And also, in reading all of this—and I think it’s really also important to keep up with what’s new in the technology. So some of this literature might be, you know, more accessible depending on, you know, what field you look at. Some computer scientists, you know, do write with social scientists. You know, look at their collaborative work and see what they’re writing about. And keep up with what is new in the technology.

So, also, I found it interesting what Amy said about language and about the competency or the shortcomings of this technology based on who is training it. I think that language is not as much an issue as is the fact that a lot of the labeling that goes on is with material that’s violent or exploitative or harmful to the person who’s doing the labeling. So they have to sift through millions and millions of datapoints based on what’s already on the internet, which is a lot of bad content. And so I think that, actually, English language competency is, you know, the highest competency in how these AI tools operate. It’s the other languages that we have to be concerned about, that are not easily, you know, forming the bases of these technologies, so the AI tools don’t work as well on non-English languages.

ROBBINS: Joan, quickly, last, but sources to make this more accessible to people?

DONOVAN: Yeah. I think—yeah, making it more accessible, every one of you should have on speed dial a college professor that is tracking this. I check in with reporters—I probably do let’s say maybe two hours a week of just checking in with reporters who are working on stories, trying to understand something, not for quoting and not for attribution but just to give a grounded sense of what’s happening. Go to your local colleges’ colloquia and talks on AI. You might not understand it from the get, but if you look at what sociologists, anthropologists are doing to understand AI, debates about ethics and responsible AI, these things are all available and usually open to the public. So making relationships with local universities and colleges is crucial for doing any kind of journalism that requires you to stay on top of an emerging field. So rather than develop sources or do all of this homework, you can also just ask a professor for twenty minutes or say I’m going to drop by campus, can I get you a cup of coffee. Or graduate students that are at the cutting edge of these fields often will give you hours of their time—(laughter)—to explain to you the nuances of things, and in those cases you should give them a quote because they need to be rewarded in some way. (Laughter.)

So, yeah, that’s my advice, is get friendly with someone who’s already following this, and then check in with them periodically to say, you know, what’s up, what’s happening, what are you seeing. Because, you know, I haven’t written about my work on understanding AI innovation through online pornography, but I definitely have been talking to journalists about it, being, like: Did you see this thing? You know, there’s a lot of memes in that space. So generating an understanding around that, and then just being part of a community of thinkers I think is incredible important—incredibly important for figuring out what the right stories to tell are and what the hype is as well. Because the hype stories of, you know, I made AI fall in love with me, I made AI tell me it was going to destroy the world, those are—what do they call them in journalism, like stunting? You know, it’s like this—it’s a stunt. It’s not a—it’s not really any deep, thoughtful analysis of what’s happening and where it could potentially collide with people’s everyday lives.

Like, I’m getting a question from a lot of people right now that are like, why is search broken in Instagram? And I’m like, it’s not broken; it’s just they’ve added AI into everything in Meta products. And one of the ways in which we see these new technologies emerge is that they will roll them out when they’re half-cooked, because getting people used to them creates lock-in—once you figure out how to use it, you stick with it. But it also stalls any kind of policy debate from happening, because if you’ve been using a technology for two years, getting politicians to then retroactively say, oh, maybe, you know, dropping a bunch of scooters in the middle of Times Square was a bad idea, maybe, you know, Lime should be responsible for all of the injuries, you know, it becomes much harder to do after deployment. And so understanding why technology is emerging in the spaces and places that it is at the right times is, I think, critical for understanding its relationship to the future of regulation, which in the case of tech has been very, very minimal, in the U.S. at least.

ROBBINS: Great. So let’s turn it over to you all. I’m sure you all have a lot of—a lot of questions. And put your hands up, there’s a mic, and if you could identify yourself it would be great. I think we have a woman right here who had her hand up. That would be great.

Q: Hi. My name’s Emily Cureton Cook. I’m with Oregon Public Broadcasting.

Do you think it’s inevitable that eventually generative AI will be able to coopt the essence and the style that we’re talking about and be—because now it’s like, I can tell—I can’t necessarily tell bad writing or clichéd writing from AI writing, but I think you can tell good writing from it. Photos are trickier. But is that sort of inevitable, or is that a hype story? And also, what tools exist for provenance as the AI gets more sophisticated?

DONOVAN: Well, Adobe has a very large content authenticity initiative that is a constellation of a bunch of different tech companies. But there’s nothing more effective, especially with video—I mean, with images of—or video you take a screenshot—but using TinEye, which is a reverse image search tool that also allows you to sort by time, and it’ll catch variations on a theme. And so that, I found, is one of the most hopeful or helpful tools. There isn’t anything out there that can spot with any competency, outside of plagiarism checkers, what’s happening with large language models.

But I would say that as you are thinking about protection of journalism, I think what’s going to happen online is you’re going to get a whole bunch of spam. We’re even seeing it already on Reddit. People are using AI to implant more and more threads or conversations about certain products, because that means that if you ask for, you know, generative AI to say something, it’ll equate that product with these keywords and the product-placement ad will show up in your generative AI answer. And so people are already what we might call data poisoning Wikipedia and Reddit to, you know, basically get their products into these large language models.

WEBB: Could I—could I—just because we’re running short; I don’t want to interrupt—but could I flip this around? I get that everybody’s worried about themselves being replaced. What if, instead, you used a system for aggressive versioning? By that I mean personalization has never really happened, but you have different people that approach content in different ways. So what if you produced one story, and facets of that story, you know, were attractive not just to sort of like everybody in Oregon, but you could scale a story to a million people by slightly tweaking that story in ways that are more responsive without changing the journalism or the reporting or anything else? I would love to orient everybody in this room toward a future in which you’re using these tools for leverage to increase the impact that your work is having on communities, and the only way to get impact is to get them to interact with your content.

Q: I’m less worried about being replaced because, like, I have to call people and build—but, like, there’s all this stuff—actually, if AI could just write a story to the end and get it over the finish line, great. (Laughter.) (Off mic.) But I’m worried about credibility. (Comes on mic.) I’ll just be loud. I’m worried about, like, you know, if something can coopt the essence or style and then also simulate and people are online, it’s not so much that I’ll be replaced; it’s that I’ll be mimicked or impersonated.

WEBB: So I’ve written four books. One of them was on AI. All of them were part of Books1 and Books2, which are two corpuses that were just destroyed. So—and then somebody has—several people have built generative AIs of me, so AmyGPTs. I’m sure you’ve never thought of this before, but it’s not like anybody called me to say, like, hey, is that cool if we take your stuff that’s copyrighted and, you know? Bits and pieces of phrases show up, for the most part. And I don’t speak Italian, but there’s an Italian version of me now floating around. I don’t know how close that is.

Again, this is—this is very long-horizon technology. So today these systems don’t work very well. Could you mimic—if you had the ability to train the system yourself—and people have already done this—yeah, you could get it much closer. True idea generation, you know, yeah, that’s probably something that you can—you can do. I mean, I’ve already started to do that.

So, again, the question is, where do you use all of this to create value? Everybody knows the horror stories and the problems. Local journalism to survive is going to have to figure out a way to generate value that is totally not dependent on advertising and subscriptions.

KHAN: Just to add, I think the question right now is not whether it can mimic you and whether that’s, like, desirable, but to what extent your essence can be captured by a tool, and for what purpose. Is it just so, you know, it’s like a greeting on your website, or is it so that it can attend meetings on your behalf, or is it that it can sign legal contracts on your behalf? So there are degrees to it, and that’s, I think, the more pertinent question than whether it’s being done or not.

WEBB: Can I ask you a question really quickly?

KHAN: Sure.

WEBB: Did you see the Walmart negotiator pilot? There was—

KHAN: I did not.

WEBB: —a(n) AI system they built to read and negotiate contracts with vendors. And the human vendors—

ROBBINS: Ooh, I know I made the right decision not to go to law school. (Laughter.)

WEBB: —the human—yeah. They were running this pilot because the procurement process can take a very, very long time, so this was meant to help speed that up.

KHAN: They’re making it efficient.

WEBB: Yeah. And the people on the other end, once they—they were happier dealing with an AI system versus a human because humans have to take lunch breaks. And you know, it was just—(snaps fingers)—it was much faster.

KHAN: I haven’t seen this, but I’ve seen a variation of this, and I don’t think you can enter into legal contracts like that.


KHAN: Maybe you can have discussions, but I—it’s not been tested, so I wouldn’t recommend it.

WEBB: So you can get to the point of negotiation and then a human would still have to—

Q: Yeah. But do you know, like—

KHAN: But you have to decide what—if you have a meeting of the minds for there to be a contract.

WEBB: Yeah.

ROBBINS: Let’s move on to the woman in the back here, who has a question.

Q: Hi. Good morning. Jacqueline Charles. I’m with the Miami Herald.


ROBBINS: Oh my God, you’re Jacqui Charles. You’re great. (Laughter.)

Q: Thank you.

And I actually have a real story. And this is what—for those of us who deal in a fast-paced, news-breaking cycle, this is very scary technology. So I have been mimic(ked). I have been impersonated. I have had the fake tweets that have been sent out, sent around. That could really create a crisis. And they are easy to detect because you can just go and look for the Miami Herald website and you see that it wasn’t—you know, there wasn’t a tweet, or that there was not a tweet from me, and I’ll send out a tweet that says I see that the fakery is working. But recently there was an incident where it looks like it was an AI-generated video that was very damaging—it was not me, but it involved someone and it was very startling. And I’m just thinking about if it was in one of these situations where I was so, like, ready to break this news, or another journalist, and we went and wrote the story, that would have been very damaging based on a video that—you know? So how do you tell the deepfakes? And what’s your advice to journalists who operate in this sort of fast pace, other than stop and take a breath? Because I’m just afraid that we’re going to fall for this more often than not.

ROBBINS: Before you answer that, I just have one question. The people who have been impersonating you and fake-tweeting you, is it on a particular story? Is it for your Haiti coverage, or for some—

Q: It’s for my Haiti coverage. And they know that the Miami Herald has credibility, and so whenever they are playing—they are doing the political, you know, machinations, they will either say that we said something or I wrote something, you know, with the idea of creating a crisis within a crisis.


WEBB: What would you normally do? So you’ve got breaking news. Somebody comes to you, a source comes to you, a reporter comes to you, she’s like, I don’t know, I got this stuff. Would you print it right away?

Q: No. You know, you—

WEBB: So what would be the—

Q: Well—

WEBB: Let’s do a thought experiment.

Q: Yeah. So—

WEBB: So what would be the process?

Q: Well, my process is that, you know, you start making phone calls and you start, you know, reaching out to find out if this is the case. But you said something earlier in terms of, you know, the way for liars to say, no, it wasn’t me, that wasn’t—

DONOVAN: The liar’s dividend.

Q: You know, that’s not me in that video. So what I’m also wondering, you know, if somebody sends something out that’s potentially career ending, which is what this thing was, even how do their bosses, you know, make the decision that this was not really them and that this was AI-generated? I mean, I’m just—you know, I’m seeing how this play(s) out and just figuring out with this technology.

WEBB: It’s the same thing that you would do—I mean, you would do the same thing. It may be slightly harder, but the point is you would—you would have to corroborate that information the same way that you would corroborate any big story.

DONOVAN: Well, I would also add that one of the things about these AI-generated videos and whatnot is that all of the videos have a creation set of metadata. So you can, like, left-click and look at “get info.” And a lot of people don’t go the extra step of faking the metadata. So one place is—we see this a lot in disinformation campaigns, where we’ll see recycled or recontextualized media; you know, there was a forest fire that happened and people are saying, you know, the rainforest is on fire, but the picture’s from seven years ago. Looking at the metadata, I would say, nine times out of ten solves the problem of when was this video created and is it a legitimate video.

The other thing they might do is wipe the metadata, you know, completely so that there’s no artifacts of when it was created and what it was—what software it was created with. That’s also an indication that it’s a fake.

The other thing you can do is smaller newsrooms can contract with IT security people who are trained in looking at this, because cybersecurity folks are trained in looking at manipulated media and also understanding the particular kind of what they call artifacts that come from Photoshopped images or AI-created images and data.

And so there are steps you can take beyond sourcing that are part of the media itself when it’s traveling across the internet. And sometimes the—you know, people are toying with laws that will require tech companies to have this provenance tracing as even videos are moved from one platform to another, so it behooves you to understand a little bit about what is the data associated with the media that you’re looking at.
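
[Editor's note: the "look at the metadata" step described above is usually done with off-the-shelf tools such as exiftool or ffprobe rather than by hand. As a toy illustration of the idea for JPEG stills—an assumption for this sketch, not a forensic tool—you can at least check whether a file carries an Exif segment at all, remembering that wiped metadata is itself a signal.]

```python
def has_exif_segment(path):
    """Crude check: does this file contain an Exif marker near the start?

    Presence means there is creation metadata to inspect; absence on a
    file that should have it (e.g., a photo straight from a phone) is
    itself suspicious, as noted in the discussion above.
    """
    with open(path, "rb") as f:
        head = f.read(64 * 1024)  # Exif lives in an APP1 segment near the front
    return b"Exif\x00\x00" in head
```

[In practice you would run something like `exiftool photo.jpg` or `ffprobe -show_format video.mp4` to dump the full creation metadata, then compare the dates and software tags against the claimed origin.]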

ROBBINS: Is Poynter or are there—I mean, a lot of these newsrooms don’t have big budgets to contract with people.

DONOVAN: Yes, you can also send it to a fact-checking organization—Poynter’s International Fact-Checking Network. There are groups of people that do this kind of work full-time now. And especially in places like the Philippines, where there is an entire industry of disinformation that you can just purchase—pay to play—there are also people who are very good at fact-checking video and audio, as well as image forensics.

ROBBINS: And is there also—are there—are there training courses, free training courses, that are going on right now to—

DONOVAN: Yes. Knight Foundation offers a free training institute where you can take an e-course. Folks like Jane Lytvynenko—if you search her name, there are different videos of her online doing trainings. I will try to send a list of resources—


DONOVAN: —that can be forwarded to you all post-conference of some of these videos. Several of them are already available on a website I used to run, where you have these, you know, how do you inspect the data on Telegram, for instance, or other places like that.

ROBBINS: Great. Super.

Right here. You’ve had your hand up for a while. (Laughs.) Thank you.

Q: Thank you. Hi. I’m Ethan Baron from the Mercury News in California, which just sued OpenAI and Microsoft, along with seven other sister/brother companies/newspapers.

And so in terms of like what to cover about AI and the effect on society and the economy and such, like, you know, there’s the ulterior motives that, I think, Amy, you mentioned about some of these companies talking about, you know, Skynet situations, that robots are going to get us all. But then Sam Altman, the CEO of OpenAI, was just the other day sort of like, you know, talking sky-is-falling stuff about jobs, and there does seem to be—already there’s some impact happening on jobs. But I’m really—like, I’m sort of struggling to sort out how much attention to give to that element of the effects of AI. So probably Amy, what—like, what do you see happening, like, jobswise—AI taking jobs, taking so many tasks that certain, you know—fewer jobs are needed, all that kind of stuff, and in what fields? And so—and what areas—if we want to—want to cover AI responsibly, what areas of that job issue are worth looking at, you know, more closely than others?

WEBB: When we say we’re on the record, are we—what does that mean? I’d like to say something, but I’d like it to be not—

ROBBINS: It’s very hard to go on background with this many people in the room. (Laughter.)

WEBB: All right. Then let me—that’s fair. That’s fair. I should know better.

Let me say this. There was a famous report that came out by a very large professional services organization that purported to have a prediction of the number of jobs that would be lost over what period of time, and that is partially what got this ball rolling. And added onto that was the sort of insanity around prompt engineers who were suddenly making $400,000 a year and stories about entire populations of call centers being relieved of their humans. A lot of this stems back to that initial report.

What would it take for that number to be true? You would have to—the world would have to be static. There would have to be no—you would have to not be able to do a regression. By that I mean there would have to be no variables in play. So, like, from the get go I take a look at that report and I’m like, I wouldn’t—I would not use this data. Like, I wouldn’t use this information.

So the better thing to do is to look at the community that you’re covering, what are the industries there, and which of those industries has a higher probability of needing to change. Because the bottom line is that—so I would say that Gen X, Gen Z, whatever, we’re all part of Gen T, which is the transition generation. And we’re in the middle of the beginning of what I would call technology super cycles or economic super cycles, when a singular event creates longstanding change that can last years or decades. What’s happening right now is the convergence of three big areas of technology. AI is one of them. So that just starts to shift jobs.

So what we know to be true is robotics haven’t come far enough yet, which means that jobs that require hands, those will be around for a while—plumbers. And there’s already scarcity in the trades. So, like, that’s a story that I haven’t seen told. You know, so there’s that. Certain areas of white-collar work—we already know that—will be eliminated.

The big story that’s not being told is just the story of transformation. So it’s not about all these jobs going away; it’s about the jobs that will change. That’s a wonderful story to tell because it—first of all, it’s the truest of the other stories—(laughs)—and then it orients people into how might their future look different than today, and what does that mean. And it helps businesses think a little bit differently, things like that.

Q: The thing that I struggle to understand is, like, if you’re automating all these tasks, how does that—


Q: (Comes on mic.) If you’re automating all these tasks, how does that not reduce overall job numbers?

WEBB: I’ll tell you. Because in a field like insurance or banking, these are enormous, enormous corporations. So to get from a proof of concept like an AI prototype, like a—like a deep learning prototype that can automate parts of the underwriting process—and again, I don’t want to burn time explaining all of this, but if you’re in a large insurance company, the amount of compliance, the amount of legal review, there’s so much that has to happen to get that concept into an actual concrete proven system, that’s years, many years. And during that period of many years where all of that’s happening, there’s going to be much, much, much more change. So anybody who is willing to say all these jobs are going to be gone and all these jobs, it’s bullshit. There’s no way—either they are—they are missing critical pieces of information like how a company works in the real world, you know, or they’re just speculating.

DONOVAN: Yeah. And I think one thing I’d like to add to that is, does anybody know what ATM stands for? Automated teller machine. So if you go back into the early advertising around ATMs, it was, you know, a picture of a woman pregnant saying you’ll never have to, you know, shut down the bank, you know, again. And so the ATM comes from this legacy of describing certain types of menial labor or this kind of work as not useful, so this is why we, you know, anticipate AI is going to hollow out some kinds of middle management, right?

But what it misses in terms of, you know, any kind of industry that’s customer facing is that if you are a small-business owner and you go to the bank, you want to have a bit of a relationship with those folks. You don’t necessarily just want to be interacting with machines, you know. And so what we’re, you know, thinking through is this moment of, you know, what we’ve gone through in the past around McDonaldization of work, right; is like, if you can build a better hamburger by streamlining the process, you know, that’s what these little AI software machines are going to be inserted to do. And it’s really going to depend on—I think some of these large, legacy, big, bulky institutions are not going to be able to adapt to that, and then what’s going to happen is an emergence of companies that are going to just build infrastructure so that you can, you know, do insurance in a more boutique type of way. And so it’s going to lead to the proliferation, I think, of a bunch of other jobs where people do take advantage of that transformational moment and realize that a little bit of technology might be helpful to make the process more efficient.

But what you lose from that is the human face of institutions. You lose from that the customer service. You lose from that the minutiae of what it means to, you know, interact with or want to be part of, you know, different institutions. And so, unfortunately, as journalism shifts it’s really incumbent upon you to think about, you know, style; voice; what is it, the beat that you’re doing that nobody else could do because you are so on top of things. And think about maybe going indie or going small rather than going large and scaling, because I think these legacy institutions are going to have a really hard time adapting.

ROBBINS: Just to quickly—yeah, sure, Mehtab.

KHAN: Just one quick note. I think it’s also important to pay attention to how automation is going to change the standards we expect from existing jobs. So especially in, like, expert areas like health care, legal advice, business advice, financial advice, if it’s automated, what are our standards of care and expectation of veracity, accuracy, reliability? That’s going to change.

ROBBINS: In the back and then here.

Q: My name’s Laura Guido. I’m from the Idaho Press.

I think many of us already deal with issues around, like, trust in the media. I mean, that’s definitely been something the last few years. And then AI has kind of—people’s idea of AI, especially, is already kind of eroding some of that trust. We had a local news outlet recently that was accused of using AI because they just had an editing error and, like, a quote was repeated or something. As we create policies around our usage of AI, do you have any tips for, like, helping maintain—explain it in a way that helps, like, ensure trust from our readers?

ROBBINS: That’s a great question. And I—and can you also voice your question? Because we only have six minutes left, so we’ll—

Q: Well, I’m Lici—(comes on mic)—Lici Beveridge. And I’m a full-time journalist, but I’m also a graduate student, and I’m studying communication and artificial intelligence.

And I’m finding it very difficult to find resources on artificial intelligence, especially in the school that I’m in, because it’s a—it’s a forbidden subject because they don’t know enough about it to say, OK, let’s embrace it. And I think—I’d kind of like ideas of how to kind of pitch it where, you know, we need to open these conversations, we need to talk about it, we can’t stick our head in the sand. How can we make it, like, real that—you know, to bring it out?

And my other question is, can I put you guys on my speed dial?

ROBBINS: No, you only get—you only—you only get one. No, no.

Q: I just want to put them in my speed dial. Can I put you in my speed?

ROBBINS: No, no, no. No, OK. (Laughs.) All right.

Q: That was it.

ROBBINS: I think—I think those are both great—both great questions, but I think this question of trust and, you know, the notion that artificial intelligence—we do have to come up with clear standards for readers because we—the loss of trust in journalism, like the loss of trust in all institutions, is so—and then—

DONOVAN: Yeah. I think, you know, one of the things that I’m trying to do at BU is build out an institute for sustainable journalism with Brian McGrory, who used to be at the Globe. And one of the things I’ve noted over the years with the—social media being integrated into everybody’s everyday lives and mobile phones is that everybody’s a journal-ish now, right? (Laughter.) Like, people are reporting on things. You know, an event breaks out like last night, or there was commencement at Howard University and there was trouble, and people immediately turned on their cellphones, and you could watch, you know, from every vantage point how they were shutting down commencement.

So what I think is really important here to understand is that we actually need to educate folks in high school and earlier in college, like freshman compulsory courses, on how to do journalism, right? And so one of the things that I want you to come away from this thinking is that your job is so integral that we have not understood the revolution in information that is happening that has made journalism a kind of art, a kind of trade that everybody needs to know a little bit about, especially around sourcing, fact-checking, media literacy, digital literacy. So, you know, I really want to build out a constellation of participatory media courses that allow younger and younger people to tell what is good journalism from the bullshit, right?

And I think it’s really important for us to understand that the revolution in journalism is that everybody is now equipped with the capacity not just to author something, but to publish it to millions of people. And that is an amazing, powerful way of being in the world. And I think it’s, you know, as you are experiencing this transition—because you might be sort of the last of a generation of professionalized journalists and things might be shifting now. But I do think that younger folks are very interested in journalism, want to do better, and I think if we reimagine what constitutes journalism qua democracy, I think we’ll get out ahead of all of it.

ROBBINS: Mehtab? That’s great. Mehtab, quickly, can you come up with sort of a set of declared ethics rules that we can explain to readers?

KHAN: It would have to involve a lot of different institutions and also them agreeing on what are, like, basic principles. What I’ve seen some outlets starting to do is that they’ve started disclosing if parts or a whole or if any AI tools were used in the preparation or writing of any material that’s put out there. Social media platforms are starting to tag or label content that might be AI-generated. If these practices are institutionalized and streamlined, then we—then we are moving toward standards and expectations. So right now it’s being done in a haphazard way where people are voluntarily sharing what they’ve used, something synthetic or something AI-generated, but this needs to be mandated in some form. And it depends on which institution we’re talking about.

ROBBINS: Amy, I’m going to leave the last word to you. I know you’ve—you have been—what’s a positive Cassandra? You have been warning journalists that they are so behind the times for a long time, from the moment you—from when you first arrived at Columbia Journalism School. How do you answer this question here?

WEBB: So just being very pragmatic, it’s transparency and accountability. So to your original question, is there a possibility to, with every single story that goes out, put a few sentences somewhere, or to even change the architecture of the story, so that it’s abundantly clear what was used and what wasn’t? That alone isn’t going to do it, because distrust in institutions is on the rise. So that is where—so the transparency is: Here’s how we did it. And I would not just do that for AI; I would do that in general, so that you create a public awareness of how the information is being found.

The accountability piece is extremely important as a counterpoint to that. So who’s in charge? Have that person talk directly and repeatedly to your constituency about what trust means. My hunch is that journalists care a lot more about trust than the readers do, until a moment happens when distrust is the result of something. So to combat that, it’s about—you know, Steve Jobs always wore the exact same thing every single time. It was part of his visual branding. Accountability should be part of the visual brand of every news organization, and having the leader of that organization publicly proclaim in many different ways over and over and over again that they are accountable. We’ve seen that—like, I’ve seen that happen in a couple of small instances. It’s more common outside of journalism. But Jessica, who started The Information—she was a former Journal reporter, Jessica, last name starts with an L. Lessin, I think.

ROBBINS: Lessin.

WEBB: So, you know, she for a long time at the beginning of that was the public face, and constantly talked about how they were creating the content, what they were doing. And it helped build some of that momentum and trust in the audience.

So, again, like, those are two things that are very practical, that don’t cost any money, that you could all start doing tomorrow or today.

ROBBINS: And we will get to your question offline. (Laughter.) Promise.

Thank you so much. It’s been a great panel and great questions. And we move on. (Applause.) Thanks. (Applause.)
