Technology and Innovation

  • Russia
    EU Crackdowns on Big Tech, After Prigozhin, Two Years of Taliban, and More
    Podcast
Major technology companies rush to comply with the European Union (EU) Digital Services Act, which makes online platforms responsible for moderating harmful content; questions mount about the Russian private military company Wagner Group after its leader Yevgeny Prigozhin is reportedly killed in a plane crash; the Taliban enters its third year in power since the U.S. military evacuated from Afghanistan; and Iranian Foreign Minister Hossein Amir-Abdollahian visits Saudi Arabia as the former rival countries work to normalize relations.
  • Latin America
    Latin America This Week: August 9, 2023
    Hot winters in the Andes and Southern Cone threaten Latin America’s advantages; Panama, Costa Rica, Mexico aim to ride U.S. semiconductor industrial policy coattails; Colombia’s new ceasefire agreement remains fragile.
  • Competitiveness
    Building a Competitive U.S. Workforce
Panelists discuss the increasing demand for technical talent in the current age of automation, how to foster a competitive workforce, and resources available to state and local governments through the CHIPS and Science Act. TRANSCRIPT FASKIANOS: Welcome to the Council on Foreign Relations State and Local Officials Webinar. I’m Irina Faskianos, vice president for the National Program and Outreach here at CFR. We’re delighted to have participants from forty-nine states and U.S. territories for today’s conversation, which is on the record. CFR is an independent and nonpartisan membership organization, think tank, publisher, and educational institution focusing on U.S. foreign and domestic policy. CFR is also the publisher of Foreign Affairs magazine. And as always, CFR takes no institutional positions on matters of policy. Through our State and Local Officials Initiative, CFR serves as a resource on international issues affecting the priorities and agendas of state and local governments by providing analysis on a wide range of policy topics. For today’s discussion, we are going to be talking about “Building a Competitive U.S. Workforce,” and we have an amazing panel of speakers today. Bo Machayo is the director of U.S. government and public affairs at Micron Technology. He has a decade of experience as a public policy and public engagement advisor at the local, state, and federal levels of U.S. government, and has had a number of positions including in the office of Virginia Senator Mark Warner, Loudoun County’s Board of Supervisors, and in the Obama administration. David Shahoulian is the director of workforce policy and government affairs at Intel Corporation. Previously, he worked at the Department of Homeland Security on border and immigration policy. He’s also served on the House Judiciary Committee for over ten years. Dr. Rebecca Shearman is the program director for technology innovation and partnerships at the National Science Foundation. Previously, she was an assistant professor in the biology department at Framingham State University and holds a Ph.D. in evolution and developmental biology from the University of Chicago. We also will be joined by Abi Ilumoka, who currently serves as a program director for engineering education in the Division of Undergraduate Education at NSF. And prior to that, she was a professor of electrical and computer engineering at the University of Hartford in Connecticut. And finally, I’m happy to introduce Sherry Van Sloun, who is the national intelligence fellow at CFR. Previously, she served as a deputy assistant director of national intelligence for human capital at the Office of the Director of National Intelligence for nine years. And she’s also held various positions with the National Security Agency and served in the U.S. Army as a signals analyst for eight years. Sherry is going to be moderating this conversation. She brought this great panel together, and can talk a little bit about her research, and basically the provisions for state and local governments in the CHIPS and Science Act. We will then open it up for questions and turn to all of you. Again, this is a forum where we can share best practices. So we do want to hear from you. You can either write your question or raise your hand when we get there. So, Sherry, over to you to take it away. VAN SLOUN: Thanks so much, Irina. And thanks to you and your staff for putting this webinar together. I really feel lucky to be here today. 
I want to say thanks to Becky, Bo, David, and Abi for being here as well. I know your schedules are busy, so we really appreciate you taking the time out of your day. And then I want to thank all of you who joined today. I think it’s great to have all of us here to talk about this important topic. So a little context. My last few assignments in the intelligence community revolved around building talent pipelines to meet the emerging demands of intelligence work. So my time here at CFR, I’ve spent some time looking into the implementation of the CHIPS and Science Act, specifically the human capital aspect of the act. My focus has really been around the need to build semiconductor manufacturing talent but, to be clear, the CHIPS and Science Act covers many other STEM workforce advancements and future technologies, from AI, to biotechnology, to quantum computing. So today, we have Becky and Abi here from NSF to share about the broader reach the CHIPS and Science Act gave the NSF regarding cultivating workforce, and then Bo and David to dive into some of the semiconductor manufacturing perspective around talent. So looking forward to this. And I think we’re going to kick it off with going to Becky and Abi at the NSF. Let me start here, and say the NSF has been involved in promoting science for many decades. It’s been active in supporting workforce development through your directorate of STEM education. And what the CHIPS Act legislation did was create the director of technology, innovation, and partnerships. And one of those new programs under that new directorate is the Experiential Learning for Emerging and Novel Technologies, which is the ExLENT program. Which I think, Becky, you helped to create that program. So we’re glad you’re here. So can one of you share how the ExLENT program works, the timelines you’ve laid out, and the impact you’re hoping to see over time? And then specifically maybe you could focus a little bit for a minute on the semiconductor workforce specifically, and how the ExLENT program will help to build this much-needed body of talent for the U.S. SHEARMAN: Sure, Sherry, happy to jump in. You’re correct, I was involved in the development of the ExLENT program. And we are super excited about it. So TIP is—which is the acronym of our new directorate—just celebrated its first birthday very end of the spring. And we’re really in just our first funding cycle of ExLENT. So you read out the full acronym, right? So this is really centered around experiential learning. And we’re named emerging and novel technologies. So emerging technologies really are those technologies that we—you know, we point to the CHIPS and Science Act and say that’s, you know, what we’re interested in funding. But we did keep it kind of open. So, novel technologies, right? We are kind of allowing the community to tell us, look, this may not fall precisely in the line of these emerging technologies, but we need to be building a workforce that can do X, Y, and Z. And we specifically developed this program with a few things in mind. We need to build a workforce that is nimble in its ability to get training as expertise evolves, as our technologies evolve. And we’ve got to engage all Americans in the STEM enterprise, if they’re interested in being in the STEM enterprise. For us to be really competitive, everyone needs to have access to a good STEM education. 
And then we also built it around the fact that we felt like we really need to be bringing organizations across different sectors together to do this correctly, right? We need to have those experts in education, but we also need to have those industry partners who understand the needs of the industry and the needs of a specific company. So the program is really designed to address those things. It’s very broad. So we allow the applicant—who can be from academia, they can be from the private sector, they can be nonprofits, we’re really trying to reach everybody here. They can say: This is the population we’re trying to reach. So maybe it’s, you know, middle school/high school students. Maybe it’s adult learners at any point in their educational career, and trying to get them hands-on experience that’s going to give them some credential, expose them to something so that they, if they choose, can kind of be on that educational path towards a good-paying job in an emerging tech field. And of course, the semiconductor industry is central to that, right? We don’t have a specific call-out to semiconductors, but we highlight it as one of the emerging technologies. VAN SLOUN: And Becky, thank you. So can you share a little bit more with the audience about, like, how they would go about engaging with you on a proposal? What is the process that folks do there? I know you have calls, but can you explain that a little bit about how a call goes out and then what that looks like once it closes? SHEARMAN: Absolutely. So we have a solicitation out. And if I’m allowed to drop something into the chat, I’m happy to share the link and you can go right to it. And there’s—we have deadlines. In fact, our next deadline is September 14. So if anyone’s really interested and has nothing to do in the next month, you can take a look at the solicitation and consider applying for the program. It outlines—the solicitation will outline everything you need to do but, basically, you’re writing up a proposal, submitting it through our standard process at NSF through a site called research.gov. And then your proposal goes through a merit review process, where we bring in experts from the community that will include people with the expertise in education, expertise in industry. You know, we try to have a very broad cross-sector expertise represented on that panel. And they review all the proposals and give us recommendations and feedback around where we should make our funding decisions. The best thing to do if you go to that solicitation, there are links on that first page to an inbox and to program officers that you can reach out to. A good place to start is just reaching out to them and trying to connect, and have an initial conversation. VAN SLOUN: Thank you. And if I recall, your first grant announcement will be announced soon, right? SHEARMAN: Very soon. VAN SLOUN: And then the call in September will be announced later this year or early next year. Super. OK. Thank you very much, Becky. Bo, let’s move to you and, you know, really kind of diving into semiconductors specifically. You know, your role allows you to see kind of across Micron and how it’s working with partners to build the talent pipeline that you all need for your existing locations and where you’re also expanding at new locations across the country. Can you share a little bit about how Micron has responded to the passing of the CHIPS Act legislation, specifically here in New York? 
And how you’re tracking that talent pipeline gaps at all levels of the manufacturing lifecycle? MACHAYO: Yeah. Thanks, Sherry, for that question. And it’s great to be a part of this discussion. Per, you know, your conversation, we’re happy at Micron. Thanks to the CHIPS and Science Act and also thanks to the incentives from the, you know, states and localities, we were able to make investments of, you know, in New York, of $100 billion over the course of the next couple of decades. And a big part of that is around how we can address the talent pipeline needs. You know, we’ll have 9,000 direct jobs and over 40,000 indirect jobs due to economic activity that will happen in the central New York region. But we know that all those—you know, that talent won’t be able to come directly from central New York. It will have to be a whole of New York approach, but also a regional approach across the northeast. And so specifically in New York, we’ve, you know, been able to, you know, establish partnerships from what we’re calling the K through gray level, really making sure that from K-12 we’re doing interactive activities and sponsoring what we call chip camps, that are unique to Micron and we’re able to make sure that we are, you know, engaging young K through eight, you know, students to be able to really understand the jobs that are available in semiconductor industry. Another thing that we’re doing specifically in New York is really working on kind of both curriculum development and how we can partner with schools. As a part of our announcement, we made a commitment to doing $10 million into the steam school, which is a local initiative that will focus on both career—or, both technical kind of education, but also kind of an engineering pathway to assure that, you know, we can get students interested in the semiconductor industry early on. We’re also—you know, have half of those jobs are going to be technician jobs, and the other half will be engineering jobs. So how we’re partnering with, you know, local building trades unions through our PLA to make sure that we’re educating folks, establishing certificate programs so that we can make sure that folks who are looking to transition to the semiconductor industry, thanks to the investment that we’re making there, how can folks be part of the Micron experience? And then also, how are we doing that with community colleges and also higher ed institutions, as well? And so we partnered with the SUNY system in New York, and also the CUNY system in New York to make sure that we’re building the pipeline from a community college there. Particularly investing in creating clean rooms at Onondaga Community College and then utilizing the existing clean rooms across the state. We also established a couple of regional networks for New York, especially the Northeast University Semiconductor Network, to really make sure that we’re taking, you know, what individual community colleges and higher ed institutions have to be able to make sure that we’re addressing those gaps. You know, that is—these are kind of examples of ways. 
And as a matter of fact, earlier this week when I was in central New York we also are able to partner with the local museum, a science and technology museum in central New York, to create a semiconductor exhibit so that kids from K-12 can actually be able to understand what a semiconductor is, what a memory chip is, and multiple different ways and avenues to be able to attract talent to be able to come and to meet the gaps that we have throughout the semiconductor industry. And so those are just a couple of ways in which we’re looking to build partners and to address some of the needs that we’ll have in New York. VAN SLOUN: Thanks, Bo. That’s fantastic. David, I’m going to turn to you now. I just got back from Portland, Oregon last week, where I was able to get a tour of Intel’s fab and their innovation center. And it was really incredible to see firsthand the different kinds of talent needed to make this industry possible. Can you share a little bit about the makeup of Intel’s workforce? I think many people will be surprised that the bulk of it really isn’t Ph.Ds., but how you’re building efforts for a talent pipeline needed for your major investment in Ohio, specifically. I know it was a huge one for you guys. I know, the Ohio State University is kind of the hub of that consortium there, but—which makes me very proud. I’m a Buckeye. But can you talk a little bit about that and what’s happening there? SHAHOULIAN: Sure. Happy to do that. So, first of all, thank you for having me. It is a pleasure to be here. Second, like Bo mentioned, you know, we’re excited about the opportunity the CHIPS and Science Act provides. And, you know, because of that, and the incentives that we’re getting from the federal government and the state governments, you know, we are right now building—expanding all of our sites, and building a new greenfield site in Ohio. So yes—on your first question, yes. People are generally surprised to hear about the makeup of our manufacturing workforce. Let me just—to just give it—summarize it really quickly, right, each of our fabs is generally around 1,500 positions that we create for that fab. About 60 to 70 percent of those jobs are for semiconductor technicians. These are individuals that can have an associate’s degree, but in some cases we don’t even require that. A certificate would do. And in some cases, you know, we hire people with even less than that to be technicians. These are people that oversee and troubleshoot the manufacturing process and then all of the support systems, like the electrical, water, gas, and air filtration systems that, you know, support manufacturing operations. So that’s, like—that’s the bulk of the jobs that we will be creating with our new factories. The other—the remainder is about 20 to 25 percent, you know, individuals with bachelor’s degrees in electrical engineering, computer science. And then it’s about, you know, somewhere between 5 and 10 percent individuals with advanced degrees. I will just want to say—just add a little caveat for Oregon, right? Because Oregon is a location where we do manufacture, but we also develop our manufacturing technology, there we do—you know, there is a higher ratio of Ph.Ds. So there, you know, there are more advanced degree folks. Second, with respect to Ohio, we’re very excited about the work that we’re doing there. One of the reasons we chose Ohio as a site was because of the great educational system that already existed there and their history with advanced manufacturing. 
When we announced that we were going to be building there, we immediately committed $50 million into, sort of, you know, expanding that education ecosystem that already exists. And that’s, you know, modernizing the curricula, creating modules that are semiconductor specific, providing semiconductor manufacturing equipment, helping build clean rooms. These are all the things that are necessary to train individuals and give them, you know, hands-on training in our industry. We’ve already awarded 17.7 million dollars of that. That has gone to eight collaborations involving almost 80 schools across the entire state of Ohio. We’re really proud of that effort. One of them—just to give you two examples—one of them is being led by Columbus State Community College. They’re working with every other community college system in the state of Ohio to create semiconductor technician curricula with shared credits, right, that can be shared across all of the different institutions. There’s another one that’s being led by the Ohio State University, I should have said, The Ohio State University. Forgive me for that. Right, they’re partnering with nine other universities to create an education and research center for the semiconductor industry to lead on innovation and education. So, you know, these are the—of course, the things that are necessary, you know, to create the education ecosystem that will help not only us but our suppliers, and then other semiconductor companies across the country. VAN SLOUN: So do you—thanks, David. Do you think that what you’re doing in Ohio, you’ve got quite the consortium, like you’ve just talked about. Is that going to be enough to be able to source the talent pipeline for that fab and the outlying things that are going to happen around that fab in Ohio? Or is there a way that other—that you’re going to reach into other areas, like Bo mentioned a regional approach, to that space in Ohio? SHAHOULIAN: Yeah, so that is—you know, that is a regional approach, in the sense that we’ve reached out to all of Ohio. We are also—we also have interest from other universities in the rest—you know, the remainder of the region. Purdue, Michigan, you know, other universities in the Midwest. You know, what we’ve asked is for them to help partner with the Ohio universities, and, you know, working on trying to build those partnerships and those collaborations. You know, we’ve also, you know, collaborated with NSF, right? So, you know, when NSF got $200 million to build out the education ecosystem, you know, we know Micron partnered and put some money on the table. We did as well. You know, we matched 50 million dollars in funding to create $100 million partnership with NSF to sort of also bring those opportunities nationwide to any school, not just ones where we’re operating. So NSF has already rolled out two programs with that funding. And, you know, we anticipate they will be rolling out more this year. And, you know, schools anywhere in the country will be able to apply for that funding. VAN SLOUN: That’s fantastic. Thank you very much, David. That’s very helpful, I think, for the audience today. Becky, if we could come back to your or Abi, it seems to me that the U.S. wants to be a leader in this industry, for semiconductors specifically. It’s going to take a village, right? I mean, how do we best prepare the partnerships between private sector, academia, and community organizations to really find ways to bring exposure to this kind of work? 
I know Micron and Intel are doing their great work, but is there anything that NSF is doing kind of to get this message out and get excitement built around this industry? SHEARMAN: So I’ll start, but then I really do want to invite Abi to join me and add anything she may have. She sits in a different place than I do at NSF. I can at least speak from the from TIPs directorate. I know we’ve been doing a lot. So TIP stands for Technology, Innovation and Partnerships. And we are very much interested in really trying to move emerging tech innovations into practice kind of at speed, at scale. And a big part of that includes making sure we’re thinking about the workforce needed to do that successfully, right? And so everything that’s coming out of TIP is really emphasizing these partnerships. So even when it comes to workforce development, we feel like we’re not going to be able to do this well unless we’re really engaging all the people who bring some sort of expertise to it. And I think when you listen to David and Bo talk about what they’re doing, right, they’re talking about doing this in partnership, in collaboration. And you know, the ExLENT program in particular is—so, I guess, let me start by saying I just—with TIP being a new directorate and all the attention that has brought, we’re trying to bring these different sectors who maybe aren’t used to talking with each other into the same room. And all of our programs that are coming out are doing that, and ExLENT is no exception there. And we are trying to get the community thinking beyond—although, you know, Intel and Micron are absolutely central to the success—but we’re trying to get the—as is, you know, The Ohio State. But we also recognize that if we want to educate the domestic workforce, there’s a lot of other organizations that could bring real value. So we are being very intentional about reaching out to community organizations, to nonprofits that are thinking a lot about reaching specific communities to get folks who would never consider themselves someone who would be in this space, have a job, you know, in a in a semiconductor manufacturing plant, working for Intel, right? It just—it wouldn’t occur to them that that’s something that they would do. We’re trying to create those pathways to reach out and give them some initial exposure and bring them into the fold so the opportunities are there for them, if they want them. And we’re also including those industry partners and the large universities, but we think that the more different perspectives we can get together in a room the better we’re going to be able to diversify the pathways and reduce the barriers to those jobs. And that’s what ExLENT is really trying to do. And, like I said, I’d love to—I’d love to give Abi an opportunity to share anything from her perspective at NSF, if she wants. ILUMOKA: Thank you, Rebecca. I agree. I agree with everything Rebecca has said. What I would like to add is that in addition to ensuring that the content is being provided, and experiential learning is being provided to students across the spectrum of academic levels, we in the education directorate are focused on ensuring that evidence-based teaching and learning practices are brought into the classrooms. We want to ensure that the right environments are available to students, the right kinds of support for learning, right kinds of assessment. And so we have partnered with TIP on some innovative opportunities, known as DCLs, dear colleague letters. 
These are opportunities that bring together programs in the education directorate and programs in the TIP directorate to fund investigators that are focused on not just teaching, in the case of semiconductors, how to design chips, but also how to teach the design of chips. I taught the design of chips for twenty years before I joined NSF, so I know exactly how challenging that is. You know, designing structures that you can’t see, essentially, and you’re having to refine and redesign to ensure that they work—to test and ensure that they work. And so in the education directorate, we have held a number of events to get the public excited about chip design, and chip design education. In May, we had a workshop to which we invited folks in academia, all the way from universities to kindergarten. And we had a wonderful attendance. Over three hundred people showed up for the workshop. It was a two-day workshop. And folks were invited to brainstorm on how to teach microelectronics at all levels. So a lot of interesting information came out of that. We had participants from industry, Intel, Micron, and so forth. We had participants from government and from academia. So that was a very successful event. We have a second webinar on the eighth of August along the same lines. So we have currently two DCLs. And I’ll put the links in the chat, dear colleague letters. One is called Advancing Microelectronics Education, which looks at ways in which you can actually teach this stuff to folks who don’t have the extensive math, and physics, and chemistry background. The second thing we’re doing is making sure that we integrate these opportunities with existing programs in the education directorate. For example, the IUSE program is Improving Undergraduate STEM Education. It is a well-established program in the directorate, and it looks at innovations for teaching and learning in STEM in general. Now, by bringing this program into play with the ExLENT program, then we attract investigators that have an interest not just in the content, the chip design, but also in how to teach the chip design. Now, that confluence brings up very exciting, very interesting proposals on ways in which you can present this material to folks who are not experts at all, or are not in the domain. So I hope that answers your question on how to get folks excited. We have a couple of workshops and webinars scheduled going forward that will draw in participants from all over the country. And we generally keep pretty good notes on what goes on at those workshops, the kinds of questions, the kinds of ideas that are shared, and move forward on those to help the community grow. VAN SLOUN: Abi, that’s fantastic. Thank you very much. It’s really helpful. If you could put those things in—the links in the chat, that would be fantastic for the folks listening in today. Irina, it’s 3:30. Do you want me to turn this over to you for Q&A? FASKIANOS: Yes, I think that would be great. Let’s go to all of you now for questions. You can either write your question in the Q&A box. If you do that, please include your affiliation. Or you can raise your hand, and I’ll recognize you, and then you can ask your question. And don’t be shy. We really want to hear from you. Right now, we have no questions, which I think people are just collecting their thoughts. So Sherry, if you have one—another question while people are thinking about what they want to ask. VAN SLOUN: I’m actually—oh, ahead. FASKIANOS: We do have one question. Raised hand from Usha Reddi. 
And if you could identify yourself and unmute yourself. And you’re still muted. There you go. Q: Thank you. So my name is—I’m from Kansas. I’m Senator Usha Reddi, but I’m also a public school teacher, elementary school. And I also am part of several nonprofits which advocate for STEM learning, especially for young women and girls. So I wanted to know, can anybody apply for these NSF grants? And do you have to be a doctorate or affiliated with a university? Can it be a teacher? Can it be a nonprofit organization? Who is eligible for these types of grants? SHEARMAN: Sure. Can I just jump in? VAN SLOUN: Yeah, please do Becky. SHEARMAN: OK. So that is a great question. I’m so glad that you asked that. So I guess in reality it depends. NSF historically, you know, makes grants to academic institutions. We are trying to change that quite a bit. So for a lot of our—for a lot of our funding opportunities you can be something other than an academic institution to submit. But you would have to look at the eligibility, right? So some are some types of organizations are not eligible. For example, the federal government can’t apply for an NSF grant, right? But nonprofits, some local government offices, if they’re related to education, can apply for these for these funding opportunities. So those opportunities definitely exist. And if there’s a program that you’re specifically interested in, I would encourage you to reach out to a program officer associated with that program. And if you can sort of Google the program if you happen to know it—if you’re familiar with the program, it’ll direct you to a contact. FASKIANOS: Fantastic. Let’s go next to the raised hand from Mayor Melissa Blaustein. Q: Hi, everyone. Thanks for a great session. I really appreciate it. And actually, Sherry, I was so happy to see—(inaudible)—intelligence. I’m coming to you from the Naval Postgraduate School. I’m a student at CHDS right now, the master’s program for local governments on homeland security. And in that vein, I’m wondering—I’m from a smaller municipality. Sausalito is quite small, but very well known. And we don’t often think about the issues of how we can attract hiring for these types of industries, but I’d love to hear maybe from Bo and David a little bit about what you’re seeing smaller communities or policies do to attract these type of people, or perhaps if remote working is being qualified or considered for folks who want to pursue a career in chips and semiconductors. And any advice any of you have as well for smaller local governments to attract a conversation around this type of topic. Thanks again for your time. Really appreciate it. VAN SLOUN: Bo, do you want to take that first? And then, David, if you want to chime in, that’d be great. MACHAYO: Yeah, no, I think—so we are investing in, you know, Boise and in—Boise and in in central New York, and in Onondaga County, but in a small town called Clay. But one of the things that we have been—we had found successful, and I’ll focus on the New York model, was working with the state and the locality to come up with something called a Community Investment Framework. So it was a partnership between Micron, the state, and the locality to really look at how are we investing in things that the community needs. Everything from housing, to workforce, to childcare, and really kind of focusing on what those barriers to entry were, to ensure that folks could be able to work in the semiconductor network. 
And then also using that as a model to say, what around—like, what will we be able to do similar to that model in Boise? And how do we make sure it’s a whole-of-state approach and also kind of a regional approach to invest in these barriers to entry to the semiconductor network? And how can Micron do—Micron play their role in that? And so in the—(inaudible)—in particular, we decided to invest $250 million of that $500 million over the—and then committed to raising the other 150 (million dollars). And the state put in 100 (million dollars), and the locality also put in some of those dollars to ensure that we meet those needs and those barriers. And to be able to make sure that over the course of the next couple of decades, as we implement our project, that we are providing and addressing—whether that’s a skills gap, or a barriers to workforce gap, or providing or investing in childcare or whatnot—to make sure that we’re able to attract talent from across the area. And then also making sure to kind of work with our localities and other localities that are surrounding to make sure that we’re also partnering with them to do the exact same thing, and to replicate that model. And that’s something that we’ve found successful, is that just intentional partnership to make sure that we are kind of building up that next generation of workforce to have those skills that are necessary. But I’ll turn it over to David to talk a little bit about what Intel is doing. SHAHOULIAN: Yeah, thanks, Bo. You know, I don’t want to speak for Micron. I assume this is also true. We sort of take a both-and approach to building up the education ecosystem in across the country, right? I mean, we have national partnerships. You know, like Micron, Intel partnered with NSF. We put in money, along with government money, to create, you know, grant opportunities for schools across the country to apply for if they, you know, wanted to get into the semiconductor space, or they wanted to, you know, up their game in that space. And then both companies, right, we also have regional partnerships, right? Particularly in the communities in which we, you know, build facilities, we dedicate a lot of our effort. Partly because, you know, the reality is with technicians, you know, community colleges are only going to build technician programs for their communities if there are facilities nearby where their community members can work. You know, you don’t see community colleges far from semiconductor spaces actually bringing on semiconductor programs, you know, if there isn’t a job anywhere in in that area for the community members who go to that school. So that is—so that is why we worked really closely with the local community colleges in Oregon, Arizona, New Mexico, now in Ohio, to build programs near the facilities. That said, you know, we are happy to share their certificate programs, the curricula, the—you know, the associate degree program curricula with any community college that that wants to build that. You know, I’ll say we’re also partners with the American Semiconductor Academy, right? Which is, you know, along with the SEMI Foundation is working to try to build curricula that is shared across, you know, all universities so that, you know, again universities, and community colleges, and other educational institutions can basically start or upgrade their semiconductor-related curricula much more easily. 
So I just want to say that, you know, there are—there are both opportunities near where we are, and national opportunities as well. FASKIANOS: Fantastic. So we have a written question from Shawn Neidorf. What is the career path for a person who comes in as a semiconductor processing technician? What does a career in semiconductors look like for a person with an associate’s or less education? And then a related comment/question from Alison Hicks, who is the mayor of Mountain View, a Silicon Valley city and home of Google headquarters. The big thing I hear from constituents regarding barriers to jobs is getting a first job after getting an engineering degree. People tell me there are 100 more applicants for many, if not most, jobs, and they can barely even get interviews. They feel their resumes are being auto-screened out if they don’t have a degree from Stanford, Berkeley, et cetera. So they rarely make it even the first step of the hiring process, let alone getting a job. Can your programming do anything about that? I know engineers who give up and don’t even work in the field. They’re not just applying in the Bay Area. They’re applying throughout the United States. So if you could speak to both of those, that would be great. SHAHOULIAN: Bo, do you want me to go first, or do you want to do it? MACHAYO: You can take it first. SHAHOULIAN: You know, I’ll just go very quickly. So, first of all, you know, at least the engineers in the semiconductor space, particularly electrical engineers, I mean, that the unemployment rate for electrical engineers right now is, I think, at 1 percent. I mean, it is full employment. So we are desperate for talent. (Laughs.) So I’m happy to have a conversation offline. I don’t know whether the engineers you’re speaking to have semiconductor skills or not. But, you know, we have strategic partnerships with many universities across the country. And that goes from the MITs and Berkeleys of the world to, you know, the Arizona States and Oregon States, or, you know, an Ohio State now, where we have two—we have partnerships with Historically Black Colleges and Universities and other MSIs to help build their engineering and computer science programs. And we hire directly from those, and we sponsor undergraduate research and things like that to really kind of build the talent pipeline. I would just say, for technicians, I—you know, the technicians I’ve met love the job, right? It’s a different lifestyle than I think many other jobs, right? It’s like, basically, they do these rotating weeks where they do three days on four days off, or four days on three days off, so you got like three or four days in a row off, and then, you know, they work either 36 or, like, 40-some hours a week in those jobs. They are jobs that, you know, we have—you know, we’re not paying six-figure starting salaries, but we have lots of technicians who do earn, with an associate’s degree or even less, more than six—I mean, you know, over 100,000 (dollars) a year. And that’s just base salary. You know, with us you’re getting stock options, you’re getting annual and quarterly bonuses. So it is, again, a really good life. And we have people with, you know, high school diplomas who are earning over six figures—you know, who are earning six figures. MACHAYO: Yeah so, you know, I’ll add to what David was saying. For us, in terms of what does a career look like, you have your technician pathways, you’ve got your engineering pathways. 
But, you know, holistically for us for to attract this next generation of talent and to also be able to get folks who are looking to transition from an industry and come to Micron, you know, we want to make sure that, you know, the jobs that are available at Micron, are skill-based. And so not necessarily looking at the levels of degrees of what folks have, but to be able to make sure that the skills can easily translate to work at Micron. So for example, you know, we’ve been really successful in this with the veterans community, where we have about a two times higher national average in terms of hiring veterans than kind of other tech companies as well. And so being able to attract those folks, not only because they align with, you know, the skill set that we have, but also the values that Micron has and, you know, the values that are aligned throughout the entire semiconductor industry as well. We also are able to utilize our existing footprint to be able to have folks have the opportunities at different fab locations across the U.S. A great thing that we’ll be able to do is having our, you know, fab in Manassas—in Manassas, Virginia, our R&D site and our new manufacturing fab in Idaho, and then also our four fabs that would be in New York. Having the ability for folks to go from site to site, and to be able to learn the different aspects, both from the kind of legacy fabs to the—to the leading edge as well, on both the R&D. And then also our international footprint as well. And so, we have that—you know, we are looking at this as an opportunity to be able to ensure that we, you know, allow more folks to be a part of the semiconductor industry, but also, you know, making sure that we’re—you know, as we create, you know, the 50,000 jobs in New York, the, you know, 17,000 jobs in in Idaho, looking at it from a regional approach. You know, Intel will be making—has made announcements across the country as well. So have other folks in the semiconductor industry. And so we know it’s going to need to be an all-hands approach that we’ll be able—that, you know, we need to make—think about things as regional, both northwest and northeast, and, you know, making sure that we’re incorporating, you know, everyone to be able to be a part of this industry. And that’s going to be, you know, us working with localities like the ones you’re part of, and the institutions as well, to be able to make sure that we are attracting talent early on, and then also making sure that, you know, we’re addressing, and having, and equipping the skill sets necessary to come and work into the industry. FASKIANOS: Fantastic. The next written question is from Gail Patterson-Gladney, Van Buren County commissioner in Michigan. Where the materials come from for the semiconductors? Are they recycled after use? I do not know much about the semiconductor, but am willing to learn more. Where do I educate myself and community members about programs? VAN SLOUN: Can we go to David for that? And we’ll start with David. SHAHOULIAN: If I had the answers to those questions, I’d be happy to answer them. (Laughs.) I am the workforce policy lead. And so I don’t know about our materials, and I just—yeah. I’m happy to let Bo try to take it. MACHAYO: (Laughs.) Yeah, so from a supplier standpoint, you know, there’s going to be materials suppliers, there’s going to be, you know, chemical suppliers that will be needed for the semiconductor industry to be successful. 
A huge part of that will be, you know, how successful are we going to be—the Microns, the Intels, the Samsungs, the TSMCs of the world, of making sure that we’re investing in building up these fabs that are needed to manufacture folks. And then ultimately the suppliers will need to be able to kind of co-locate around us, and also make sure that we’re equipping those talent—those folks that are going to be at, you know, all of our fabs. And we’ll need all of those suppliers, both chemical and material suppliers, to be effective. And so, you know, those folks are constantly—I’ll speak for Micron, but I think this is probably true for Intel as well—will be at our fabs throughout the duration of our construction phases, and as we get chips out the doors. And are important to kind of continue to make sure that we have the leading-edge chips that are coming out of their facilities. So, you know, happy to—there’s a supplier page on Micron’s site that you’re more than—you’re more than welcome to visit to kind of learn about the suppliers. We’ve been doing webinars both kind of regionally and throughout the state as well, to be able to, you know, talk to folks about what’s going to be needed as we kind of implement our two projects, our two investments in the U.S. FASKIANOS: Thank you. I’m going to take a question from Eno Mondesir, who is executive health officer in the health department in Brockton, Mass. If you can unmute yourself. Q: Good afternoon. I am posing this question perhaps to Bo or to David, or anyone. I wonder if—how do you see AI affecting hiring human subjects? Maybe not now, but maybe two to five years down the road? SHAHOULIAN: Is your question—sorry, you don’t mind, you know, is your question about AI in the hiring process when it comes to screening applicants, for example? Or do you mean AI, you know, potentially replacing— Q: I mean replacing human labor force. SHAHOULIAN: Yeah. Well, let me just say, I mean, I think all of the semiconductor companies see AI as a value-add, right? You know, these are very complex—you know, designing and manufacturing semiconductors is the most difficult human endeavor on the planet, or among them, right? I mean, it is the most complicated process there is. So to the—to the degree that AI can help us perfect chip designs, perfect software and coding that goes with those, you know, discover flaws, those things, you know, those are absolutely beneficial to the industry. You know, at this point in time, we don’t foresee that, you know, really supplanting, you know—(laughs)—our employees, right? I mean, you need workers, again. You know, fabs, right—again, every factory, I just pointed out, creates at least 1,500 to 2,000 jobs. A lot of the work that’s done in the fab is already automated, right? You have robots that move the chips around. The lithography tools, you know, themselves—the etching tools, the chemical layering, you know, all of that happens basically automatically. The work is for, you know, people, right, that is all about maintaining that process, you know, troubleshooting, discovering flaws, tuning the machines. I mean, that work will continue, right? We’re not at a point where that work gets supplanted anytime soon. I don’t know if, Bo, you want to add anything. VAN SLOUN: Bo, do you want to add anything to that? MACHAYO: Yeah, you know, I agree. I think the job—the economic impact and the jobs that we’ve relayed on the figures for our investments in both Boise and New York, we anticipate, you know, remain the same. 
And to make sure—and we know that, you know, AI is an important thing kind of moving forward in the semiconductor industry, and for Micron particularly. You know, memory chips are going to be important for AI, and in that conversation. But really believe and have seen, you know, throughout the globe the economic impact that’s been made from the investment of the semiconductor industry in terms of jobs, both direct and indirect jobs, and believe that would continue. FASKIANOS: Great. So we have a written question—or, comment from David Di Gregorio, who’s an administrator at Tenafly High School, and also as a councilman in Englewood Cliffs. And he wants to work with you all. He’s responsible for engineering and design. So I will share his contact information with you all after this. We have a written—or, sorry, a raised hand from Michael Semenza in the office of Representative Puppolo. If you want to go next, and unmute yourself. There you go. Q: Hello. Good afternoon. Are you able to hear me? FASKIANOS: We can hear you. Yes, we can. Go ahead. Q: OK, great. I apologize. Would you be able to repeat the question real quick? FASKIANOS: Oh, I thought you were asking a question. You had raised your hand? Q: Oh, I don’t know how that happened. I’m sorry. FASKIANOS: Oh, OK. No problem. That’s, you know, technology, it’s sometimes—we’ll go next to Senator Javier Loera Cervantes. Q: Hello? FASKIANOS: Yes, we can hear you. Q: Hi, my name’s Anelli (ph). I’m actually the digital director representing Senator Javier Loera Cervantes from the state of Illinois. First, I’d just like to say thank you to everyone who did come out today, because I know this is a sort of the first step, and taking initiatives to our curriculums, to our districts. We did discuss a lot education. And I just had a quick question. Especially for New York and sort of your approaches to discussing with principals how to bring these initiatives to the schools, when you essentially decide which districts to sort of work with, what does that—what does that approach look like? Do you sort of target low-income communities? Ones that just kind of tend to work more vigilantly with your company? Or just sort of sort of what’s the approach that you take when you want to bring these initiatives and change of curriculums to the districts in New York? MACHAYO: Yeah, so it’s been a kind of an all-hands approach. Obviously, we want to make sure that we are investing in the community in which we are going to be at, but know that especially in New York it’ll be a kind of an all-hands and all-state effort, both kind of central New York, where we’re located, downstate in the city, and then also in Albany, and Buffalo, and Rochester, and really an all-encompassing approach. And so, you know, we both work with the New York State Department of Education and local—our local K-12 superintendents and school systems to be able to make sure that we’re identifying and sharing exactly what is needed in terms of curriculum development, but also how are we spurring the interest of—to make sure that we’re getting a diverse set of employers and workforce, not only to be interested in the semiconductor industry and working directly for Micron, but also for the suppliers and the other indirect jobs that will be associated with Micron that are going to be important for Micron to thrive and succeed there. And so it is working with kind of everyone, and identifying, in New York, you know, a handful of places right now that we can have a prototype. 
And knowing—and then expanding, and knowing, and understanding that this project is going to, you know, take a couple of decades to make sure that we’re—to make sure that we are implementing our project correctly, both kind of in in New York and then also in Boise. And so knowing that it’ll expand, and the partnerships will expand as well throughout the entire state. VAN SLOUN: Irina, are there any more questions? FASKIANOS: Yes. We have a question from Ernest Abrogar, who is the—let’s see, I have lost it—the research specialist at Oklahoma Department of Commerce. How can suppliers to semiconductor manufacturers participate to provide educational or practicum opportunities to those areas that don’t have a major fab facility nearby? VAN SLOUN: David, do you want to—do you want to take a first shot at that? SHAHOULIAN: Sure. Look, I mean, we have suppliers in every state in the union, and the territories as well. So, you know, we partner with our suppliers in many different ways. You know, we work with suppliers, you know, to grow their businesses, to improve their practices, to, you know, ensure compliance, right? And we work with them also on workforce, you know, development strategies as well. You know, we do that. A lot of our suppliers are co-located or near located to our facilities, but a lot of them are not, I guess most are not. And so we are happy to partner them on these efforts. Again, there are—you know, we’re happy to share, you know, the curriculum, the modules, the things that we have designed in partnership with the schools that have been our partners, right? We’re happy to share that with other educational institutions. So if there’s, you know, a curricula or something that you, you know, want to—you know, want to take or modify, you know, or expand on in Oklahoma, you know, we’re happy to assist with that. VAN SLOUN: Great. Bo, you have anything to add? MACHAYO: Yeah, no, I’d share that too. I mean, I think anything that you—anything that you’re doing in Oklahoma, or any state in the country, if you’re focusing on, you know, education and investing in semiconductor education, if you are focusing on, you know, incentives for suppliers in certain states, and are looking to attract that part of the industry, I think, you know, we’d be happy to talk to you and figure out how we can kind of partner together in states—in states that we are currently investing in for the manufacturing side. But understand that, you know, we’ll need to also work with other states to make sure that we have the suppliers and their downstream suppliers that will be helpful for us to be successful. FASKIANOS: So, we have one other question that just came in from council member Anita Barton. Do either of your companies plan to get together with any universities in Penna? SHAHOULIAN: Not sure I understand that. Universities—say the last part? FASKIANOS: Companies plan to get together with any universities in Penna. Maybe Pennsylvania? I’m not— VAN SLOUN: I’m thinking that’s what it is, yeah. FASKIANOS: Yeah, I’m thinking it’s probably Pennsylvania. MACHAYO: So I can take that. I mean, we—so we launched our—along with the NSF director, and Senator Schumer, and our CEO, Sanjay, and, you know, some of our other leadership team, we were able to launch the Northeast University Semiconductor Network. And there are universities that are a part of that network that are based in Pennsylvania. 
And we are kind of—again, understand that it’s going to be a regional approach to be able to attract the semiconductor folks—or, the next generation of semiconductor workforce to work at Micron. And so happy to partner in that way as well. And we also just recently launched a northwest one as well to kind of do the same thing, look at states within our footprint region to be able to make sure that we’re attracting the workforce that’s needed. FASKIANOS: Great. VAN SLOUN: David—(inaudible)—on Pennsylvania, or? SHAHOULIAN: You know, I know that we have been in some conversations with Pennsylvanian institutions. I cannot tell you right now which ones they are, because I have not been part of those conversations. But, you know, given our proximity—the proximity to Ohio, I know that in the western part of the state, there has been some interest. I would just say, again, we are participating with NSF in, you know, ensuring that there is funding available to, you know, schools nationwide. VAN SLOUN: Thanks, David. So I think we only have a few minutes left. And I’m going to turn to Irina to close this out. But I just wanted to say thank you to, you know, Becky, David, Bo. You guys have been fantastic in sharing information that’s going to help, I think, across the entire United States thinking about semiconductors, and the need to build this pipeline and get excitement around this. And I’m really excited to hear about some of the programs you all have going on. So thank you so much. Irina, I’m going to turn to you to close us out here. But thank you for joining us. FASKIANOS: Yes. And thank you all. This is a great hour discussion. We appreciate you taking the time, and for all the great comments and questions. We will be sending out links to the resources that were mentioned. And we will go back to Becky, David and Bo, and Sherry for anything else that they want to include, along with a link to the—this webinar and the transcript. And as always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for more expertise and analysis. And you can also email [email protected] to let us know how CFR can support the important work that you are doing in your communities. So thank you again for joining us today. We appreciate it. VAN SLOUN: Thanks, everyone.
  • Cybersecurity
    Schrödinger’s Hacking Law and Cyber Burnout: Capacity Building in U.S. Cybersecurity
    Recruiting problems in cybersecurity will continue until private and public sector organizations make defenders' mental health a priority and policymakers address the poorly written Computer Fraud and Abuse Act. 
  • Artificial Intelligence (AI)
    AI Meets World, Part Two
    Podcast
    The rapid emergence of artificial intelligence (AI) has brought lawmakers and industry leaders to the same conclusion: regulation is necessary to ensure the technology changes the world for the better. The similarities could end there, as governments and industry clash on what those laws should do, and different governments take increasingly divergent approaches. What are the stakes of the debate over AI regulation?
  • Technology and Innovation
    Reporting on AI and the Future of Journalism
Dex Hunter-Torricke, head of global communications & marketing at Google DeepMind, discusses how AI technology could shape news reporting and the role of journalists, and Benjamin Pimentel, senior technology reporter at the San Francisco Examiner, discusses framing local stories on AI in media. The webinar is hosted by Carla Anne Robbins, senior fellow at CFR and former deputy editorial page editor at the New York Times. TRANSCRIPT FASKIANOS: Thank you. Welcome to the Council on Foreign Relations Local Journalists Webinar. I am Irina Faskianos, vice president for the National Program and Outreach here at CFR. CFR is an independent and nonpartisan membership organization, think tank, publisher, and educational institution focusing on U.S. foreign policy. CFR is also the publisher of Foreign Affairs magazine. As always, CFR takes no institutional positions on matters of policy. This webinar is part of CFR’s Local Journalists Initiative, created to help you draw connections between the local issues you cover and national and international dynamics. Our program aims to put you in touch with CFR resources and expertise on international issues and provides a forum for sharing best practices. Again, today’s discussion is on the record. The video and transcript will be posted on our website after the fact at CFR.org/localjournalists, and we will share the content after this webinar. We are pleased to have Dex Hunter-Torricke, Benjamin Pimentel, and host Carla Anne Robbins to lead today’s discussion on “Reporting on AI and the Future of Journalism.” We’ve shared their bios with you, but I will highlight their credentials here. Dex Hunter-Torricke is the head of global communications and marketing at Google DeepMind. He previously worked in communications for SpaceX, Meta, and the United Nations. He’s a New York Times bestselling ghostwriter and frequent public commentator on the social, political, and organizational challenges of technology. Benjamin Pimentel is a senior technology reporter for the San Francisco Examiner covering Silicon Valley and the tech industry. He has previously written on technology for other outlets, including Protocol, Dow Jones MarketWatch, and Business Insider. He was also a metro news and technology reporter at the San Francisco Chronicle for fourteen years. And in 2022, he was named by Muck Rack as one of the top ten crypto journalists. And finally, Carla Anne Robbins, our host, is a senior fellow for CFR—at CFR, excuse me. She is the faculty director of the Master of International Affairs Program and clinical professor of national security studies at Baruch College’s Marxe School of Public and International Affairs. Previously, she was deputy editorial page editor at the New York Times and chief diplomatic correspondent at the Wall Street Journal. Welcome, all. Thank you for this timely discussion. I’m going to turn it now to Carla to start the conversation, and then we will turn to all of you for your questions and comments. So, Carla, take it away. ROBBINS: Thank you so much, Irina. And thank you so much to you and your staff for setting this up, and to Dex and to Ben for joining us today. You know, I am absolutely fascinated by this topic—fascinated as a journalist, fascinated as an academic. Yes, I spend a lot of time worrying whether my students are using AI to write their papers. So far, I don’t know. So, as Irina said, Dex, Ben, and I will chat for about twenty-five minutes and then throw it open to you all for questions. 
But if you have something that occurs along the way, don’t hold back, and post it, and you know, we will get to you. And we really do want this to be a conversation. So I’d like to start with Ben. I’m sure everyone here has already played with ChatGPT or Bard if they get off the waitlist. I’ve already needled Dex about this. You know, I asked ChatGPT, you know, what questions I should be asking you all today, and I found it sort of thin gruel but not a bad start. But, Ben, can you give us a quick summary of what’s new about this technology, generative AI, and why we need to be having this conversation today? PIMENTEL: Yes. And thank you for having me. AI has been around for a long time—since after the war, actually—but it’s only—you know, November 30, 2022, is a big day, an important date for this technology. That’s when ChatGPT was introduced. And it just exploded in terms of opening up new possibilities for the use of artificial intelligence and also a lot of business interest in it. For journalists, of course, quickly, there has been a debate on the use of ChatGPT for reporting and for running a news organization. And that’s become a more important debate given the revelations and the disclosures of an organization like AP, CNET, and recently even insiders now saying that they’re going to be using AI for managing their paywall or in terms of deciding whether to offer a subscription to a reader or not. For me personally, I think the technology has a lot of important uses in terms of making newsgathering and reporting more efficient/faster. For instance, I come from a—I’m going to date myself, but when I started it was before—when I started my career in the U.S.—I’m from the Philippines—it was in June 1993. That was two months after the World Wide Web became public domain. That’s when the websites started appearing. And around that time, whenever I’m working nights to—you know, that was before websites and before Twitter. To get a sense of what’s going on in San Francisco, especially at night—and I’m working at night—I would have to call every police station, fire department, hospital from Mendocino down to Santa Cruz to get a sense of what’s going on. It’s boring. It’s a thankless job. But it actually helped me. But now you can do that with technology. I mean, you now have sites that can pull from the Twitter feed of the San Francisco Police Department or the San Francisco Fire Department to report, right, on what’s going on. And AI now creates a possibility of actually pulling that information and creating a news report that in the past I would have to do it, like a short 300-word report on, hey, Highway 80 is closed because of an accident. Now you can automate that. The problem that’s become more prominent recently is the use of AI and you don’t disclose it. I was recently in a, you know, panel—on a panel where an editor disclosed—very high on the technology, but then also said, when we asked him are you disclosing it on your site: Well, frankly, our readers don’t care. I disagree vehemently that when you’re—if you’re going to use it, you have to disclose it. Like, if you are pulling information and creating reports on, you know, road conditions or a police action, you have to say that AI created it. And it’s definitely even more so for more—for bigger stories like features or, you know, New Yorker-type of articles. You wouldn’t want—I wouldn’t want to read a New Yorker article and not know that it was done by an AI or by a chatbot. 
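The workflow Pimentel describes above, pulling structured incident updates from a public-safety feed and turning them into short, disclosed briefs, is mostly a templating exercise rather than a generative one. The minimal Python sketch below is illustrative only: the feed URL, field names, and disclosure wording are assumptions, not anything an outlet actually uses.

```python
import requests

# Hypothetical JSON feed of road and safety incidents published by a public agency.
# The URL and field names are assumptions for illustration only.
FEED_URL = "https://example.org/api/incidents.json"

TEMPLATE = (
    "{road} is {status} near {location} because of {cause}. "
    "Crews were dispatched at {time}. Expect delays and seek alternate routes."
)

# The disclosure line travels with every generated brief.
DISCLOSURE = "This brief was generated automatically from agency data and reviewed by an editor."


def fetch_incidents(url: str) -> list[dict]:
    """Pull the latest incident records from the feed."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json().get("incidents", [])


def render_brief(incident: dict) -> str:
    """Fill the template with one incident and append the disclosure line."""
    body = TEMPLATE.format(**incident)
    return f"{body}\n\n{DISCLOSURE}"


if __name__ == "__main__":
    for item in fetch_incidents(FEED_URL):
        print(render_brief(item))
        print("-" * 40)
```

The design point is simply that the disclosure is a constant attached to every generated item, which matches the policy Pimentel argues for, rather than something left to a reporter's memory.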
And then for me personally, I worry about what it means for young reporters, younger journalists, because they’re not going to go through what I went through, which in many ways is a good thing, right? You don’t have to call every police station in a region to get the information. You can pull that. You can use AI to do that. But for me, I worry when editors and writers talk about, oh, I can now write a headline better with AI, or write my lede and nut graf with AI, that’s worrisome because, for me, that’s not a problem for a journalist, right? Usually you go through that over and over again, and that’s how you get better. That’s how you become more critically minded. That’s how you become faster; I mean, even develop your own voice in writing a story. I’ll stop there. ROBBINS: I think you’ve raised a lot of important questions which we will delve into some more. But I want to go over to Dex. So, Dex, can you talk a little bit more about this technology and what makes it different from other artificial intelligence? I mean, it’s not like this is something that suddenly just we woke up one day, it was there. What makes generative AI different? HUNTER-TORRICKE: Yeah. I mean, I think the thing about generative AI which, you know, has really, you know, wowed people has been the ability to generate content that seems new. And, obviously, how generative AI works—and we can talk much more about that—a lot of what it’s creating is, obviously, based on things that exist out there in the world already. And you know, the knowledge that it’s presenting, the content that it’s creating is something that can seem very new and unique, but, obviously, you know, is built on training from a lot of previous data. I think when you experience a generative AI tool, you’re interacting with it in a very human kind of way—in a way that previous generations of technology haven’t necessarily—(audio break). You’re able to type in natural language prompts; and then you see on many generative AI tools, you know, the system thinking about how to answer that question; and then producing something very, very quickly. And it feels magical in a way that, you know, certainly—maybe I’m just very cynical having spent so long in the tech industry, but you know, certainly I don’t think lots of us feel that way about a lot of the tools that we take for granted. This feels qualitatively different from many of the current systems that we have. So I think because of that, you know, over the last year, as generative AI—(audio break)—starts to impact on a lot of different knowledge-type industries and professions. And of course, you know, the media industry is, you know, one of those professions. I think, you know, lots of reporters and media organizations are obviously thinking not just how can I use generative AI and other AI tools as part of my work today, but what does this really mean for the profession? What does this mean for the industry? What does this mean for the economics over the long term? And those are questions that, you know, I think we’re all still trying to figure out, to an extent. ROBBINS: So I want to ask you—you know, let’s talk about the good for a while, and then we get into the bad. So, you know, I just read a piece in Nieman Reports, which we’ll share with everybody, that described how a Finnish newspaper, Yle, is using AI to translate stories into Ukrainian, because it’s now got tens of thousands of people displaced by the war.
The bad news, at least for me, is Buzzfeed started out using AI to write its quizzes, which I personally didn’t care much about, and then said but that’s all we’re going to use it for. But then it took a nanosecond and then it moved on to travel stories. Now, as a journalist, I’m worried—I mean, as it is the business is really tight. Worried about displacement. And also about—you know, we hear all sorts of things. But we can get into the bad in a minute.  You know, if you were going to make a list of things that didn’t make you nervous, that, you know, Bard could do, that ChatGPT could do, that makes it—you know, that you look at generative AI and you say, well, it’s a calculator. You know, we all used to say, oh my God, you know, nobody’s ever going to be able to do a square root again. And now everybody uses a calculator, and nobody sits around worrying about that. So I—just a very quick list. You know, Ben, you’ve already talked about, you know, pulling the feed on traffic and all of that. You know, give us a few things that you really think—as long as we disclose—that you think that this would really be good, particularly for, you know, cash-strapped newsrooms, so that we could free people up to do better work? And then, Dex, I’m going to ask you the same question. PIMENTEL: City council meetings. I mean, I started my career— ROBBINS: You’re going for the boring first. PIMENTEL: Right, right. School board meetings. Yeah, it’s boring, right? That’s where you start out. That’s where I started out. And, if—I mean, I’m sort of torn on this, because you can use ChatGPT or generative AI to maybe present the agenda, right? The agenda for the week’s meeting in a readable, more easily digestible manner, instead of having people go to the website and try to make sense of it. And even the minutes of the meeting, right, to present it in a way that here’s what happened. Here’s what they decided. I actually look back—you know, like you said, and like I said, it’s boring. But it’s valuable. For me, the experience of going through that process and figuring out, OK, what did they decide? Trying to reach out to the councilman, OK, what did you mean—I mean, to go deeper, right? But at the same time, given the budget cuts, I would allow—I would accept a newsroom that decides, OK, we’re going to use ChatGPT to do summaries of these things, but we’re going to disclose it. I think that’s perfectly—especially for local news, which has been battered since the rise of the web.  I mean, I know this because I work for the Chronicle and I work in bureaus in the past. So that’s one positive thing, aside from, you know, traffic hazard warning. That it may take a human reporter more time. If you automate it, maybe it’s better. It’s good service to the community.  ROBBINS: Dex, you have additions to the positive list? Because we’re going to go to the negative next.  HUNTER-TORRICKE: Yeah, absolutely. I mean, look, I think that category of stuff which, you know, Ben might talk about as boring, you know, but certainly, I would say, is useful data that just takes a bunch of time to analyze and to go through, that’s where AI could be really, really valuable. You know, providing, you know, analysis, surfacing that data. Providing much broader context for the kinds of stories that reporters are producing. Like, that’s where I see systems that are able to parse through a lot of data very quickly being incredibly valuable. 
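Pimentel's council-meeting idea is also easy to prototype responsibly. The sketch below is only illustrative: call_llm is a placeholder for whichever model API a newsroom has actually vetted, and the prompt, file name, and disclosure wording are assumptions rather than anyone's production setup. What it tries to show is that the disclosure he insists on can be built into the pipeline itself.

```python
from pathlib import Path

# Disclosure appended to every AI-assisted summary before publication.
DISCLOSURE = "This summary was drafted with an AI tool and reviewed by newsroom staff."

PROMPT = (
    "Summarize the following city council minutes in plain language. "
    "List each item that was decided, the vote, and what happens next:\n\n{minutes}"
)


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (a vendor API, a local model, etc.).
    Replace the body with whatever client library your newsroom has vetted."""
    raise NotImplementedError("Wire this up to your model provider of choice.")


def summarize_minutes(path: str) -> str:
    """Read the raw minutes, ask the model for a summary, and attach the disclosure."""
    minutes = Path(path).read_text(encoding="utf-8")
    summary = call_llm(PROMPT.format(minutes=minutes))
    return f"{summary}\n\n{DISCLOSURE}"


if __name__ == "__main__":
    # Hypothetical file name; the script fails loudly until call_llm is implemented.
    print(summarize_minutes("council_minutes.txt"))
```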
You know, that’s going to be something that’s incredibly useful for identifying local patterns, trends of interest that you can then explore further in more stories. So I think that’s all a really positive piece. You know, the other piece is just around, you know, exposing the content that local media is producing to a much wider audience. And there, you know, I could see potential applications where, you know, AI is, you know, able to better transcribe and translate local news. You know, you mentioned the Ukrainian example, but certainly I think there’s a lot of, you know, other examples where outlets are already using translation technology to expose their content to a much broader and global audience. I think that’s one piece. You know, also thinking about how do you make information more easily accessible so that, you know, this content then has higher online visibility. You know, every outlet is, you know, desperately trying to, you know, engage its readers and expose, you know, a new set of readers to their content. So I think there’s a bunch of, you know, angles there as well. ROBBINS: So let’s go on to the negative, and then we’re going to pass it over because I’m sure there’s lots of questions from the group. So, you know, we’ve all read about the concerns about AI and disinformation. There have been two recent reports, one by NewsGuard and another by ShadowDragon that found that AI-created sites and AI-created content, filled with fabricated events, hoaxes, dangerous medical advice, you’ve got that on one hand. So there was already, you know, already an enormous amount of disinformation and bias out there. You know, how does AI make this worse? And do we have any sense of how much worse? Is it just because it can shovel a lot more manure faster? Or is there something about it that makes this different? Ben? PIMENTEL: I mean, as Dex said, generative AI allows you to create content that looks real, like it was created by humans. That’s sort of the main thing that really changes everything. We’ve been living with AI for a number of years—Siri, and Cortana, and all that. But when you listen to them, you know that it’s not human, right? Eventually you will have technologies that will sound human, and you can be deceived by it. And that’s where the concern about disinformation comes up.  I mean, hallucinations is what they call it in terms of they’re going to present you—I don’t know if you ever search yourself on ChatGPT, and they spit out a profile that’s really inaccurate, right? You went to this university or what. So that’s a problem. And the thing about that, though, is the more data it consumes, it’ll get better. That’s sort of the worrisome, but at the same time positive, thing. Eventually all these things will be fixed. But at the same time, you don’t know what kind of data they’re using for these different models. And that’s going to be a major concern.  In terms of the negative—I mean, like I said, I mentioned the training of journalists is a concern to me. I mean, I mentioned certain things that are boring, but I think—I also wonder, so what happens to journalists if they don’t go through that? If they already go to a certain level because, hey, ChatGPT can take care of that so you don’t have to cover a city council meeting? Which, for me, was a positive experience. I mean, I hated that I was doing it, but eventually looking back that was good. I learned how to talk to a city politician. I learned to pick up on whether he’s lying to me or not. 
And that enables me to create stories later on in my career that’re more analytical, you know, more nuanced, more sensitive to the needs of my readership.  Another thing is in journalism we know there is no such thing as absolute neutrality, right? Even and especially analytical stories, your point of view will come up. And that brings up the question, OK, what point of view are we presenting if you have ChatGPT write those stories? Especially the most analytical ones, like features, a longer piece that delves into a certain problem in the community and tries to explore it. I worry that you can’t let ChatGPT or an AI program do that without questioning whether, OK, what’s the data that is the basis of this analysis, of this perspective? I’ll stop there. ROBBINS: So, Dex, jump in anywhere on this, but I do have a very specific technical thing. Not that I want to get into this business but, you know, I’ve written a lot in the past about disinformation. And it’s one thing for hallucinations, where they’re just working with garbage in so you get garbage out, which is—and you certainly saw that in the beginning with Wikipedia, which has gotten better with crowdsourcing over time. But from my understanding of these reports from NewsGuard and ShadowDragon, that there were people who were malevolently using AI to push out bad information. So is this—how is generative AI making that easier than what we just had before? HUNTER-TORRICKE: I mean, I think the main challenge here is around how compelling a lot of this content seems, compared to what came before, right? So, you know—you know, I think Ben spoke to this—you know, a lot of this stuff isn’t exactly news. AI itself has been around for a long time. And we then had manifestations of these challenges for quite a long time with the entire generation of social media technology. So like deepfakes, like that’s something we’ve been talking about for years. The thing about deepfakes which made it such an interesting debate is that for years every time we talked about deepfakes, everyone knew exactly what a deepfake was because they were so unconvincing. You know—(audio break)—exactly what was a deepfake and what wasn’t. Now, it’s very different because of the quality of the experience.  So, you know, a few weeks ago you may have seen there was a picture that was trending on Twitter of the pope wearing a Balenciaga jacket. And for about twenty-four hours, the internet was absolutely convinced that the pope was rocking this $5,000 jacket that was, like, perfectly color coordinated. And, you know, it was a sort of—you know, it was a funny moment. And of course, it was revealed that it had been generated using an AI. So no harm done, I guess. But, like, it was representative of how—(audio break)—are being shared. Potentially it could have very serious implications, you know, when they are used by bad actors, you know, as you described, you know, to do things that are much more nefarious than simply, you know, sharing a funny meme. One piece of research I saw recently which I thought was interesting, and it spoke to what some of these challenges might look like over time, I believe this was from Lancaster University. It compared how trustworthy AI-generated faces of people were compared to the faces of real humans. And it found that actually amongst the folks they surveyed as part of this research, that faces of AI-generated humans were rated 8 percent more trustworthy than actual humans. 
And, you know, I think, again, it’s a number, right, that, you know, I think a lot of people laugh at because, you know, we think oh, well, you know, that’s kind of funny and—(audio break)—of course, I can tell the difference between humans and AI-generated people. You know, I’m—(audio break)—were proved wrong when they actually tried to detect the differences themselves. So I do think there’s going to be an enormous number of challenges that we will face over the coming years. These are issues that, you know, certainly on the industry side, you know, I think lots of us are taking very seriously, certainly governments and regulators are looking at. Part of the solution will have to be other technologies that can help us parse the difference between AI-generated content and stuff that isn’t. And then part of that, I think, will be human solutions. And in fact, that may actually be the largest piece, because, of course, what is driving disinformation are a bunch of societal issues. And it’s not always going to be as simple as saying, oh, another piece of technology will fix that. ROBBINS: So I want to turn this over to the group. And I’ve got lots more questions, but I’m sure the group has—they’re journalists. They’ve got lots of questions. So the first question is from Phoebe Petrovic. Phoebe, can—would you like to ask your question yourself? Or I can read it, but I always love it when people ask their own questions. Q: Oh, OK. Hey, everyone. So, I was curious about how we might—just given all the reporting that’s been done about ChatGPT and other AI models hallucinating information, faking citations to Washington Post articles that don’t exist, making fake—totally make up research article citations that do not exist, how can we ethically or seriously recommend that we use generative AI for newsgathering purposes? It seems like you would just have to factcheck everything really closely, and then you might as well have done the job to begin with and not get into all these ethical implications of, like, using a software that is potentially going to put a lot of us out of business?  ROBBINS: And Phoebe, are you—you’re at Wisconsin Watch, right? Q: Mmm hmm. And we have a policy that we do not—at this point, that none of us are going to be using AI for any of our newsgathering purposes. And so that’s where we are right now. But I just wonder about the considerable hallucination aspect for newsgathering, when you’re supposed to be gathering the truth. ROBBINS: Dex, do you want to talk a little bit about hallucinations? HUNTER-TORRICKE: Yeah, absolutely. So I think, you know, Phoebe has hit the nail on the head, right? Like, that there are a bunch of, you know, issues right now with existing generative AI technology. You do have to fact-check and proof absolutely everything. So it is—it is something that—you know, it won’t necessarily save you lots of time if you’re looking to just generate, you know, content. I think there are two pieces here which, you know, I think I would focus on.  One is, obviously, the technology is advancing rapidly. So these are the kinds of issues which I expect with future iterations of the technology we will see addressed by more sophisticated models and tools. So absolutely today you’ve got all those challenges. That won’t necessarily be the case over the coming years. I think the second piece really is around thinking what’s the value of me experimenting with this technology now as a journalist and as an organization? 
It isn’t necessarily to think, oh, I can go and, you know, replace a bunch of fact-heavy lifting I have to do right now as a reporter. I think it’s more becoming fluent with what are the things that generative AI might conceivably be able to do that can help integrate into the kind of work that you’re doing?  And I expect a lot of what I think reporters and organizations generally will use generative AI for over the coming years, will actually—to be doing some of the things that I talked about, and that Ben talked about. You know, it’s corralling data. It’s doing analysis. It’s being more of a researcher rather than as a co-writer, or entirely taking over that writing. I really see it as something that’s additive and will really augment the kind of work that reporters and writers are doing, rather than replacing it. So if you do it from that context and, you know, obviously, you know, it does depend on you experimenting to see what are all the different applications in your work, then I think that might lead to very different outcomes. ROBBINS: So we have another question, and we’ll just move on to that. And of course, Ben, you can answer any question you want at any time. So— PIMENTEL: Can I add something on that? It’s almost like the way the web has changed reporting. In the past, like, I covered business. To find out how many employees a company has or when it was founded, I would have to call the PR department or the media rep. Now I can just go quickly to the website, where they have all the facts about the company. But even so, I still double check if that’s updated information. I even go to the FCC filings to make sure. So I see it as that kind of a tool, the way the web—or, like, when you see something on Wikipedia, you do not use that as a source, right? You use that as a starting point to find other sources. ROBBINS: So Charles Robinson from Maryland Public Television. Charles, do you want to ask your question? Q: Sure. First of all, gentlemen, appreciate this. I’m working on a radio show on ChatGPT and AI. And one of the questions that I’ve been watching in this process is the inability of AI and ChatGPT to get the local nuances of a subject matter, specifically reporting on minority communities. And, Ben, I know you being out in San Francisco, there’s certain colloquialisms in Filipino culture that I wouldn’t get if I didn’t know it. Whereas, like, to give you an example, there’s been a move to kind of, like, homogenize everybody as opposed to getting the colloquialisms, the gestures, and all of that. And I can tell you, as a Black reporter, you know, it’s the reason why I go into the field because you can’t get it if all I do is read whatever someone has generated out there. Help me understand. Because, I’m going to tell you, I write a specific blog on Black politics. And I’m going to tell you, I’m hoping that ChatGPT is not watching me to try and figure out what Black politics is. ROBBINS: Ben. PIMENTEL: I mean, I agree. I mean, when I started my career, the best—and I still believe this—the best interviews are face-to-face interviews, for me. We get more information on how people react, how people talk, how they interact with their surroundings. Usually it’s harder to do that if you’re, you know, doing a lot of things. But whenever I have the opportunity to report on—I mean, I used to cover Asian American affairs in San Francisco. You can’t do that from a phone or a website. You have to go out into the community.
And I cover business now, which is more—you know, I can do a lot of it by Zoom. But still, if I’m profiling a CEO, I’d rather—it’d be great if I could meet the person so that I can read his body language, he can react to me, and all that. In terms of the nuances, I agree totally. I mean, it’s possible that ChatGPT can—I mean, as we talked about—what’s impressive and troubling about this technology is it can evolve to a point where it can mimic a lot of these things. And for journalism, that’s an issue for us to think about because, again, how do you deal with a program that’s able to pretend that it’s, you know, writing as a Black person, or as a Filipino, or as an Asian American? Which, based on the technology, eventually it can. But do we want that kind of reporting and journalism that’s not based on more human interactions? ROBBINS: So thank you for that. So Justin Kerr who’s the publisher of the McKinley Park News—Justin, do you want to ask your question? Q: Yes. Yes. Thank you. Can folks hear me OK? ROBBINS: Absolutely. Q: OK. Great. So I publish the McKinley Park News, which is, I call it, a micro-local news outlet, focusing on a single neighborhood in Chicago. And it’s every beat in the neighborhood—crime, education, events, everything else. And it’s all original content. I mean, it’s really all stuff that you won’t find anywhere else on the internet, because it’s so local and, you know, there’s news deserts everywhere. A handful of weeks ago, I discovered through a third party that seemingly the entirety of my website had been scraped and included in these large language models that are used to power ChatGPT, all of these AI services, et cetera.  Now, this is in spite of the fact that I have a terms of service clearly linked up on every page of my website that expressly says: Here are the conditions that anyone is allowed to access and use this website—which is, you know, for news consumers, and no other purpose. And I also list a bunch of expressly prohibited things that, you know, you cannot access or use our website for. One of those things is to inform any large language model, algorithm, machine learning process, et cetera, et cetera, et cetera.  Despite this, everything that I have done has been taken from me and put into these large language models that are then used in interfaces that I see absolutely no benefit from—interfaces and services. So when someone interacts with the AI chat, they’re going to get—you know, maybe they ask something about the McKinley Park neighborhood of Chicago. They’re not—you know, we’re going to be the only source that they have for any sort of realistic or accurate answer. You know, and when someone interacts with a chat, I don’t get a link, I don’t get any attention, I don’t get a reference. I don’t get anything from that.  Not only that, these companies are licensing that capability to third parties. So any third party could go and use my expertise and content to create whatever they wanted, you know, leveraging what I do. As a local small news publisher, I have absolutely no motivation or reason to try to publish local news, because everything will be stolen from me and used in competing interfaces and services that I will never get a piece of. Not only that, this— ROBBINS: Justin, we get—we get the—we get the point. Q: I guess I’m mad because you guys sit up here and you’re using products and services, recommending products and services without the—without a single talk about provenance, where the information comes from. 
ChatGPT doesn’t have a license to my stuff. Neither do you. ROBBINS: OK. Q: So please stop stealing from me and other local news outlets. That’s—and how am I supposed to—my question is, how am I supposed to operate if everything is being stolen from me? Thank you very much. ROBBINS: And this is a—it’s an important question. And it’s an important question, obviously, for a very small publisher. But it’s also an important question for a big publisher. I mean, Robert Thomson from News Corp is raising this question as well. And we saw what—we saw what the internet did to the news business and how devastating it’s been. So, you know, it’s life and death—life and death for some—life and death for a very small publisher, but it’s very much life and death for big publishers as well. So, Dex, this goes over to you. HUNTER-TORRICKE: Yeah, sure. I mean, I think—you know, obviously I can’t comment on any, you know, specific website or, you know, terms and conditions on a website. You know, I think, you know, from the DeepMind perspective, I think we would say that, you know, we believe that training large language models using open web content, you know, creates huge value for users and the media industry. You know, it leads to the creation of more innovative technologies that will then end up getting used by the media, by users, you know, to connect with, you know, stories and content. So actually, I think I would sort of disagree with that premise. I think the other piece, right, is there is obviously a lot of debate, you know, between different, you know, interests and, you know, between different industries over what has been the impact of the internet, you know, on, you know, the news industry, on the economics of it. You know, I think, you know, we would say that, you know, access to things like Google News and Google Search has actually been incredibly powerful for, you know, the media industry. You know, there’s twenty-four, you know, billion visits to, you know, local news outlets happening every month through Google Search and Google News. You know, there’s billions of dollars in ad revenue being generated by the media industry, you know, through having access to those platforms. You know, I think access to AI technologies will create similar opportunities for growth and innovation, but it’s certainly something which I think, you know, we’re very, very sensitive to, you know, what will be the impacts on the industry. Google has been working very, very closely with a lot of local news outlets and news associations, you know, over the years. We really want to have a strong, sustainable news ecosystem. That’s in all of our interest. So it’s something that we’re going to be keeping a very close eye on as AI technology continues to evolve. ROBBINS: So is—other than setting up a paywall, how does—how do news organizations, you know, protect themselves? And I say this as someone who sat on the digital strategy committee at the New York Times that made this decision to put up a paywall, because that was the only way the paper was going to survive. So, you know, yes, Justin, I understand that paywalls or logins kill your advertising revenue potential. But I am—yes, and we had that debate as well. And I understand the difference between your life and the life of the New York Times. Nevertheless, Justin raises a very basic question there. Is there any other way to opt out of the system? I mean, that’s the question that he’s asking, Dex. Is there?
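One partial, voluntary measure publishers in Kerr's position have reached for is a robots.txt directive aimed at known AI-training crawlers. The snippet below is a sketch of the idea, not a guarantee: the user-agent tokens shown are examples that may change over time, compliance depends entirely on the crawler operator, and it does nothing about content that has already been ingested.

```
# robots.txt: ask known AI-training crawlers not to fetch this site.
# Compliance is voluntary; this is not a technical or legal guarantee.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary search crawlers remain allowed.
User-agent: *
Allow: /
```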
HUNTER-TORRICKE: Well, you know, I think what that system is, right, is still being determined. Generative AI is, you know, in its infancy. We obviously think it’s, you know, incredibly exciting, and it’s something that, you know, all of us—(audio break)—today to talk about it. But the technology is still evolving. What these models will look like, including what the regulatory model will look like in different jurisdictions, that is something that is shifting very, very quickly. And, you know, these are exactly the sorts of questions, you know, that we as an industry—(audio break)—is a piece which, you know, I’m sure the media industry will also have a point of view on these things.  But, in a way, it’s sort of a difficult one to answer. And I’m not deliberately trying to be evasive here with a whole set of reporters. You know, we don’t yet know what the full impacts really will be, with some of the AI technologies that have yet to be invented, for example. So this is something where it’s hard to say this is a definitively, like, model that is going to produce the greatest value either for publishers or for the industry or for society, because we need to actually figure out how that technology is going to evolve, and then have a conversation about this. And different, you know, communities, different markets around the world, will also have very different views on what’s the right way, you know, to protect the media industry, while also ensuring that we do continue to innovate? So that’s really how I’d answer at this stage. ROBBINS: So let’s move on to Amy Maxmen, who is the CFR Murrow fellow. Amy, would you like to ask your question? Q: Yeah. Hi. Can you hear me? ROBBINS: Yes. Q: OK, great. So I guess my question actually builds on, you know, what the discussion is so far. And part of my thought for a lot of the discussion here and everywhere else is about, like, how AI could be helpful or hurtful in journalism. And I kind of worry how much that discussion is a bit of a distraction. Because, I guess, I have to feel like the big use of AI for publishers is to save money. And that could be by cutting salaries further for journalists, and cutting full-time jobs that have benefits with them. Something that kind of stuck with me was that I heard another—I heard another talk, and the main use of AI in health care is in hospital billing departments to deny claims. At least, that’s what I heard. So it kind of reminds me that, you know, where is this going? This is going for a way for administrators and publishers to further cut costs.  So I guess my point is, knowing that we would lose a lot if we cut journalists and kind of just—you know, and cut editors, who really are needed to be able to make sure that the AI writing isn’t just super vague and unclear. So I would think the conversation might need to shift away from the good and the bad of AI, to actually, like, can we figure out how to fund journalists still, so that they use AI like a tool, and then also to make sure that publishers aren’t just using it to cut costs, which would be short-sighted. Can you figure out ways to make sure that, you know, journalists are actually maybe paid for their work, which actually is providing the raw material for AI? Basically, it’s more around kind of labor issues than around, like, is AI good or bad? HUNTER-TORRICKE: I think Amy actually raises, you know, a really important, you know, question about how we think conceptually about solving these issues, right? 
I actually really agree that it’s not really about whether AI is good or bad. That’s part of the conversation and, like, what are the impacts? But this is a conversation that’s about the future of journalism. You know, when social media came along, right, there were a lot of people who said, oh, obviously media organizations need to adapt to the arrival of social media platforms and algorithms by converting all of their content into stuff that’s really short form and designed to go viral.  And, you know, that’s where you had—I mean, without naming any outlets—you had a bunch of stuff that was kind of clickbaity. And what we actually saw is that, yeah, that engaged to a certain extent, but actually people got sick of that stuff, like, pretty quickly. And the pendulum swung enormously, and actually you saw there was a huge surge in people looking for quality, long-form, investigative reporting. And, you know, I think quality journalism has never been in so much demand. So actually, you know, even though you might have thought the technology incentivized and would guide the industry to one path, actually it was a very different set of outcomes really were going to succeed in that world.  And so I think when we look at the possibilities presented by technology, it’s not as clear-cut as saying, like, this is the way the ecosystem’s going to go, or even that we want it to go that way. I think we need to talk about what exactly are the principles of good journalism at this stage, what kind of environment do we want to have, and then figure out how to make the technology support that. ROBBINS: So, Ben, what do you think in your newsroom? I mean, are the bosses, you know, threatening to replace a third of the—you know, a third of the staff with our robot overlords? I promised Dex I would only say that once. Do you have a guild that’s, you know, negotiating terms? Or you guys are—no guild? What’s the conversation like? And what are you—you know, what are the owners saying? PIMENTEL: I mean, we are so small. You know, the Examiner is more than 150 years old, but it’s being rebuilt. It’s essentially just a two-year-old organization. But I think the point is—what’s striking is the use of ChatGPT and generative AI has emerged at a time when the media is still figuring out the business model. Like I said, I lived through the shift from the pre-website world, World Wide Web world, to—and after, which devastated the newspaper industry. I mean, I started in ’93 with the year that the website started to emerge. Within a decade, my newspaper back then was in trouble. And we’re still figuring it out. Dex mentioned the use of social media. That’s what led to the rise of Buzzfeed News, which is having problems now. And there are still efforts to figure out, OK, how do we—how do we make this a viable business model? The New York Times and more established newspapers have already figured out, OK, a paywall works. And that works for them because they’re established, they’re credible, and there are people who are willing to pay to get that information. So that’s an important point. But for others, the nonprofit model is becoming also a viable alternative in many cases. Like, in San Francisco there’s an outlet called Mission Local, actually founded by a professor of mine at Berkeley. Started out as a school project, and now it’s a nonprofit model, covering the Mission in a very good way. And you have other experiments. 
And what’s interesting is, of course, ChatGPT will definitely be used by—you know, as you said—at a time when there’s massive cuts in newsroom, they’re already signaling that they’re going to use it. And I hope that they use it in a responsible way, the way I explained it earlier. There are—there are important uses for it, for information that’s very beneficial to the community that can be automated. But beyond that, that’s the problem. I think that’s the discussion that the industry is still having. ROBBINS: So, thank you. And we have a lot of questions. So I’m going to ask—I’m going to go through them quickly. Dan MacLeod from the Bangor Daily News—Dan, do you want to ask your question? And I think I want to turn it on you, which is why would you use it, you know, given how committed you are and your value proposition, indeed, is local and, you know, having a direct relationship between reporters and local people? Q: Hi. Yeah. Yeah, I mean, that’s really my question. We have not started using it here. And the big kind of question for us is that the thing that, you know, we pride ourselves on, the thing our audience tells us that it values about us, is that we understand the communities we serve, we’re in them, you know, people recognize the reporters, they have, like, a pretty close connection with us. But this also seems to be, like, one of those technologies that is going to do to journalism what the internet did twenty-five years ago. And it’s sort of, like, either figure it out or, you know, get swept up. Is there anything that local newsrooms can do to leverage it in a way that maintains its—this is a big question—but sort of maintains its sort of core values with its audience?  My second question is that a lot of what this seems to be able to do, from what I’ve seen so far, promises to cut time on minor tasks. But is there anything that it can do better than, like, what a reporter could do? You know, like a reporter can also back—like, you know, research background information. AI says, like, we can do it faster and it saves you that time. Is there anything it can do sort of better? ROBBINS: Either of you?  HUNTER-TORRICKE: Yeah, so—yeah, go ahead. Sorry, go ahead, Ben. PIMENTEL: Go ahead. Go ahead, please. HUNTER-TORRICKE: Sure. So one example, right? You know, I’ve seen—(audio break)—using AI to go and look through databases of sport league competitions. So, you know, one, you know, kind of simple example is looking at how sport teams have been doing in local communities, and then working out, by interpreting the data, what are interesting trends of sport team performance. So you find out there’s a local team that just, you know, won top of its league, and they’ve never won, you know, in thirty years. Suddenly, like, that’s an interesting nugget that can then be developed into a story. You’ve turned an AI into something that’s actually generating interesting angles for writing a story. It doesn’t replace the need for human reporters to go and do all of that work to turn it into something that actually is going to be interesting enough that people want to read it and share it, but it’s something where it is additive to the work of an existing human newsroom. And I honestly think, like, that is the piece that I’m particularly excited about. You know, I think coming from the AI industry and looking at where the technology is going, I don’t see this as something that’s here to replace all of the work that human reporters are doing, or even a large part of it. 
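Hunter-Torricke's sports example is concrete enough to sketch without any generative model at all. Given a simple table of historical standings, a short script can surface the "first title in thirty years" angle for a reporter to chase; the CSV layout, column names, and threshold below are assumptions for illustration.

```python
import csv
from collections import defaultdict

# Assumed CSV layout, one row per team per season:
# season (int), league (str), team (str), finish (int, 1 = champion)


def load_standings(path: str) -> list[dict]:
    """Read the standings table into a list of typed rows."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            {**row, "season": int(row["season"]), "finish": int(row["finish"])}
            for row in csv.DictReader(f)
        ]


def title_drought_leads(rows: list[dict], min_gap: int = 30) -> list[str]:
    """Flag champions whose previous title was at least `min_gap` seasons ago."""
    titles = defaultdict(list)  # (league, team) -> seasons in which it finished first
    for row in rows:
        if row["finish"] == 1:
            titles[(row["league"], row["team"])].append(row["season"])

    leads = []
    for (league, team), seasons in titles.items():
        seasons.sort()
        latest, earlier = seasons[-1], seasons[:-1]
        if not earlier:
            leads.append(f"{team} ({league}) won in {latest}: first title on record.")
        elif latest - earlier[-1] >= min_gap:
            leads.append(f"{team} ({league}) won in {latest}: first title since {earlier[-1]}.")
    return leads


if __name__ == "__main__":
    # Hypothetical file name for a locally maintained standings table.
    for lead in title_drought_leads(load_standings("local_league_standings.csv")):
        print(lead)
```

The output is a list of story leads, not finished copy; a reporter still has to confirm the record and do the interviews, which is the division of labor both panelists describe.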
Because being a journalist and, you know, delivering the kind of value that a media organization delivers, is infinitely more complex, actually, than the stuff that AI can deliver today, and certainly for the foreseeable future. Journalists do something that’s really, really important, which is they build relationships with sources, they have a ton of expertise, and that local context and understanding of a community. Things that AI is, frankly, just not very good at doing right now. So I think the way to think about AI is as a tool to support and enhance the work that you’re doing, rather than, oh, this something that can simply automate away a bunch of this. ROBBINS: So let’s—Lici Beveridge. Lici is with the Hattiesburg American. Lici, do you want to ask your question? Q: Sure. Hi. I am a full-time reporter and actually just started grad school. And the main focus of what I want to study is how to incorporate artificial intelligence into journalism and make it work for everybody, because it’s not going to go away. So we have to figure out how to use it responsibly. And I was just—this question is more for Benjamin. Is there any sort of—I guess, like a policy or kind of rules or something of how you guys approach the use of, like, ChatGPT, or whatever, in your reporting? I mean, do you have, like, a—we have to make sure we disclose the information was gathered from this, or that sort of thing? Because I think, ethically, is how we’re going to get to use this in a way that will be accepted by not just journalists, but by the communities—our communities. PIMENTEL: Yes. Definitely. I think that’s the basic policy that I would recommend and that’s been recommended by others. You disclose it. That if you’re using it in general, and maybe on specific stories. And just picking up on what Dex said, it can be useful for—we used to call it computer-assisted reporting, right? That’s what the web and computers made easier, right? Excel files, in terms of processing and crunching data, and all that, and looking for information.  What I worry about, and what I hope doesn’t happen, is—to follow up on Dex’s example—is, you know, you get a—it’s a sports event, and you want to get some historical perspective, and maybe you get the former record holders for a specific school, or whatever. And that’s good. The ChatGPT or the web helps you find that out. And then instead of finding those people and maybe doing an interview for profiles or your perspective, you could just ask ChatGPT, can you find their Instagram feed or Twitter feed, and see what they’ve said? And let the reporting end there. I mean, I can imagine young reporters will be tempted to do that because it’s easier, right? Instead of—as Dex said, it’s a tool as a step towards getting more information. And the best information is still going face-to-face with sources, or people, or a community. Q: Yeah. Because I know, like, I was actually the digital editor when—for about fifteen years. And, you know, when social media was just starting to come out. And everything was just, you know, dive into this, dive into that, without thinking of the impact later on. And as we quickly discovered, you know, things like we live in a place where there’s a lot of hurricanes and tornadoes. So we have people creating fake pictures of hurricanes and tornadoes. And, you know, they were submitting as, you know, user-generated content, which it wasn’t. It was all fake stuff. 
So, you know, we have to—I just kind of want to, like, be able to jump in, but do it with a lot of caution. PIMENTEL: Definitely, yes. ROBBINS: Well, you know, I thought Ben’s point about Wikipedia is a really interesting one, which is any reporter who would use Wikipedia as their sole source for a story, rather than using it as a lead source, you know, I’d fire them. But it is an interesting notion of —do you use this as a lead source, knowing that it makes errors, knowing that it’s lazy, knowing that it’s just a start, versus—and that is a—you know, that’s not even ethics. That’s your just basic sort of the rule that we also have to do inside the newsroom, which then to me raises a question for Dex, which is do we have any sense of how often—you know, this term of hallucinations. I mean, how often does it make mistakes right now? Do you have a sense of with Bard how often it makes mistakes? Certainly everybody has stories of fake sources that have showed up, errors that have showed up. Do we have a sense of how reliable this is? And, like, my Wikipedia page has errors in it, and I’ve never even fixed it because I find it faintly bemusing, because they’re really minor errors.  HUNTER-TORRICKE: Right, yeah. I mean, I don’t have any data points to hand. Absolutely it is something that we’re aware of. I expect that this is something that future iterations of the technology will continue to tackle and to, you know, diminish that problem. But, you know, going back to this bigger point, right, which is at what point can you trust this, I think you can trust a lot of things you find there. But you do have to verify them. And certainly, you know, as journalists, as media organizations, I mean, there’s a big much larger responsibility to do that than folks, you know, who may be looking at these experimental tools right now and using it, you know, just to share for, you know, fun and amusement. You know, the kinds of things that you’re sharing are going to really have a huge societal impact. I do think when you look at the evolution of tools like Wikipedia, though, we will go through this trajectory where, you know, at the beginning people will—a lot of folks will think, oh, this is really, like, not that reputable, because it’s something that’s been generated in a very novel way. And there are other more established, you know, formats where you would expect there to be a greater level of fact checking, a greater level of verification. So, you know, obviously, like, the establishment incumbent example to compare against Wikipedia back in the day was something like Encyclopedia Britannica. And then a moment was reached, you know, several years into the development of Wikipedia, where then research was finding that on average Wikipedia had fewer errors in it than Encyclopedia Britannica.  So we will absolutely see a moment come when AI will get more sophisticated, and we will see the content generally being good enough and with more minor errors which, you know, again, technology will continue to diminish over time. And at that point, I think then it will be a very, very different proposition than what we have today, where absolutely, you know, all of these tools are generally labeled with massive caveats and disclaimers warning that they’re experimental and that they’re not, you know, at the stage where you can simply trust everything that’s been put through them. ROBBINS: So Patrick McCloskey who is the editor-in-chief of the Dakota Digital Review—Patrick, would you like to ask your question? 
We only have a few minutes left. No, Patrick is—may not still be with us. So we actually only have three minutes left. So do you guys want to sum up? Because we actually have other questions, but they look long and complicated. So would you like to have any thoughts? Or maybe I will just ask you a really scary question, which is: We’re talking about this like it is Wikipedia or like it is a calculator. And that, yes, it’s going to have to be fixed, and we have to be careful, and we have to disclose, and we’re being very ethical about it. We’ve had major leaders of the tech industry put out a letter that said: Stop. Pause. Think about this before it destroys society. Is there some gap here that we need to be thinking about? I mean, this is—they are raising some really, really frightening notions. And are we perhaps missing a point here if we’re really just talking about this as, well, it’ll perfect itself. Dex, do you want to go first, and then we’ll have Ben finish up?  HUNTER-TORRICKE: Yeah. So, I mean, the CEO of Google DeepMind signed a letter recently, I think this might be one of the several letters that you referenced, you know, which called on folks to take the potential extinction risks associated with AI as seriously as other major global existential risks. So, for example, the threat of nuclear war, or a global pandemic. And that doesn’t mean at all that we think that that is the most likely scenario. You know, we absolutely believe in the positive value of AI for society, or we wouldn’t be building it.  It is something that if the technology continues to mature and evolve in the way that we expect it will, with our understanding of what is coming, it is something that we should certainly take seriously though, even if it’s a very small possibility. With any technology that’s this powerful, we have to apply the proportionality principle and ensure that we’re mitigating that risk. If we only start preparing for those risks, you know, when they’re apparent, it will probably be too late at that point. So absolutely I think it’s important to contextualize this, and not to induce panic or to say this is something that we think is likely to happen. But it’s something that we absolutely are keeping an eye on amongst very, very long-term challenges that we do need to take seriously. ROBBINS: So, Ben, do you have a sense that—I mean, I have a sense, and I don’t cover this. I just read about it. But I have the sense that these industries are saying, yes, we’re conscious that the world could end, but, you know, we’d sort of like other people to make the decision for us. You know, regulate us, please. Tell us what to do while we continue to race and develop this technology. Is there something more? Are they—can we trust these industries to deal with this? PIMENTEL: I mean, the fact that they used the phrase “extinction risk” is really, I think, very important. That tells me that even the CEOs of Google, DeepMind, and OpenAI, and Microsoft know—don’t know what’s up ahead. They don’t know how this technology is going to evolve. And of course, yes, there will be people who—in these companies, including Dex, who will try to ensure that we have guardrails, and policies, and all that. My problem is, it’s now a competitive landscape. It becomes part of the new competition in tech. And when you have that kind of competition, things get missed, or shortcuts are done. We’ve seen that over and over again. And that’s where you can’t leave this to these companies, not even to the regulators.
I mean, the communities have to be involved in the conversations. Like, one risk of AI—it goes beyond journalism—that I’ve heard of, which is for me partly one of the most troubling, is the use of AI for persuasion. And on people who don’t even know that they’re being—they’re communicating with an AI system. The use of AI to, in real time, figure out how to sell you something or convince you about a political campaign. And, in real time, figure out how you’re reacting and adjust, because they have the data, they know that if you say something or respond in a certain way, or you have a facial expression—a certain kind of facial expression, they know how to respond. That, for me, is even scarier. That’s why the European Union just passed the—which could be the law—called AI Act, which would ban that, the use of AI for emotion recognition and manipulation, in essence. The problem, again, is this has become a big wave in tech. Companies are scrambling. VCs are scrambling to fund the startups or even existing companies with mature programs for AI. And on the other hand, you have the regulators and the concerns about the fears of what is the impact. Who’s going to win? I mean, which thread is going to prevail? That’s the big question. ROBBINS: So this has been a fabulous conversation. And we will invite you back probably—you know, things are moving so fast—maybe in six months. Which is a lifetime in technology. I just really want to thank Dex Hunter-Torricke and Ben Pimentel. It’s a fabulous conversation. And everybody who asked questions. And sorry we didn’t get to all of them, but it shows you how fabulous it was. And we’ll do this again soon. I hope we can get you back. And over to Irina. FASKIANOS: Thank you for that. Thank you, Carla, Dex, and Ben. Just to—again, I’m sorry we couldn’t get to all your questions. We will send a link to this webinar. We will also send the link to the Nieman Report that Carla referenced at the top of this. You can follow Dex Hunter-Torricke on Twitter at @dexbarton, and Benjamin Pimentel at @benpimentel. As always, we encourage you to visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for the latest developments and analysis on international trends and how they are affecting the United States. And of course, do email us to share suggestions for future webinars. You can reach us at [email protected]. So, again, thank you all for being with us and to our speakers and moderator. Have a good day. ROBBINS: Thank you all so much. (END)
  • Taiwan
    U.S.-Taiwan Relations in a New Era
    Although a conflict in the Taiwan Strait has thus far been avoided, deterrence has dangerously eroded. To maintain peace, the United States must restore balance to a situation that has been allowed to tilt far too much in China’s favor.
  • Cybersecurity
    The Great Firewall of Montana: How Could Montana Implement A TikTok Ban?
    Montana banned TikTok a month ago. Enforcing this ban would require the creation of a surveillance regime that would be far more detrimental to privacy and civil liberties than TikTok could ever be.
  • Artificial Intelligence (AI)
    How Artificial Intelligence Could Change the World
    Play
    Artificial Intelligence (AI) could transform economies, politics, and everyday life. Some experts believe this increasingly powerful technology could lead to amazing advances and prosperity. Yet, many tech and industry leaders are warning that AI poses substantial risks, and they are calling for a moratorium on AI research so that safety measures can be established. But amid mounting great-power competition, it’s unclear whether national governments will be able to coordinate on regulating this technology that offers so many economic and strategic opportunities.
  • Women and Women's Rights
    Artificial Intelligence Enters the Political Arena
    Politics is one of the latest industries shaken up by AI. The use of artificially generated content in campaigns could spell trouble for candidates and voters alike in the fight against mis- and disinformation.
  • United States
    CEO Speaker Series With Dan Schulman
    Play
    Dan Schulman discusses the future of the digital economy, the evolving role of business in society, and leadership lessons learned as president and CEO of PayPal. The CEO Speaker Series is a unique forum for leading global CEOs to share their insights on issues at the center of commerce and foreign policy, and to discuss the changing role of business globally.