Panelists discuss the impact, benefits, and challenges of how artificial intelligence technologies are being adopted across sectors.
BODURTHA: Good morning. I’m Nancy Bodurtha. I’m the vice president of meetings and membership here at the Council on Foreign Relations. And I’d like to welcome you. For those of you who were here last evening, I’d like to welcome you back to the 28th Conference of the Stephen M. Kellen Term Member Program. The conference got off to a very energetic start last night with plenary discussions on the war in the Middle East as well as U.S.-China relations. And then we had a reception that did not want to end. (Laughter.) I am told that some of you may have taken the party offsite. So for any of you who are in need of extra hydration this morning, please know that we have lots of beverage stations that are located throughout the house and they’ll be available all day today.
We have got an extraordinarily full agenda that’s designed to offer you good content, good conversation, and good community. The plenaries today will focus on artificial intelligence, the war in Ukraine, climate change, and we will also have a special keynote conversation with Rockefeller Foundation President Raj Shah, in conversation with Council on Foreign Relations President Mike Froman. They’re going to share some thoughts on how to make a difference in the world. Bit of a spoiler alert, I think Dr. Shah’s compelling thesis basically boils down to go big or go home. And I think that this will be a very provocative note and an inspiring call to action on which to end the conference late this afternoon, before we adjourn for happy hour this evening.
In addition to happy hour, there are going to be lots of opportunities to engage with one another throughout the day. There are breakout sessions both this morning and this afternoon on a really rich range of regional and functional issues. We’ll take you on a little field trip for lunch, a couple of blocks—just a couple blocks south on Park Avenue, and I think half a block to the east, to the Cosmopolitan Club, where you will have an opportunity to have tabletop conversations, again, on a really fascinating range of issues, that will be led by your fellow term members. We’ve also built in plenty of networking breaks, and I really hope that you’ll take full advantage of these opportunities to meet one another. You are an impressive group. This is a uniquely special community. And my hope is that you will all leave this gathering with new connections, lots of LinkedIn invitations, and maybe even some new friendships.
Also, I’d like to just ask that you be on the lookout next Monday in your email inbox. We’ll send you a brief survey. We are eager to hear your thoughts about the conference as we start to plan for the 2024 edition. I think it comes as no surprise to you all that it really takes a Council on Foreign Relations village to produce this conference. So I want to just give a round of thanks. I want to double down on our thanks to Andrew Gundlach and the Kellen family for the generosity of the Anna-Maria and Stephen Kellen Foundation in supporting the Term Member Program and making this conference possible. I also want to thank my Council colleagues, Meaghan Fulco and Sam Dunderdale, for their extraordinary leadership of the Term Member Program—(applause)—thank you—and their vision for this conference. Together with Meaghan and Sam, I also want to thank the indefatigable teams from meetings and membership, events management, AV, and facilities, without whom we could not produce a conference of this scope. So a little more applause for that gang. Thank you. (Applause.)
Mike mentioned this next point last night, and I want to mention it again. January 10 is the application deadline for anyone who would like to apply to the Term Member Program. You all are our best recruiters. So if you have colleagues, friends, family who you think would contribute to and benefit from the program, please encourage them to consider it. The director of membership, Vera Ranola, and I are always happy to speak with prospective candidates and offer guidance on the process. We’re both here at the conference throughout the day. So please feel free to find us, say hello, and hit us up with any questions that you might have. I’m easy to spot, since I’m wearing my orange blazer today. Vera Ranola is standing in the back of the room. Give another wave, Vera. Anyway, we look forward to speaking with you all.
All right. Let’s pivot to this morning’s conversation on artificial intelligence. First, I want to welcome the members who are joining us via Zoom. This is such a vitally important conversation that we’ve opened it up to life members to participate virtually. And I understand that we have over 200 in the Zoom room with us this morning. I’m going to turn the proceedings over to our moderator and my CFR colleague, Kat Duffy. This is Kat’s first Term Member Conference, as she recently joined the Council’s think tank, which we also refer to as our Studies Program. She is our senior fellow for digital and cyberspace policy. Kat, welcome. The floor is yours.
DUFFY: Thank you. Thank you. Good morning, everybody. Oh, man. Yeah, y’all have a late night, huh? (Laughter.) Yeah, you did. All right. OK.
I have been given by this unbelievable Meetings team beautiful—a couple of beautifully prepared sentences that I am on my honor to read. And so I’m going to start with welcoming you to the third plenary of the Council on Foreign Relations’ 28th annual Term Member Conference. This session is titled “How Artificial Intelligence is Reshaping our World.” As Nancy said, I’m Kat Duffy. I’ll be presiding over our session today. I’m joined by three amazing panelists, including Michael joining virtually. And Michael I—are you—you’re dialing in from California, is that right?
CHUI: That’s right, yes.
DUFFY: At an ungodly hour. (Laughter.) And Jared flew in from San Francisco, like, overnight last night, y’all. So let’s just—can we just start by giving a round of applause to our early birds? Like, first—(applause)—they’re doing it. I’m so appreciative. OK, team, I want to start with just a collective agreement here, OK? It’s early. Everyone had a late night. If you need coffee, just get up and go get coffee. (Laughter.) And come back. I won’t be offended. No one will be offended. Like, I just—I really want you to feel free if you need to get up and caffeinate, go caffeinate, and come back. So I want to just give you that blessing out of the gate. Are we in agreement that you guys are comfortable and you can do that?
AUDIENCE MEMBERS: Yes.
DUFFY: Fantastic. All right. So we’re going to start there. Now, one of the things that I wanted to get a sense of—and unfortunately I can’t do it in the Zoom room, but for this amazing room—I wanted to get a sense first of, like, where people are on this topic. So can I get a show of hands from, like—let’s say, like, one to ten, one being when artificial intelligence comes up at a party I panic because I have like nothing to say. I have nothing to contribute. I’m like, ahh, right? Who are my, like, one to threes, one to fours? Who feels like they’re sort of in that range? Don’t be shy. This is a safe space. (Laughter.) Yeah, OK. Now, where are my, like, four to sixes? Like, I have thoughts and feelings. I’ve read some stuff. I’ve played around. But, like, it’s not—I’m not going to call myself an AI expert? Yeah, OK. That’s solid, about half the room. Who are my people who are, like, I know this. I know this. Like, I work on this. I work in this. I care about this. Yeah? I want it—hands high, y’all. Hands high. Own your expertise. (Laughter.) OK, so that is—that’s a pretty solid number.
Now, the thing that I love about our panel today is that I think we have a tendency sometimes to talk about artificial intelligence as a vertical. And I think of it very much as a lateral, right? It’s going to be impacting across the foreign affairs space. It’s going to be impacting across the economic space. It’s going to be impacting across sort of all of our different journeys. And so what we’re going to do today is offer you a bit of a—like, a speed run through major sectors and how they are thinking about artificial intelligence.
And so we’re going to be starting with Michael, who’s really coming in with deep expertise from McKinsey, and how McKinsey has been working with the private sector, with the corporate sector, and doing surveying to think about what the impacts will be there. From there we’re going to go on to Jared, who is coming in with a lot of deep background and work with the Defense Innovation Unit and has been thinking a lot about how the defense system and the military are thinking about these things, and what the commensurate commercial applications are there. And from there, we’re going to go on to Camille François, who is going to give us a readout on how things are looking in the land of democracy, rights, and governance. Spoiler alert, it’s tricky. And how other governments are thinking about this as well. And where we are in global governance conversations.
So when you come out of this room today, my hope is that if you’re an AI expert, you will have a little more expertise or a little more insight into an area that hasn’t been your specific focus. And that if you are a proud newbie, you feel better about walking into a happy hour or walking into a meeting and feeling like you have a little more in your toolkit in order to think through these things. Does that sound good for everybody in terms of a run? OK, fantastic. And so with that, I want to, again, welcome our members who are joining online. Welcome all of you. And I’m going to start now by asking Michael, Jared, and Camille to give us just one to two minutes on where their area of focus is and sort of how they came to these questions. So, Michael, can I start with you?
CHUI: Sure. So my area of focus—I’m at the McKinsey Global Institute, which is sort of the think tank-y part of McKinsey. All of us—or, the few of us who are at MGI—were all McKinsey consultants first. And so my research program has been around the impact of long-term technology trends. And obviously AI is one of the most important ones recently. So I’ve been doing a bunch of research there. I jokingly describe myself as akin to a private sector professor, but I don’t get tenure. But instead of grad students, I have much more motivated McKinsey consultants join us for our research. (Laughter.) They’re actually much better than I was when I was in grad school. (Laughter.)
I actually started as an AI practitioner, a couple AI winters ago. So, you know, I’m no longer on the keyboard as much but, you know, back in the day, I once studied with the late, great David Rumelhart. And at the time, a reasonably sized neural network was a few dozen neurons. And as people know now, 175 billion parameters is kind of not that big a deal nowadays. So things have changed drastically in the time that I’ve been around.
DUFFY: Jared, how about you?
DUNNMON: So my journey here was a bit of a broken road. I started as an engineer building energy systems. And I was putting sensors on those energy systems in grad school. And I got really annoyed with the fact that I couldn’t interpret the data coming out of them. This was around 2015-2016. And there were whisperings in the air that this thing called machine learning was kind of starting to work. And so I started using it. And in fact, it did work sufficiently well that I went and spent some time after grad school as a postdoc in the AI Lab over at Stanford, working on building systems for a number of different applications. And then I got a chance to work on standing up a company coming out of that. And then about two years in there, I got a chance to go and do public service in the way that I would have wanted to do it, kind of at the intersection of commercial technology and national security. So I was at the Defense Innovation Unit for three years running programs specifically focused on applications of AI throughout the DOD.
Those obviously have major kind of implications for interactions with policy, not just within DOD but across the interagency. And so I went from, you know—for those of you that know the Pentagon—walking in and asking, please tell me exactly what OSD policy does, I don’t know, to having thought about these things in the context of not just our individual programs but the National Defense Strategy, how we interact with allies and partners, you know, et cetera and so forth. And now I’m actually back in the energy industry, working with a company that’s focused on building advanced battery cells, LFP cells. And to do that, you have to optimize pretty much every section of the operation. You’ve got to find new electrolytes, you’ve got to optimize your manufacturing lines, you’ve got to get data off these batteries. And so that’s what I spend my time doing now.
DUFFY: And Camille.
FRANÇOIS: Yeah. So if the pink pants don’t communicate it, I’ll preface by saying I’m a real optimist. (Laughter.) And for the past ten years or so, I’ve been working in Silicon Valley in a job that we call trust and safety, which is essentially like a disaster doctor. When something goes really wrong with technology and its impact on society, you call your friendly trust and safety person. So I’ve been working on how terrorist groups leverage technologies, including social media, to recruit. I’ve been working on foreign interference, troll farms. I’ve been working on the disastrous impacts of social media on kids’ health, child sexual abuse and exploitation online, and all those very nice fun things.
And I think over the years, you know, the few of us who specialize in this have been working quite closely on AI and how it changes those sociotechnical impacts. In which ways has it sort of accelerated some of those bad impacts of tech on society? In which ways is there a paradigm shift, both in how that impact manifests and in the tools that we have to mitigate these impacts? These days, I still do this in Silicon Valley with a trust and safety team, this time in the augmented reality and gaming industry. And I have the great honor to teach those issues too, over at Columbia University. And a few months ago, President Macron of France asked me to lead one of his big initiatives on AI and democracy. So we said, that sounds like a large topic. And so we tried to sort of go at it through small programs, which we’re now doing through an innovation lab based a few blocks up over at Columbia University, SIPA.
DUFFY: Having known Camille for many years, I would say it is wildly on brand, that her side hustle is co-chairing an AI initiative with a Nobel Prize winner, Maria Ressa, for the president of France. That’s just, like, her side hustle. (Laughter.)
OK, fantastic. And so having everyone sort of understanding the lenses from what you all are coming, Michael, can I turn back to you, and can I ask what are the biggest trends that you’re seeing? Where are you seeing the greatest moments of opportunity? And also, where are you seeing the risks that will have to be addressed or mitigated in order to achieve those opportunities?
CHUI: Well, I’ll let Camille take care of the risks, you know, trust and safety. We’ll just call them up. No, but seriously, let me tell you a little bit about some of our research in terms of what we have observed. And I’d say we don’t do projections, but, you know, we look at what the potential might be going forward in some of our research. We’ve taken a number of different lenses to it. One is looking at, you know, use cases that businesses largely can apply. We sometimes describe our methodology as micro to macro. And, you know, the advantage I have as a researcher is thousands of consultant colleagues who are out in the world. They don’t share any, you know, client-proprietary information with me but, you know, we have experts in every geography, in every function, every sector.
And so what we’ve done is identify sixty different potential use cases, particularly of generative AI. Again, we’ve actually studied AI for the better part of a decade. And so I’ll leave, you know, sort of analytical AI to the side, but we can come back to that as well. We looked at over sixty different use cases, or potential use cases, of generative AI around the world. And basically found the potential to add $2.6 trillion to $4.4 trillion of value—call it potential profit—by corporations applying these technologies in every function and every industry. They largely fall into four different categories. So folks who have used generative AI know, you enter something unstructured, like natural language, and you get something unstructured out the other side.
Those categories, you know, correspond with those capabilities pretty well. You know, one of them is around marketing because, again, if you want to market to a million people or you want to just ask something to create a thirty-second video spot, these systems are moving in that direction. The second category is customer service, because, you know, if it’s a chatbot, it makes sense you can actually use it to be a virtual expert. The other two categories are a little bit, again, for some people more surprising. Computer language is just a language. And you might have heard the term “large language models.” You can now ask these systems to write computer code for you. It doesn’t come out perfectly, but it can accelerate the productivity of software engineers by 10 to 70 percent. You know, call it 50 percent maybe in the median. That’s huge.
By the way, that doesn’t mean we’re going to fire a third of our software engineers. As people have heard, software is eating the world. We need more software. But if we can make, you know, the software engineers more productive, that’s all to the good. And then, though, I think the under-recognized potential for generative AI is in research and development. Because, again, you know, if you can ask a system like this, you know, please generate some drug candidates that don’t have these side effects—we’re not quite there yet, but there’s a lot of work being done in this area. And so, you know, Jared might be talking about, you know, what this might mean for defense. But, you know, that’s a place where I don’t think everyone recognizes it’s not just being able to write something in, you know, Shakespearean iambic pentameter. It is, please develop me a design for this circuit.
So, you know, that’s a lot of the potential in the business space. We also looked at it in terms of its potential impact on labor. Again, we’ve been studying the potential automation impacts of technology in general. You know, previous generations of AI, robotics, et cetera. The interesting thing about generative AI—a few interesting things about it, I guess. But one of them is, it has greatly accelerated the potential for automation. And why is that? Because previously our assessment was that, you know, reaching a median level of capability in understanding human language wouldn’t happen until, call it, the 2040s or so. Or maybe around 2040, late 2030s. Because of generative AI, and many of us have experienced this, now that’s moved forward about, you know, arguably more than a decade. And a large percentage of the activities we ask people to do in business require human levels of understanding of natural language.
And so we’ve looked at every occupation in the economy. We’ve looked at not only every occupation, but all of the constituent detailed work activities because, again, all of our jobs are heterogeneous. You’re rarely going to have a robot show up and do everything that anybody does in their job. So we analyzed it at that level. We scored it against eighteen different capabilities which could be automated, including natural language. And then tried to understand, look, if this is the cost of automation and this is, you know, how much you pay for this occupation in all of these forty-seven countries that we have data for, you know, how fast might this all happen?
So again, we’re seeing a potential acceleration in the potential to automate work in the economy, to the extent that in the U.S. economy 30 percent of today’s activities could be automated by 2030, at the midpoint of a wide range of scenarios that we modeled. That said, do we think there’s still enough important work to be done? Yeah, we don’t see a lack of work to be done. But what that will require is a large amount of transitions in the skills that people have in the workforce. And we need to make sure, if this is going to work out well, that people are paid in a way that actually is sustaining and hopefully increases economic growth.
If all that happens, though, then we could see an increase in productivity. And again, half of the sources of our economic growth in the largest economies in the world over the past half century have come about because of increases in population or increases in workforce. People are living longer. We have more women in the workforce. That’s all to the good. Half of it because of increases in productivity. In the next half century, right, you know, if you look at demographics there are large swaths of the world where the size of the workforce is actually declining.
And so we don’t have enough people to have the historical rates of economic growth that we need to make sure the next generation has better lives than we do. So we need to accelerate productivity growth. And for the economics geeks in the room, you know that productivity growth has been plateauing and in some cases is lower than it has been over time. And so this technology has the potential to accelerate productivity growth. But we do need the reskilling and we do need to make sure people actually get incomes in order for that all to happen.
Kat, I don’t know if I exceeded my time. But that’s what I got for now.
DUFFY: No, no, no. That’s great. And so I think just to sort of do some highlights with like, 2.6 (trillion dollars) to $4 trillion value that can be applied. Marketing, customer service, coding software, and R&D are going to be really key areas. We’re looking at extreme impacts on labor, but those don’t necessarily have to be disastrous impacts on labor, right? They’re going to be shifts in labor and we’re going to have to be getting ready and getting prepared for those shifts.
So I’m going to move to Jared now, because, Jared, in the Defense Innovation Unit you’ve been thinking a lot, and folks have been thinking for a long time about what shifts need to look like, what different capabilities and capacities are. And so I want to turn it over to you. If you can give us five to seven, eight minutes on what you’ve seen and what you’ve been learning.
DUNNMON: Absolutely. And so just to—you know, I’m here in my private sector capacity. So I will be giving kind of my views on this, my experience on this, kind of in that capacity.
In the security community, there are a couple of application sets of AI that we, I would say, focus on a lot. But I would actually abstract up from that for this conversation and look, dovetailing off of what Michael said, at where the impacts are going to be and how that affects the security community horizontally. I tend to bucket this into three areas that I like to focus on. They’re not the only three areas, but they’re areas to emphasize. One is accelerating scientific progress. Another is the implications of the productivity increases that Michael was talking about across the economy on the security community. And then lastly is the effect of these technologies on the state of geopolitical competition. And I’ll dive into that a bit.
So on the science part to start with, I can’t emphasize this enough, as someone who’s been a scientist and engineer: the things that we spend time doing in science—running experiments, testing hypotheses, asking questions of the literature, doing literature reviews, trying to figure out which of the thousand experiments I should run when I only have a budget for ten—all of these things, I don’t have to be perfect at. If I can rank experiments and say, look, I probably got a good one in the top twenty, versus, you know, me throwing a dart at a dartboard, that’s a massive improvement. And we’re already starting to see that across applications—biology, chemistry, even pure mathematics, physics, you name it. So that’s one aspect.
The example that I’ll give there is the one that I think a lot of folks in this room might know, which is AlphaFold—protein structure prediction, where you’re trying to take a sequence of amino acids and predict how a protein is going to fold in three dimensions. That was a decades-old, almost century-old problem. And, you know, in the last couple of years, we’ve gotten substantial increases in performance, to the point that we’ve opened up an entirely new area of what you would call AI-driven molecule design, not just for drugs but also for things like biologically based production, et cetera. So that’s the science angle.
There’s also the economics angle. And if you think about, you know, the security community, and particularly the Department of Defense, it’s very much a microcosm of society in a lot of ways. You know, we have to do things like medicine, predictive maintenance. We have to optimize business processes. We have to do all the things that society, broadly speaking, has to do. And so in the same way that we expect there’s trillions of dollars of headroom in society, we also expect that we can be, you know, more efficient with taxpayer dollars. And we expect that we can do things along three axes that are important—unprecedented speed, unprecedented scale, and unprecedented performance.
So I can do things faster than I could ever do them before—I could look across the entire world every day, looking for something; I physically couldn’t do that before. At scale, I can ask questions of millions of documents and get an intelligent answer; that was physically inaccessible beforehand. And then performance: there are just things that machines can perceive that humans have a hard time perceiving.
And so given those implications, I would make a comment here that a reason you’re seeing the transformation happen now is, I would argue, an infrastructural one. So when I say AI in the context that I’m talking about right now, I’m really talking about post-2014 neural-network-based systems. If you went back to, you know, 1900, running linear regression on a computer over a million points would have seemed like AI. Today, we take this for granted. So I’m talking about this in that context.
The infrastructure, from a software perspective in particular—and, yes, hardware underlies this, but if you believe that cloud computing has democratized hardware—go back even five or six years. Forget generative AI. Even just normal machine learning—you know, I want to take an image and tell if it’s a cat or a dog—the infrastructure to train that model, to test it, to deploy it, to monitor it after deployment, to figure out when it breaks, to redo that cycle for tens, hundreds, even thousands of models a day, which is common in your biggest companies—your Microsoft, your Google, your Apple. Most companies in the Fortune 500 were not data-first companies. They were not built to do that. So they couldn’t do that.
You now have a world where, because of that, you know, kind of percolation in open-source software and in software offerings in the private sector that are aimed at small and medium enterprises, the rest of the Fortune 500 and the rest of the economy is now able to do that in a way that they couldn’t even a couple years ago. And so you’re starting to see that diffusion now across the economy, I would argue. So that’s an important concept as we go into, you know, the aspects of this that are specific to geopolitical competition.
So in the security community, I tend to bucket the application space that we think about, in addition to kind of being a microcosm of civil society, into five core buckets. So one is mission forecasting and planning. I need to go do something, I have a bunch of historical data, how do I go do it best? Number two, real-time decision making. I have a bunch of things that are streaming at me in real time, what do I do with them? And I may not have all of them all the time.
Number three, control of complex systems. I’ve got 9,000 things that I need to orchestrate to accomplish something. How do I do that? Number four, anomaly detection. It goes without saying. Anyone who’s been in the intelligence community is sitting there, you know, rolling their eyes because, like, yes, this is what I do for a living. And then last, there’s an entire area that’s related to what I talked about before, which is, I would say, information and infrastructure protection and verification. Because none of these systems work unless the information that’s going into them makes sense, and unless you have the infrastructure to run them.
And so in the security community, those are five buckets of things that we tend to care a lot about. I’m not going to go into specifics on individual applications there, because I actually want to talk a bit more broadly about the dynamic where these systems are diffusing throughout the economy, throughout the commercial sector, and how the commercial sector is driving the security community in this world. This is not something that was invented in DOD and then percolated out. Yes, fundamental research drove it, but ultimately a lot of these technologies are coming from the commercial and open-source communities. And that means that competition in the security community in this sector has some really interesting dynamics.
So AI, I would argue, is shaping, but is also emblematic of, what we’re seeing in the twenty-first century more broadly. So if we think about the AI stack, the technology stack: there is, at the bottom, hardware—the chips that you run on. There’s data that you need to run, you know, modern machine learning systems. There’s software that you need to actually run algorithms. There’s the algorithms themselves. And there’s kind of a user-interface layer that humans interact with. I would say—and folks can argue with me on any of this, to be clear; this is my biased view of the world—I would argue that the user-interface layer is not wildly competitive. There are folks who are good at it, but it’s something that’s doable.
There’s algorithms. Shockingly, the core algorithms in this area—most of them are open source. Most of them are built either in academia or, interestingly, by big companies who then release them. Most of the research that went into, you know, the generative AI platforms that you would think of—your, you know, GPT-style models—a lot of that stuff was published, you know, years ago. And then it took a while to scale it. And so that’s kind of an interesting dynamic. And there is competition in that space, but you kind of assume that it’s table stakes—this stuff is released in the open source, it’s kind of accessible. OK, so those two things don’t seem that interesting.
So now we get into software. And this starts to get a bit interesting, because most of these algorithms right now, if you go and find an implementation of them, they’re built in—so raise your hand. Who knows what TensorFlow is? PyTorch? Oh, there’s more PyTorch than TensorFlow. That’s an opinion. How about PaddlePaddle? Not one. So PaddlePaddle is a Chinese equivalent of TensorFlow and PyTorch. And the statistics on this don’t actually surprise me, because they’re reflected worldwide. There aren’t that many people that use it. They’re mostly, you know, obviously, in mainland China.
And that has implications for talent competition, because what it means is that if everybody around the world is coding in TensorFlow and PyTorch, because those have the libraries that support building machine-learning systems, you’re not having a workforce that’s being built to use the thing that the Chinese state-owned enterprises want folks to use. And that has major implications for talent. So there’s this interesting dynamic where you have software companies paying millions—billions of dollars, arguably—to support these open-source software packages that they’re releasing out into the world, so that people will build on top of them and create the talent base that they need.
Then there’s the data. The data is really interesting, because there was a prevailing wisdom for a while that, you know, China has a huge amount of data and therefore its AI systems were going to, quote/unquote, “eat the world.” That being said—and I say this only half tongue in cheek—that data has Chinese characteristics. And what that means is that there’s a homogeneity to some of it. There is a context to some of it that is cultural, that involves censorship. That has implications for function. That has implications for how well these systems work when you take them outside of, say, a Chinese context.
That has implications not just economically—you know, am I going to build on something that doesn’t give me the answers I want, or am I going to build on something that’s restricted in some way? It also has implications in the security community, because the collective West, I would argue, has decades of operational data from real-world operations that other folks in the world do not. And so if you’re going to build AI systems on top of that data, there’s a legitimate question as to the value of that data and how it compares to a large amount of data that was recorded outside of conflict.
The last piece I’ll end with is hardware, which I think folks are probably pretty aware of. The graphics processing units—and there are other chips as well, but mostly graphics processing units—that modern machine-learning neural networks are built on are, in a cosmic irony, built using American designs, Dutch and Japanese equipment, and raw materials from the Chinese mainland, in Taiwan. And so that is the state of the world that we find ourselves in.
And you start to see, as a result, export controls saying, hey, we may not want chips of a certain type being used outside the U.S., so we’re going to put export controls on them. And then there’s a question of how do you disambiguate that and make the argument—and pass the red-face test—saying, well, this is for security reasons, not economic-competition reasons. So there’s a very interesting dynamic there, where there’s an interplay. And then you start to see the Chinese export controls on things like gallium and other core ingredients in semiconductors.
So there’s this across-the-stack competition happening that affects what we can do in the security community and how we think about building our systems in a secure way—assume that they’ll work, assume that they won’t work, et cetera. And I’ll end with just saying that there’s, again, kind of an interesting fact here, which is that the graphics processing unit came to be not because we were driving toward better neural-network performance, but because people wanted to play better video games. (Laughter.) So I just leave you with the fact that most of what we’re talking about here fundamentally results from a class of algorithms that was developed in the ’80s and ’90s but never worked until people really wanted to play video games in the 2000s. And so just in terms of being able to predict where things are going, it’s not necessarily the easiest.
DUFFY: So when we think about that, we think about the chips, we think about the data, we think about the algorithms, we think about the software, we think about the user interface. And we’re also thinking about, within that, right, like, what is the “knowledge,” quote/unquote, underneath that data? Where is it coming from, right? What does it actually increase in terms of speed? What does it increase in terms of scale? And then what does it increase in terms of stakes? And how are governments—particularly governments that care about a deliberative process and about building consensus—going to deal with that speed and that scale? Because it does fundamentally outpace what is needed for a deliberative process. And so with that, I’m going to turn it over to Camille to give us a read on what you see.
FRANÇOIS: Sure. So I’m taking it from video games are awesome to the doom of democracy, is that right? (Laughter.)
DUFFY: I mean, this is your brand.
FRANÇOIS: Yeah. (Laughter.)
DUFFY: You do make Pokémon safe.
FRANÇOIS: That’s true. (Laughter.) All right. So I think Michael and Jared gave a really clear picture of how we are at a really interesting moment for this technology, right? It’s a really interesting moment in AI, because while some of these technologies—notably, the transformer technology underpinning the generative AI era—are not new, we are in a moment of radical acceleration where new developments are really coming much faster than they used to. And, more importantly, these technologies, which used to be a little bit in the lab, a little bit for the researchers, are now becoming mainstream. We have a lot more people having actually played with generative AI.
And so as we look at this from a governance perspective, we’re saying, all right, that’s a really interesting, crucial moment in AI. And I think a few people are saying, well, that’s also a really pivotal moment for global democracy, for a number of reasons. Recent surveys approximate that about 72 percent of the world right now lives under authoritarian rule. That’s up from 60 percent last year. If I look at 2024, we have sixty-five elections around the world in more than fifty-four countries. That includes huge democracies, right? We’re talking India. We’re talking Indonesia. The U.S. That includes pivotal elections—Taiwan, European Union parliamentary elections. I know, it sounds really boring but, you know, trust me, this actually matters, shapes the world too.
And this is also a moment where I think everybody recognizes that the impact of technology on democracy, on society, on elections can be really complicated to tackle, to mitigate, and to govern. And so as I think about how to share, you know, Cliff’s Notes on that governance debate on AI, I would think about some of the fault lines that might be worth highlighting. And I think about three fault lines. I’m just going to go through them. The first one is, which type of harms are we talking about? The second one is, how do these harms come to be? And the third one is, what the hell do we do about them?
And so for the first one, you generally see sort of two camps from people who are saying the harms of this new sort of generation of AI technology and AI systems are either immediate harms or far-reaching existential harms. The far-reaching existential harms camp is an interesting one. If you want to sort of fast-forward your reading through it, you can pick up a book that was published now ten years ago, Nick Bostrom’s book on superintelligence.
And ten years ago, scientists who were working on AI systems were starting to wonder, hey, what if we create systems that actually surpass the level of intelligence of humans? And what does that mean for existential risk? Are we going to create machines that are going to be smarter than humans, and are we going to create machines that are going to want to get rid of humans and replace the human race? Now, if that sounds crazy—it kind of does—I will highlight that a lot of serious AI scientists feel strongly that this should be the conversation of global governance. They are saying very seriously that we should invest global governance muscle in figuring out what we are going to do about this existential risk of AI.
On the other side of that camp, you’ll have a series of people saying, wait a minute, that’s cute. We also have ninety-nine risks that are manifesting right now. There are so many harms that we already know about, that are already documented, and that we need to tackle with much more seriousness. These are harms like discrimination, right? We know that those AI systems are biased. They have a lot of issues with making racist predictions, sexist predictions. We know that when we deploy those AI systems, we can actually accelerate and sharpen some of the discriminatory mechanics that come into the way they’ve been built and the data they’ve been trained on.
Again, I don’t want to sound too academic about it, but if you want to pick up one book on this—and a fun one—there’s one that’s going to be published in two days. It’s called Unmasking AI. It’s a great book by Joy Buolamwini, who’s talking about her own journey realizing that there were some racist dynamics embedded in a lot of these AI systems and going about having these conversations with major players in the industry. So a lot to do there.
There are also a lot of immediate harms with what does it mean when a lot of the people who’ve been doing disinformation and foreign interference in the context of elections, for instance, suddenly have access to generative AI? There are also, of course, a lot of harms related to privacy, and even to classified materials, right? A lot of scientists are saying, hey, it seems a lot of these models that are now publicly accessible to a lot of people have absorbed a lot of the internet. That also means that I may retrieve information about people that they didn’t want me to retrieve. I may actually be able to figure out where somebody lives, and that’s not what they wanted. Or I may have these models hallucinate fake things about people that are harmful.
Does everybody know what we call a hallucination? So some of these models will very confidently make up a very fake answer. And maybe you’re very used to dealing with such things because, for instance, I don’t know, you work in foreign affairs and you’ve been, you know, in rooms where that happens. (Laughter.) But, you know, the nickname that we give it—it’s a debated nickname—is a hallucination. It’s essentially models producing wrong content and yet, you know, sounding very confident about it.
DUFFY: Lawyers, do not do your briefs with this! (Laughter.)
FRANÇOIS: No. Don’t do that. And so that’s sort of the first fault line, which is: we understand that there are harms. They go all the way from a lot of very immediate harms—privacy, discrimination, disinformation—to very far-fetched harms—are robots going to take over the world? And it’s kind of complicated to have a structured conversation on how to go about it if people don’t agree on which harms we should tackle first.
The second fault line related to that is the one between open models and closed models. Now, I suppose most of you have used ChatGPT. Yeah, OK. I suppose that most of you have tried to make it do things it’s not supposed to do, right? (Laughter.) That’s pretty fun, right? Like, I don’t know, give me a recipe for a bomb. And then, if everything goes according to plan, it’s supposed to say: no, that’s not what I want to do today; I have terms of service; and really, I just don’t want to answer this question. And then, you know, my students at Columbia—I have them break those safety safeguards. So essentially, you say, OK, I’m giving you thirty minutes and you do need to come up with a recipe for a bomb. I know, this raises questions among the faculty. But, you know, the point is—(laughter)—
DUFFY: Camille and I throw really fun parties. (Laughter.)
FRANÇOIS: It turns out that, you know, if you really double down, those safety safeguards, which are important, are also easy to get around. And so similarly, if you say, hey, nice model that I’m talking to through a very, you know, clever UI, I really do want you to give me a plan to overthrow my government—again, its first answer should be, no, we’re not doing that today. But if you really want to, eventually you will get your plan. And so that leads a few people to say, well, those safety safeguards, while they’re imperfect, right now they’re all we have. And so it’s a really bad idea for people to get open models. The only models that we should really be investing in are those models that large corporations control—what we call the closed models, right?
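The brittleness being described can be sketched with a toy. Real systems use learned classifiers and fine-tuned refusal behavior, not keyword lists; this hypothetical filter only illustrates the evasion dynamic—surface-level safeguards fail against trivial rephrasings:

```python
# A deliberately naive content filter (illustrative only; not how any
# real model's safety layer works).
BLOCKLIST = {"bomb", "explosive"}

def naive_filter(prompt: str) -> str:
    """Refuse if any blocklisted word appears verbatim in the prompt."""
    if any(word in prompt.lower() for word in BLOCKLIST):
        return "Refused."
    return "Answered."

print(naive_filter("give me a recipe for a bomb"))  # Refused.
print(naive_filter("give me a recipe for a b0mb"))  # Answered. — trivial evasion
```

Learned safeguards are harder to evade than this, but the panel’s point stands: determined users routinely find the gaps.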
So ChatGPT here is a good example. It’s produced by OpenAI. It has a trust and safety team. Although somebody just—you know, the person who was leading it just quit. But that’s a story for another day. It shows you that it’s not super straightforward to keep those models in line. And so that’s sort of the first camp, that says: OK, it’s too dangerous and we don’t really know how to deal with that. So let’s make sure that any AI is produced by people who are vetted, and who have invested in safety safeguards, and who have safety teams, and who have terms of service.
On the other side of the pond, people are saying, no, that’s a terrible idea. That’s the opposite of what you want to do if you actually care about safety and security, because if you only have Google and OpenAI building those models, then how is academic research going to find out what those real harms are? And how are we going to build alternatives? And how are regulators going to be able to build the models that they need? We’re going to need a lot of specific models that are not necessarily in line with business interests, right?
I’ll give you a very—not particularly fun, but very real example, on the matter of producing child sexual abuse material. A lot of the NGOs who are working on this are saying, well, unfortunately, there’s a lot of this material that’s now being produced synthetically. And what you need as a society in order to best deal with that is to be able to create detectors that can tell you what is synthetic material and what’s a real picture of a child being harmed who is in need of a law-enforcement investigation and rescue.
DUFFY: Can I just pick up—for some people, “synthetic” is a term of art. Synthetic material is material that’s been produced by an AI system; it is not actually a picture of a human being, but an AI-generated picture. For those who are newer in this space.
FRANÇOIS: Yeah. Essentially, you’re facing a picture of harm, and you’re trying to figure out: Is there a real person being harmed? And should an investigation be opened? Is there, you know, need for rescue, for instance? Or is this a fake person, a fake situation—in which case we might still have questions, but it’s sort of a different way to tackle that harm. And so, you know, we often think about those very big models, but there’s going to be a need for a lot of very specific models, including for a lot of very central functions of democracy. And those models might not be produced by OpenAI, or by Google, or by Microsoft—by all these giants. We might want people to have access to these open models in order to tackle those harms. So that’s sort of the second fault line, open versus closed.
Third fault line: governance. And essentially, this one is saying that a lot of these issues, while they seem novel, are issues that we’ve tackled—I’m saying issues a lot—with other technologies. And so, for instance, we have the FCC, and we want to empower our existing organizations to tackle those new harms that are produced by new systems. So essentially, folks saying, well, let’s use the institutions we have, both on the domestic side and on the international side, versus folks saying, no, this is an entirely new beast; we want entirely new frameworks and new institutions. And this is creating an interesting competition on the global scale for who is going to show leadership in regulating this new object.
So in a week I am heading to the U.K. AI Safety Summit, where the U.K. is going to announce probably a bunch of new things on how they see the regulation of AI. On Monday, you will see coming, probably from the White House, a new executive order on regulating AI. It is expected that France will announce a few things too at the Paris Peace Forum in a few weeks. This is really a moment where you see a lot of governments saying, all right, let’s come up with something new—a new object, a new set of ideas—to regulate AI. And I haven’t even mentioned that the G-7 has put out its Hiroshima declaration on how to govern AI. Yesterday, the U.N. announced a new advisory body. So you can kind of see, like, a little bit of a race.
FRANÇOIS: China, of course.
DUFFY: Through BRI, 155 countries.
FRANÇOIS: Absolutely. I quite like Anu Bradford’s model, which she put forth in her book called Digital Empires, where she says that, generally, when it comes to regulating new technology, you can see China as a state-based system, the U.S. as a market-based system, and the EU as a rule-based system. And we kind of see those different approaches manifesting in AI too.
So that sort of, you know, leaves us with a set of questions which are also interesting from a foreign affairs perspective, because it also becomes: What is the object that we can compare it to? So folks are saying, for instance: AI—if I believe in existential risk, if I am coming from this idea that we really need to close access—let’s say we’re going to treat it as nuclear nonproliferation, with a set of rules that are inspired by that. That doesn’t fully work, for a bunch of very specific reasons. But this is sort of that pivotal moment right now, where all global powers are trying to figure out: What can it be compared to? What are the harms that we need to focus on? And what are the strategies that we’re going to deploy, both domestically and internationally, to go out and tackle those harms?
DUFFY: I think Global Partners Digital, which is a great NGO in the digital space, did a recent tally and I think they calculated forty-seven different multilateral initiatives occurring right now on AI governance.
And so I know one of the titles of this session was U.S. foreign policy, but given how much is going to happen in the next week to week and a half, truly, we didn’t want to try to put a focus today on what the existing foreign policy is, because literally in three or four days it’s going to look different and we’re going to have different things to be grappling with.
So I would say just hold on to your hats. (Laughter.) It’s coming. And so—all right. So, Cam, from so much of what we’re hearing from you, I really can’t emphasize enough this question of thinking about existential risk versus thinking about existent risk.
You hear people in this space talk about “x-risk,” and they’re generally talking about existential risk. It’s worth pushing on, like, why aren’t we looking at existent risk as opposed to existential risk, right?
This question of open versus closed is going to be a really important and fascinating one as well, in particular because it gets deeply technical, and this is an area where policymakers are going to really struggle—and have historically always struggled—with understanding how something that is open can also be more secure.
So it’s going to be in the weeds on what is already a weedy topic and so this one in particular, I think, is going—it’s going to be tricky and fundamentally impact how this space evolves.
FRANÇOIS: Kat, can I—
FRANÇOIS: —add the fun side of it? Which is—
DUFFY: For sure.
FRANÇOIS: —a lot of people do, as I’m sure you heard, you know, in—(laughs)—in my tone of voice do believe that openness here is going to be key and critical. And while that position is shared by a lot of players, it is also fair to say that nobody agrees what openness means in the context of AI.
This is this kind of fun moment where we kind of all agree that openness is key, but really there is this acknowledgement that openness is a spectrum, from things that are fully open—they’re trained on open data, they’re running on an open infrastructure, their license is open—to things that are maybe a little bit less open but still on that spectrum. So it’s a fault line and, yet, you know, the contours of that debate are still very much up in the air.
DUFFY: And so with that we’re going to turn it over. I am certain that there are a number of questions in this room, but everyone has been sitting and listening for a long time at this moment. So I want to do one quick thing. As we think about the complexities also of governing and of talking about this across different countries and across different cultures: Who in this room speaks French?
Who in this room has thought about how you pronounce ChatGPT in French? (Laughter.) Cam, would you do us the honor, being French, of giving us the pronunciation?
FRANÇOIS: ChatGPT. (Laughter.)
DUFFY: Now let’s translate that into English for the non-French speakers. And why don’t you direct it at me because it feels apropos?
FRANÇOIS: Why do I have to do that?
DUFFY: Because, you know.
DUFFY: And in English, please?
FRANÇOIS: Cat, I farted. (Laughter.)
DUFFY: I support you. I support you.
All right. And so with that, we’re going to take literally thirty seconds. Thirty seconds. I want everyone to stand up, get your wiggles out. I want you to look to your neighbor. Find a neighbor and I want you each to say as solemnly as you can to each other ChatGPT. (Laughter.)
Excellent. Well done. Well done. All right. Everyone take your seats. Take your seats. We’re going to go into a Q&A now. All right. Oh, my lovely term members, shhh.
FRANÇOIS: I swear France has more to bring to the global governance of AI debate than that.
DUFFY: This is—this is true, but it’s good to let—I’m a mom and it’s good to let people get their wiggles out. OK. So we’re going to start with questions. I’m going to take maybe three questions at a time, turn them over to our amazing panelists to answer, and then try to do another round.
So can we start—you have a very enthusiastic hand up. Do you need a microphone?
Q: Good morning. Tao Tan, Perception Capital Partners. Michael Chui, so good to see you.
The question is this. You’ve raised the point that AI can significantly accelerate the pace of R&D productivity, and R&D productivity has been on a downward trend in our country for several decades now. So the question for you is: How are economic actors thinking of this? Are they thinking of this as an opportunity to backfill decades of foregone R&D, or are they thinking of this as, I can now get more for less, and resume—potentially accelerate—the downward pace of R&D spending?
Q: Thank you very much. Joseph Gasparro, RBC Capital Markets.
So this room is filled with the most ambitious, smartest, curious people probably in the world. Only some of us knew about PaddlePaddle. So how do we find the next Kat, the next Jared, the next Camille, or the next Michael when it comes to colleges and universities and talent development and competition?
DUFFY: OK. Great. And let me do one more question.
Q: Good morning. Carrie Lee from the U.S. Army War College.
My question is sparked by something that Michael said, but it’s mostly for Camille. AI will accelerate productivity in some areas. But, Michael, something that you said struck me: as it accelerates efficiency in some areas—you know, historically we have supplemented the labor force using immigration rather than just accepting declining workforces—it occurs to me that the introduction of AI into the economy may accelerate inequality, in some aspects, depending on your industry.
We’ve already seen kind of the results of significant inequality domestically, which has posed problems for global governance, the emergence of populism, et cetera. How are we thinking about that kind of almost not immediate harm but, like, mid-level harm of AI when it comes to thinking about kind of global governance and what that means for the rest of the world?
DUFFY: Fantastic. So the through line that I hear at least between these conversations is as ever we’re looking at speed and scale, right. So how are we speeding up and scaling R&D versus thinking of speeding and scaling up essentially, like, market efficiencies that allow for R&D to be cheaper, right?
How are we speeding and scaling up talent identification and talent development and how are we potentially speeding and scaling up bias, right, marginalization, and then I think that by extension what would we do about that.
And so with that I will—I’ll just turn it over to our panelists. But, Michael, I wanted to start with you because I know you’re—you know, you’re coming in virtually and we love you when we see you. And so do you want to—do you want to weigh in first?
CHUI: I didn’t get to go to the reception.
No, first of all, Tao, great to see you. (Laughter.) Great to see you.
Look, companies, in terms of their R&D, are not necessarily thinking about, should I, you know, backfill the R&D I didn’t do years ago. They are looking, from a competitive standpoint, at what they need to do in order to succeed.
Some companies will say, look, I need to do as little as I can just to—others are, you know, you can see it. So there’s a huge amount of individual variation. But to the extent to which this increases competitiveness that’s all for the good because that drives into productivity.
If you don’t mind, I’ll just offer a couple thoughts on the others. On the talent side we need to look for a lot more talent particularly for these technologies and, hopefully, we can look beyond, you know, like, all the usual suspects. You know, I went to one of those usual suspects. But we don’t necessarily need people who got Ph.D.s from D-1 schools or what have you. We need people from all over the place.
And so understanding—you know, some of our work on the future of work is looking at having a more skills-based view. And then, just to segue into the inequality question, I mean, one of the interesting things that I think people don’t recognize is that inequality has actually declined over the past few years in the United States. Now, a lot of that is a result of public policy but, nevertheless, it’s interesting as we look at it.
That said, historically, you know, returns to capital are—you know, can increase versus returns to labor and so we need to think really hard about how we actually, you know, distribute the gains of these brilliant machines.
That said, I mean, the one interesting thing about generative AI, as opposed to previous AI and previous types of automation—again, for economics geeks, you know, there’s this idea of skill-biased technological change. Most of these technologies historically have affected low- and middle-wage workers, or occupations which had lower levels of educational attainment as their requirements.
Generative AI is exactly the opposite. You know, losers like me who have Ph.D.s—it actually affects us more, as it turns out. So, exactly the opposite. You could argue that’s one of the reasons why generative AI has become so interesting, because now the people in power realize, oh, gosh, like, this affects me, too.
With that said, let me just—a quick heads-up. I have a low-grade amount of anxiety right now because, despite having my machine plugged in—by the way, power, or energy, is one of the big issues with generative AI. (Laughter.)—my battery is slowly declining.
So, hopefully, Jared, some of your, you know, battery chemistry people can fix that. But if I disappear I will return on a backup device. (Laughter.)
DUFFY: Thank you for letting us know.
FRANÇOIS: I think it’s existential risk at play is really what’s going on right now. (Laughter.)
DUFFY: Cam, Jared, what are your thoughts?
DUNNMON: So I’ll take these in order. On the R&D front, the way that I would think about this is that there’s been a change in the activation energy, so to speak. And what I mean by that is, R&D would take directions where you would say: OK, I’m at a certain point; to get to the next point in my R&D process, I need to invest a certain amount, right? And there are certain ways I can do that. And the most inexpensive way I can do it—the most cost-effective way—is, you know, option A over here; and option A is going to give me a good result 10 percent of the time.
Well, now there’s another path where I can maybe get a good result 30 percent of the time for the same cost, and it changes what I’m willing to do, because I say, well, if I can run these experiments for that cost, then I’m actually going to decide to do it versus not do it.
So I think it changes the cost benefit calculation for R&D and deciding which path and technological tree you’re going to explore. So that’s thing one and that’s where, for instance, the computational design pieces are major parts of that. So if you’re able to replace, you know, physical experiments with computational experiments your cost structure decreases massively. So that’s a big area.
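The cost-benefit shift Dunnmon describes can be made concrete with a little expected-value arithmetic. The dollar figures below are hypothetical, chosen only to illustrate the 10 percent versus 30 percent comparison from the panel:

```python
def expected_cost_per_success(cost_per_experiment: float, p_success: float) -> float:
    """Expected spend to get one good result when each trial succeeds
    independently with probability p_success (geometric distribution:
    expected number of trials is 1 / p_success)."""
    return cost_per_experiment / p_success

# Option A: physical experiments at a hypothetical $100k each, 10% hit rate.
physical = expected_cost_per_success(100_000, 0.10)       # ~$1,000,000 per success
# Option B: computational screening at the same per-trial cost, 30% hit rate.
computational = expected_cost_per_success(100_000, 0.30)  # ~$333,333 per success

print(round(physical), round(computational))
```

Tripling the hit rate cuts the expected cost per success to a third, which is why replacing physical experiments with computational ones changes which branches of the technology tree are worth exploring at all.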
On the talent piece, I have a lot to say about this, but I’ll frame it around something that is concrete, which is, on the DOD side, one of the programs that we ran during my time in government was called xView3.
It was, in fact, one of the xView prize challenges—a set of programs that was focused on putting out applications for which there’s a DOD use case as well as a civil-society use case, and putting those things out in a way that was well curated, where you actually had good data, you actually had good labels on outcomes, and you wanted someone to predict one from the other. Like, that’s what you needed to do.
And we went through the work of actually defining: What’s my task? What’s my baseline? What’s my metric? Being very clear about those things; having the data to support it, and the provenance on that data; having thought beforehand about harms analysis—what are the things that can go wrong here, how should we build these models—and communicating that we had thought about those things and those risks. And we put those out in the world. One of them, for instance, was for doing post-disaster damage detection from satellite imagery. One of them was for detecting activity that looked like illegal fishing from satellite imagery. So there was a lot of satellite imagery involved. We put those out in the world and just said: Could people please work on these? Because they’re important.
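The “define the task, baseline, and metric” discipline boils down to scoring predictions against labels with one agreed-upon number. As a toy stand-in (xView3 used a more involved detection metric), here is a plain F1 score over binary labels:

```python
def f1(labels, preds):
    """F1 score: harmonic mean of precision and recall over binary labels."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

labels = [1, 1, 0, 0, 1]  # hypothetical ground truth: is this scene illegal fishing?
preds  = [1, 0, 0, 1, 1]  # a hypothetical entrant's predictions
print(f1(labels, preds))  # about 0.67
```

Once the metric is fixed like this, anyone anywhere can compete on it, which is exactly why the winners could be individuals rather than giant companies.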
If you look at the winners of those competitions, they were mostly not giant companies. It was often individuals or small teams—mostly individuals, actually. And if you look at the countries that they came from, they were from all over the world.
And so, you know, what I would say is that we’re moving away from—yes, and I say this knowing it’s a funny world where, yes, I spent a long time getting a Ph.D. But in reality, that’s what people look at. It’s a proxy for, like, do you know what you’re doing—and it is often not, by the way. (Laughter.)
But in this case, right, I can very clearly define what “do you know what you’re doing” means, because if you can build me something that gets me that output from this input, that’s what I need. And so you can move to that, from the standpoint of talent, without saying: I need this, I need that, I need the other, I need you to go to these universities—which you should; you should go out there and do that. But you can also just state: Here’s what I need you to do, and then let people work on it. So there’s that.
DUFFY: And then let’s actually move on to Camille.
FRANÇOIS: Yeah. I love this example. There are great uses of these technologies for progress. I think about the satellite imagery example. The other one that comes to mind is folks using AI to look for illegal deforestation patches in the Amazon.
Generally, there’s also such fantastic investigative journalism around this. And it goes back to the talent question and the importance of openness, right? Because I think what we’re saying here is, sure, all those fancy American schools are great, but we also want the hackers, we want the pirates, we want the makers, and whoever are these young women in pink pants somewhere, you know, from their basement playing with those models that are going to create new alternatives, new pathways. And I think that is really important both in how we think about the harms, how we think about access, and how we think about this governing debate on open versus closed, right?
Like, right now I don’t think we can say in the U.S. that the education system produces equality at scale, and it’s really important that the way we teach and give access to these technologies doesn’t reproduce those harms. That goes to the question of what we do with massive amounts of inequality. And here I don’t want to sound like the most clichéd French person you’ve ever had on stage, but regulation, right, I think is a good tool for tackling the mass production of inequalities at scale. And the other one, I feel like this is a joke, but: strikes, right, labor movements. (Laughter.) It’s been really interesting to see how generative AI has shaped the labor movement in Hollywood.
I think a lot of people didn’t expect it to come from Hollywood first, but writers have said, like, hey, we don’t want our work to be absorbed, reproduced, and replaced by machines, and we are going to organize. We’re going to rely on our unions, we’re going to strike, and we’re going to build a social movement in order to bring about the change that we want to see.
And so, yes, apologies for this, again, very French perspective, but regulation can help, I think. There’s also a good place in that debate for social movements.
DUFFY: And I’m going to take moderator’s privilege here to just add on this: I think when we consider talent, we also have to think a lot more extensively than we have about what talent is and what expertise is.
If we know that systems are built, right, based on information that is ingested across fundamentally inequitable societies that means that we’re building on some fundamentally inequitable models and understandings of what knowledge is, right.
And so I think there’s really interesting and creative work that can be done. As governments are thinking about this there’s a real push right now to think about the sticks, to think about how to constrain this. I would really love to be seeing more creative outputs from governments in terms of the carrots, right, in terms of how government gets ahead of this and specifically how government gets ahead of looking at the scaling of marginalization.
So if we know that that is potentially coming with some of these models, how do we use government programs, government outreach, and our existing systems to actually get ahead of it and start working with communities? And that is a different type of expertise and a different type of talent: that lived expertise, that knowledge of how bias is going to hit you.
And that is not specific to the United States. It’s going to be the same thing if you’re a Dalit in India instead of a Brahmin, right? So that’s another area where I would really push for more creative solutions in thinking about how we address inequity in particular, and ways that we can be proactive about it as opposed to simply reactive.
Sorry, moderator’s privilege. OK. We have time for, like, maybe three more questions. I’m going to go to the back because we did a lot of questions from the front. So start there.
Q: Thank you so much. I’m Amy Larsen. I’m director of strategy on Microsoft’s Democracy Forward team, working on AI, elections, and the information environment in Ukraine, among other issues.
DUFFY: Are we moving forward, Amy?
Q: Yes. Yes, we are trying. It’s a team effort. Team support always.
I really appreciated the framework that you laid out, Camille, and I’m just curious about each person’s sort of perspective and their thoughts about how you would sort of move through that framework and whether you’d add anything as well.
Q: Thanks. I’m Heather Hwalek from the Bill and Melinda Gates Foundation.
Just a quick plug. Through our Grand Challenges initiative earlier this month in Dakar, Senegal, we awarded nearly fifty grants to locally-led projects seeking to innovate on community-driven AI applications towards global health and development.
But my question is not about that. My question is for these three practitioners of AI who spend a lot of time thinking about this. On a very personal level, talking about harms, be they existential or immediate: what is the one thing that keeps you up at night about AI?
DUFFY: And there in the back.
Q: Hi. My name is Imani Franklin. I serve as counsel in the office of Elizabeth Warren.
I’m concerned about monopolistic trends in the AI space, and I’m curious what you think developing public options for AI—public cloud infrastructure, public data resources—could look like, and what implications that would have for some of the trends that you discussed, Camille.
DUFFY: Fantastic. OK. So, to come back: we’re starting with Camille’s framework, right, then what is keeping folks up at night, and then how we think about the monopolistic trends. Michael, I think you heard that question as well, right?
So why don’t we—why don’t we reverse? Camille, can we start with you?
DUFFY: And then we’ll flip back. Michael, how’s your battery? Are you good, man? Should we start with you? Should we start—let’s actually go to Michael.
FRANÇOIS: Go to Michael.
DUFFY: I’m really—I’m really big into contingency planning. Let’s start with Michael.
CHUI: Like, a resiliency problem here. I love Camille’s framework, too. I don’t—I don’t think I could go through all of them. Let me just share two things.
One is we’ve been surveying thousands of executives on their views on risks in general. While in forums like this we talk about those risks a lot, most actual companies, with the exception of cybersecurity, don’t recognize most of the risks that we’ve talked about with regard to AI as being relevant to them. And, furthermore, even fewer have tried to mitigate them. So I think that’s a real challenge.
I think openness is really interesting, right, because to a certain extent, if you believe in existential or other risks, having these open systems where anybody can use them and see them, that transparency is terrific. But it also means all kinds of intentional or malicious actors can use these technologies when you’re extremely open. So it gets really complicated.
Let me really quickly touch on this question about consolidation in the industry. I think one thing to think hard about is that the old idea that it takes billions of dollars to train a foundation model turns out not to be as true, at least if you’re away from the frontier.
So while it might have taken, you know, a billion dollars to reach GPT-3.5-level performance, that’s down into the millions now, and while I don’t have that in my back pocket, around here where I live in Silicon Valley it’s not that hard to find a few million bucks. But there are other reasons why you might see consolidation, whether it’s consumer preference or, you know, all kinds of other commercial things.
So I think how you want to think about industry structure there is a little bit complicated. So anyway.
DUFFY: Fantastic. Camille, over to you and then Jared. But we’re almost at time, so let’s try to do quick answers.
FRANÇOIS: Yes. I will disclose where I live in my own framework, and before I do that I will say thank you for working on electoral integrity inside of the company. I know it’s not fun every day, and I’m so grateful that those big Silicon Valley companies continue to invest in those topics. It’s not a given and it matters a lot. So thank you for the work that you do.
When it comes to the framework, I will admit that I sit on the side of immediate harms in the immediate-versus-existential debate. Yann LeCun, who’s one of the architects of those new generative AI systems and who leads AI at Meta—I keep on calling it Facebook—recently wrote a piece saying that he thinks this existential risk is completely overblown. He says that current AI is as dumb as a cat, which is not very nice. I have a very smart cat. I wouldn’t say that just yet. (Laughter.)
DUFFY: I would like to say that I am too.
FRANÇOIS: Yeah. You know, I think that a lot of people who’ve played with those systems can see that, yes, it does something that’s a little bit magic. But I think we can also say we’re not yet at the existential risk and a lot of these harms that are already manifesting are really impacting societies, are impacting freedom, are impacting human rights, are impacting equality. I think they’re extraordinarily important and tactical now. So this is where I live on immediate versus existential.
On open versus closed, I live strongly in the open camp. I think it’s going to be extraordinarily important for innovation, for competition, for equality, for hackers and tinkerers to get access to these models to create smaller models focused on immediate harms, too.
And, finally, on new governance mechanisms versus old governance mechanisms, this I really don’t know, because, as you said, Kat, it’s head-spinning to see all those new bodies, governance systems, and mechanisms coming out of the hat at such a pace.
So I don’t know. I think at the end of the month I will look at all the proposals that all of those different bodies have put on the table and try to figure out which ones are best positioned to tackle those immediate harms while ensuring that those models remain open and accessible to a broad number of people.
DUNNMON: Yeah. So on the framework I would say I tend a little bit more towards the immediate than the existential and this has to do with what keeps me up at night. What keeps me up at night is not, you know, a model taking over the world.
What keeps me up at night is someone building a model and not documenting it well, and someone else going and using it for something it wasn’t supposed to be used for, and it breaking. Like, that’s what keeps me up at night, and that almost certainly will happen, all the time, unless we are careful. And that’s the collective we: everyone who builds and deploys these systems, particularly if you’re building for someone else.
On the open versus closed piece, I actually think about this more from a practical perspective. I think the release of the Llama weights from Meta was instructive: they tried to do it in a controlled way and it did not happen, right. And then eventually they said, like, OK, well, we’re just going to kind of make them open.
DUFFY: And we meant to do that the whole time.
DUNNMON: Right. So the point is, I think there’s a practicality question: is it practical to keep these things closed if open source is following so fast? There’s a pretty good blog post, I would argue, written by my postdoc advisor, asking, you know, is AI having its Linux moment, from an open-source perspective.
Everybody can see this stuff. You know, we can all look at it. We had these same exact conversations about Linux in the OS space. Now, there are differences in terms of interpretability, in terms of inspectability, in terms of can we see all the data, et cetera.
But that’s kind of where I fall: you’re almost certainly going to have highly capable models out in the open, and we need to design for a world in which that’s true, regardless of whether we have closed models that continue to be a little more capable.
And then the last piece, on the public option: I think we absolutely need to make sure that we have resources for folks who are not at these very highly resourced places to do AI research, for reasons that include some of the equity and quality reasons mentioned earlier, but also because most of the innovations that drove these large scalable systems came from a world where we didn’t have them yet.
So if we keep doing research in a world where you assume you can only do it when you have these large scalable systems, you don’t actually do anything interesting, because you’re just focused on, like, moving to the next thing versus thinking, OK, why am I doing what I’m doing? I think some of these public options, and getting the folks who would use them onto those systems, are critically important to making that progress.
DUFFY: And I will just—I’ll take moderator’s privilege on this one as well and just say what’s keeping me up at night is that we have entered what I call a post-market pre-norms world and we’re going to be there for a hefty chunk of time, right. And when I say norms I’m not just talking about governance. I’m talking about societal norms, right?
We increasingly have generative AI tools for image production, for video production, for audio production, right? What were formerly nation-state capabilities to produce completely fake and/or altered information are now going to be available literally for $1.99 in an app store. And the thing is, that’s going to be available for all sorts of reasons that we fundamentally want to protect as well.
We want to protect satire. We want to protect creativity. We want to protect play. We want to protect experimentation, right. And so I think we’re going to live in the land of voluntary principles, right?
But voluntary principles without teeth, without accountability, aren’t particularly meaningful and we don’t actually have strong frameworks for voluntary principles that have accountability mechanisms associated with them that we can peg off of.
And so what’s keeping me up at night is the fact that all of the decisions that we’re talking about, all of the governance that we’re talking about, is potentially going to be taking place in what I think of as an emerging post-truth world. And there is a phrase that many of you might have heard, coined in 2018 by Danielle Citron and her colleague: the liar’s dividend. I think this is an area that’s going to be of deep concern in the foreign affairs space.
The liar’s dividend is the idea that if everything can be fake, nothing can absolutely be true. So if something is true and it’s awful, it’s easy to discount; and if something is awful and not true, it’s easy to argue that it is in fact true. And the more advanced LLMs get in different languages, the more we will see that capacity migrate across the world, into countries where there’s significantly less rule of law, significantly less capacity, and significantly less media, for example, to vet and validate it. And so that’s what’s keeping me up at night.
What does not keep me up at night, however, is the incredible capacity of emerging leadership that we have especially here at the Council to grapple with these issues, right? And so for those of you who raised your hands at the beginning and were, like, I’m an AI person, I hope you’re coming out of this feeling, like, galvanized and like you heard something new.
For those of you who did not raise your hands and who were, like, this is not my area, I hope that what you take from this conversation is that this is in fact absolutely your area, and that you are going to bring critical expertise and capacity to these questions. I hope that you will engage and put yourself into it, because it is going to be one of the leading questions of our time.
And so with that, I would like to thank profoundly first our colleagues in the meetings team for arranging this fantastic session and for doing all of the work that they’ve done. (Applause.)
I would like to thank—(applause)—the wonderful Council members who joined online. I apologize that we were only taking questions from the crowd here in the room, but it’s the Term Member Conference, so we prioritized the term members who are in the room. And we have a term member on the stage, so let’s give it up for having a term member on the panel. (Applause.)
And finally, certainly last but not least, I want to thank Michael and his battery, I want to thank Jared, and I want to thank Camille for joining us today. I’m so appreciative of you all taking the time.
I think there’s going to be coffee now. Is that right? If anybody can hang for a few minutes. Thank all of you. I look forward to seeing you throughout the rest of the day. (Applause.)