Meeting

Transition 2025 Series: National Security in the Age of Artificial Intelligence

Monday, May 12, 2025
Speakers

Jack Clark
Co-Founder, Anthropic; Co-Chair of the AI Index Steering Committee, Stanford HAI

Will Hurd
Chief Strategy Officer, CHAOS Industries; Member, Board of Directors, Council on Foreign Relations

Adam Segal
Ira A. Lipman Chair in Emerging Technologies and National Security and Director of the Digital and Cyberspace Policy Program, Council on Foreign Relations

Presider

Kat Duffy
Senior Fellow for Digital and Cyberspace Policy, Council on Foreign Relations

Panelists discuss how artificial intelligence is reshaping the national security landscape and how government and technology leaders can respond to emerging threats, protect critical systems, and manage global competition.

This meeting is part of CFR’s Transition 2025 series, which examines the major foreign policy issues confronting the Trump administration.

DUFFY: Thank you so much. And thank you to everyone for joining us today. I am delighted to be presiding over this discussion of “National Security in the Age of Artificial Intelligence” with some incredible speakers. Jack Clark, the co-founder of Anthropic and the co-chair of the AI Index Steering Committee at Stanford HAI. Will Hurd, former congressman, chief strategy officer of CHAOS Industries, and a member of the board of directors of the Council on Foreign Relations. And Adam Segal, my colleague, the Ira A. Lipman chair in emerging technologies and national security, and the director of the Digital and Cyberspace Policy Program. Gentlemen, thanks for being here today. We have a lot of interest in this topic. And I’m so excited to talk with all of you.

So I wanted to start off by asking you each, you know, when we say “national security,” that can mean a lot of things to a lot of different people. And so for each of you, as you come to this topic of national security in the age of AI, what is your particular lens on national security? What are the components of national security that you are prioritizing? What’s your lens? And, you know, Jack, I’ll start with you.

CLARK: OK, great. Well, I really approach national security and AI by looking at three different things. One is understanding how good we are at actually utilizing it. And, you know, I’ll talk more about this throughout the session, but it has amazing applications for the national security enterprise. And figuring out how to get the technology inside the system and doing useful work is an important part of it. Then there are two other really important factors. One is understanding the state of global competition. You know, how are other nations doing at it? How are other nations doing in terms of both the technology frontier and utilizing it themselves? And then the third concern is, how can AI systems be used to kind of compromise or affect national security? How can they be kind of misused in ways that could harm national security or be weaponized? So those are the lenses through which I approach it, from Anthropic’s perspective.

DUFFY: Thank you. And, Will, what about you?

HURD: Look, I’m going to pick up on a thread that Jack was talking about. Are we using it, right? Look, when I was in Congress we had this thing called FITARA. And we were evaluating whether people were actually transitioning to the cloud. This was, like, 2016. And people thought the cloud in 2016 was, like, new technology. It’s, like, no, this has been around for a long time. So for me it’s, is the CIA using this to enhance how they do their jobs? Is the NSA using this to enhance their operations? Are we wringing our hands over the ability to use these internally and potentially go on prem? So, for me, it’s how are we using it?

And then when I put my old congressional lens on, too, I think in a very broad sense of, are we educating the population to introduce this into their lives? And the answer is no, right? At the end of last year, Pew did a study of K-through-twelve teachers; 25 percent of teachers thought AI was going to lead to worse outcomes rather than better outcomes. Only 6 percent said better outcomes, right? That’s scary, especially at a time when you see our biggest global competitor, the Chinese government, enforcing this, having six-year-olds start using and adopting AI, and having the goal of an AI-literate population by 2030. And so if we don’t have people that know how to use it, they’re definitely—you know, in a broader sense they’re definitely not going to be able to apply it to their jobs in the intelligence community.

DUFFY: Fantastic.

And, Adam, over to you. How are you thinking about it?

SEGAL: Yeah. I think Jack and Will covered kind of the main categories. I’ll just add, I think, two others that kind of come up when we think about AI’s impact on national security.

One is just the—how proliferation is going to empower other actors, right? So we’re focusing a lot on nation-state competition. But you know, cyber—when we first started talking about cyber, we were worried about the hacker in the—in the basement or terrorist groups, and that threat really turned out to be overstated. But AI around bioweapons or cyber could, in fact, turn out to be the threat that we thought it was.

And then I’ll just add kind of uncertainty. And I think we’re in a period now of just huge uncertainty about its impact, which in itself creates a kind of instability that’s worrying to national security. So we just don’t really know what the impact of AI is going to be. We don’t have a great sense of who’s winning, if anyone’s going to win. And that may tempt people to do not-so-smart things if they think there’s a window of opportunity open to try and stop other people from doing things. And so I think that is also having an impact in this space.

DUFFY: Absolutely.

And, Jack, I’m going to turn it back to you, because this question of sort of AI’s proliferation and deployment, right—I think for many years in the governance space there’s been a tremendous focus on either the stack that AI is built on, right—we’re thinking about chips, right—and then also thinking about the frontier models, right, the foundational models, and how those should be governed. But we also are moving from a space of building to deployment with these large models, and you have, of course, like, a front-row seat to some of the most cutting-edge work. For you, what are the specific AI capabilities or applications that you believe pose the most immediate national security risks in the next sort of two to three years versus longer term?

CLARK: Well, I think—I think Adam raised a really important point, which is there’s actually—there’s kind of two types of risks, right? There’s risks that come from what you might think of as expensive, elite, targeted utilization of the technology, and that can lead to really, really scary things. And there’s risks that come from just commoditization and proliferation of capabilities to make a load of stuff easier. So I think—and I’ll just sort of break this down.

Like, we think that AI today can just accelerate people who want to do what you might think of as like cyber mischief, which isn’t the kind of thing that’s going to, like, massively damage national security in a single attack but is going to make it easier to do, you know, phishing or low-grade hacking or low-grade forms of compromise, and just generally uplift people who are doing bad stuff in cyber today. So that’s one axis in which AI is changing the environment.

And then the other, which is more on the specific national security kind of elite-tier risks, is AI and bio, where I think that there is credible information now that, you know, we and other labs have found that AI models can provide a meaningful uplift to human experts trying to do bad stuff in bio, and they can also educate kind of non-experts to get them to do dangerous stuff. And it seems like bio has—bio’s like a load of loaded guns lying around on a table. It’s like full of, like, quite dangerous, scary things, but it’s been hard to access the knowledge to do stuff with it. And AI might make it easier for people to access that knowledge, and can make it easier for experts to do targeted things.

And I promise at some point in this conversation I’ll sound more cheerful, but you did ask me about the security risks. (Laughter.)

DUFFY: You know what? I like to—I like to frontload the bad news so that we can move towards the good news and leave people feeling more enlivened.

Will, can I ask you to take that on as well? Because you’ve also thought about this not just in terms of the emerging capabilities; you also think about it with such a clear mind about who will have the capability to take them on, and whether we have the capability inside the government as well to seize them. So, for you, what are the greatest risks that you’re seeing in terms of the capabilities?

HURD: Sure. And I’ll say what I think the greatest risks are in the next couple of years, right, like initial, because I held the first hearings on AI in Congress, right? This was in 2016-2017. It’s crazy to think that was the first hearing. And at that hearing, I thought we were going to be talking about how do I get my lunch dropped through the sunroof of my autonomous driving car while I’m barreling down Interstate 10, right? At that time, I wasn’t worried about cybersecurity incidents increasing.

I wasn’t worried about broader autonomy on the battlefield. And I think the immediate issue—and we’re seeing this; it has played out in Ukraine—the immediate issue is autonomy of warfare, the ability to overwhelm. The phrase “exquisite system,” right, has a unique background in defense. These are these massive, tens-of-millions-of-dollars systems—you know, a $6 million radar that can get blown up by a $20,000 Shahed drone because of the AI-enabled ability to erode that system; the ability to do jamming so that you’re signal hopping and doing all these fascinating things, you know, on the edge with that drone or with that device to get through systems; and then spoofing, the ability to do spoofing on the battlefield.

So, to me, this is having an immediate effect on the battlefield. We’re seeing it in Ukraine. And I can make the argument that Ukraine would be fairly unsophisticated when compared to the first island chain and what we would potentially see in a conflict with China. So that’s one area.

And then—and then on cyber.

DUFFY: Well, can I—can I pause for a second to ask you, because we have a very broad audience, can you explain what spoofing is?

HURD: It’s making something look like something else, you know. And, look, there was something that happened in Jordan at the end of last year. It was called Tower 22. And three Americans were killed, and forty Americans and Jordanians were injured, because of an adversary using a drone, and part of what was involved was spoofing. It looked like something else, right? And so that’s what we have to deal with.

And then in the cyber realm, look, some of these companies that evaluate the number of phishing and spear-phishing attempts that come in, they say that there’s been a 4,000 percent increase in the volume of phishing campaigns, right? And that’s a 49 percent increase in campaigns written by AI that have gotten through digital defenses and hit systems. So, as Adam mentioned in his opening remarks, it’s the ability to see an increase in that, and in sophistication, and the volume is going to overwhelm. And again, we’re seeing that on the battlefield, and we’re going to start seeing it when it comes to peer cyber defenses as well.

DUFFY: And, Adam, that’s such a good lead in to you. You’ve talked a little bit about cyber, but if you want to think a little more, like, specifically, are there particular applications, are there particular systems that you would be concerned about for you—again, that sort of immediate-term risk? What are you seeing?

SEGAL: Yeah. I mean, I think with all the interest so far there’s, like, two strands: like, the specific risk; and then this question about diffusion and deployment.

And I think on the specific risk on cyber, which both Will and Jack alluded to, right, we’re seeing it mostly being used for scaling, for spear phishing, for helping with language, right, so foreign state actors can get the language correct. We’re seeing it in online operations, being able to design bots that respond to people in real time based on certain types of characteristics.

I think also, as more and more companies move to agentic AIs inside their systems, the security and safety of those things are going to be, you know, really questionable. And they themselves are going to be targets of attacks. But I think, you know, as we move out of this two-to-three-year period, we’re going to start seeing the ability to map critical infrastructure and design attacks that cause cascading effects outside of systems, and be able to do more disruptive and destructive attacks, which traditionally have been actually pretty hard in cyberspace because you need to understand complex systems and legacy systems.

DUFFY: OK, but so now I want to flip it. And I’m going to stick with Adam and ask him to just flip the coin, and let’s go positive for a minute. It’s very easy to get very caught up in all of the risks, because we’re looking at a future that we haven’t seen, right? So it’s a little bit hard to assess it. But for you, Adam, what are the greatest opportunities that AI could offer us in terms of national security?

SEGAL: Well, on the—on the positive side, on cyber, I think it’s also going to make us better on defense. It’s definitely going to—we already see it being used pretty widely in helping drive down false positives and doing some self-writing, self-healing code, and being able to use those to defend. I think the most positive impacts though are going to be on logistics, and support, and some decision making. All those things are going to happen in a pretty short term, and that’s going to, I think, you know, both help on cost but also free up mental space to make, you know, more important decisions. So I think those are things we’ll see very short term.

DUFFY: And, Will, for you, what do you—how are you thinking about the opportunities?

HURD: Look, I take a—let me be very narrow on this answer, right? So my job—I started at the CIA when I was twenty-two. My job was to recruit spies and steal secrets. Best job on the planet. I was in India two years, Pakistan two years, did some interagency work in New York for a year and a half, and spent a year and a half in Afghanistan running all their undercover operations.

Now, look, I’m from South Texas. I’m not the greatest linguist, right? I only speak dance floor Spanish, right? The ability, you know, for me to leverage AI in India and Pakistan, not only to translate stuff that my agents were sending me but to give me broader context—that exists today. Before you go to a post, you do a thing called a file review, right? When you go in the CIA you have old business and new business. You’re recruiting new people, but you have existing assets. And to be able to look in the file at every piece of intelligence an asset in the past has been able to provide—the ability to glean that to improve operations is pretty significant.

You would do a thing called a file review, where you’d read everything. It took you, like, a week, week and a half to do one, if you were, like, diligent all day long. Being able to do that now in minutes, and to query it as you go along—the ability to do that is going to improve my capabilities to go out and ultimately do my job. And that’s why my first point was, you know, using this in order to be more effective in this changing environment is, I think, exciting and amazing.

DUFFY: And, Jack, you know, Anthropic has certainly been, I think, increasing its engagement with the U.S. government steadily over the years. You have more of a D.C. presence now. How are you all thinking about the opportunities? And, without being a particular shill for Anthropic, please, what are the capabilities that you are seeing in your own builds that feel really unique, really different, things that could be transformative for the positive?

CLARK: So I actually think that it’s less about really specific, amazing capabilities, and more about the fact that we have a true general-purpose technology. And by “we,” I just mean the AI industry has developed large language models. You can buy them from us or, like, many other providers will sell you equally good ones, and they’re all extremely powerful. And I was thinking about this question in the context of how huge amounts of the national security enterprise are just, like, a large bureaucracy machine. There are huge amounts of, like, information that you are constantly looking for. You’re reading reports. You’re reading intelligence reports. You’re looking at network logs. You’re looking across, like, loads of things in your system. And it’s all stuff that is amenable to being used by this kind of technology to become more accessible to you.

And I’ll just give you maybe two or three examples to help you understand it. Like, today all of the AI companies have automated, like, large chunks of themselves using this technology, in a way that we wish we could get the national security enterprise to. There is, inside Anthropic, a Slack channel that people can use to ask questions about anything that’s written down anywhere in the company. So, like, if I need to file a bit of code, what is the security procedure? Or, today I’m staying late so I asked it this morning, what time is dinner served, because I’m going to be in the office late. It looks across every bit of information that we’ve published as an organization internally, and then uses AI to bring you the answer.

And stuff like that sounds really mundane, but in the aggregate it adds up to, like, literally thousands of hours a week that are being saved in terms of people’s time that would otherwise be spent asking these questions to each other, where the question form is you saying, hey, like, how do I do this code thing? And they say, you should look at this document. If you can just, like, get rid of that stuff, it’s amazing. And then the final thing is, you know, anything that involves writing code can be kind of massively accelerated now. You know, many of my colleagues now write three or four times the amount of code that they used to on a per capita basis because they have access to this. So we just have this wildly general-purpose technology.

And I think the real challenge is that the U.S. government is used to buying things that have specific capabilities for specific programs. And what this technology is, is actually just a, like, highly generic technology that you need to find a way to, like, bring into the building and use in a kind of experimental way broadly. And I think that that challenges notions of, like, procurement, and how one even thinks of buying in tech itself.
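(For readers who want a concrete picture of the kind of internal question-answering setup Clark describes—retrieve the relevant internal documents, then have a model answer from that context—here is a minimal illustrative sketch in Python. The document store, helper functions, and model alias are hypothetical placeholders, not a description of Anthropic’s actual system.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import anthropic

# Hypothetical internal knowledge base: filename -> document text.
internal_docs = {
    "security.md": "To commit code, open a merge request and request a security review from the infra team.",
    "facilities.md": "Dinner is served at 6:30 p.m. for staff working late in the office.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank internal docs by TF-IDF similarity to the question and return the top k."""
    texts = list(internal_docs.values())
    vectorizer = TfidfVectorizer().fit(texts + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(texts)
    )[0]
    ranked = sorted(zip(scores, texts), key=lambda pair: pair[0], reverse=True)
    return [text for _, text in ranked[:k]]

def answer(question: str) -> str:
    """Answer a question using only the retrieved internal context."""
    context = "\n\n".join(retrieve(question))
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Answer using only this internal context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(answer("What time is dinner served tonight?"))
```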

DUFFY: I’ve been waiting, I’ve been counting, and it took us twenty-one minutes to get to the word “procurement.”

CLARK: Yeah. You should have your bingo card and cross it off.

DUFFY: Which is actually quite impressive, I think, for any conversation involving AI and national security. I think my previous record was seventeen.

CLARK: Well, actually, maybe I can say one thing, because it might be useful for the audience.

DUFFY: Sure.

CLARK: It’s, like, if you were—if you were a defense contractor, a lot of selling to the government makes sense. If you’re a private sector startup that formed your business to go after consumer and enterprise markets and then you went to government, selling to the government is a wildly complicated endeavor. And it requires you to make tons and tons of investments, and kind of crawl across, like, broken glass, just to be able to have the privilege of selling into USG. And it’s kind of wild to me how hard we make it to do business with governments in general, compared to, like, any other customer. And, you know, since you said we got to the “procurement” word, I had to have my useful rant, because I think it’s worth letting people know that it’s, like, perplexingly difficult to sell, even when the government wants to buy the thing. (Laughs.)

DUFFY: Yeah. (Laughs.) Well, and so, Will, I want to turn this next question back to you, because you’re in the unique position of having sat inside, you know, the executive branch, in a position where you would be using these types of tools and know how they could help you. You sat on the congressional side, where you’re trying to both push reform but also ensure good oversight, right? And now you also have sat on the industry side, both in terms of boards and now your role at CHAOS. And so I’m going to ask this of all of you, but I’m going to start with you, Will. If you could just snap your fingers—it’s done, it exists, that’s it—what is the one policy change or shift that you would just create, in a world where you could snap your fingers and it would be so, and it would help us maximize the opportunities and minimize the risks?

HURD: An on-prem AI brain that’s cut off from everybody except for that entity. That’s what I—

DUFFY: And can you—can you describe “on prem” for the audience?

HURD: OK, so it’s almost like we’re going backwards, right? Like, everybody had these massive datacenters on their systems, and then we went into the cloud, and it was, like, hooray. And now it’s, like, wait a minute, we’ve got to wall off some of this information. Because, look, here’s the difference. A large hotel chain, if they lose their information, the impact is to the consumer. And, yes, they may have a credit card stolen and have an impact, but there’s redress. But if all of the CIA’s or the NSA’s, or certain parts of Treasury’s, information gets out, right, or is affected, like, that has tectonic implications. And so doing things within Uncle Sam is always going to be a little bit harder because of the impact.

But being able to have that system—and, again, when I say “on prem,” I probably shouldn’t say that, because to tech people that means a certain thing. But it’s an accessible model that has all the benefits of the public model, the evolutions, the changes, but it is restricted and walled off so that only people within that agency are able to have access to it. And then I would even go further and bifurcate that within the different agencies, where only certain people have access to it, the same way you would permission access to data, you know, already. If we could do that—and that requires the ability of Uncle Sam to buy a service like that, but it also requires providers that have the technical capability to maintain something like that when it’s hard to update over the air—I think we would be able to see all those positive things that Adam, Jack, and I have been talking about come to fruition a lot quicker.

DUFFY: And so, Jack, I’m going to turn that over to you. I’d love to hear both if you could snap your fingers what policy shift you would see, but also, I’d love to hear your response to Will in terms of what do we have in the current builds and the way that they’re deployed that would allow for that, and where are there still structural obstacles or technical obstacles that would need to be overcome?

CLARK: I mean, I think that we—and whenever I say “we,” I mean the industry writ large—can do most of what Will says. But the real challenge is information access and silos inside these organizations. Because this stuff is only really useful insofar as it has access to things, and especially in the case of, say, you know, where Will used to work, the CIA. Many compartments, many different, like, access controls, and many different formats of information. All of that stuff is something you’re going to need to work through to get the most out of this. And each organization has its own kind of almost unique quirks here.

If I could snap my fingers, I would massively scale up the U.S. AI Safety Institute, which today builds our testing and measurement systems for understanding risks to frontier AI systems. And the reason for that is testing is kind of fundamental to having the confidence to deploy this stuff. You know, we’re able to deploy our technology broadly because places like the U.S. AISI help us test it out for bioweapons and give us confidence we’re not going to proliferate good bioweapon tech onto the internet.

But I think if you resource the AISI enough, you could also tie it potentially to procurement, where now you could simplify getting this stuff in the building by having tests that the U.S. government ran to figure out if stuff had performance on tasks that the U.S. government wanted it to do, and then tie this into some way of getting that technology into the—into the places where it could do the most good.

DUFFY: Thank you. And, Adam, I’m going to turn it to you. And I’m going to ask you the snap-your-fingers question, but I’d like to ask you first to respond to what Jack said, in that the AI Safety Institute was one of, I think, eleven such institutes around the world that were set up. And, you know, you famously helped create the first cyber digital strategy for the State Department this past year. So you’ve also thought a lot about international cooperation and the geostrategic concerns here, in terms of which country will be the prevailing country that people are looking to for their AI procurement and builds. And so if I could ask you to first just weigh in a little bit on what Jack said, in terms of how you would see America’s role or an international role in that regard—in terms of being able to help partners and allies vet these types of things as well, or come to standards—and then, if you could go to the snap-your-fingers question, I’d appreciate it.

SEGAL: Sure. So I think, you know, one of the things that’s happened in the last, what, four or five months after DeepSeek has been this focus on diffusion, and how the race is not just, you know, which performance metrics you are hitting and who’s getting there faster—how important the diffusion is, both domestically and internationally, and how the U.S. competes on that. So partly that has to do with the models, right? Are they open-model or open-source? But I think it also has to do with convincing people to use your models. And, as Jack said, you know, clearly safety, and trust, and accountability, and some—intelligibility are all things that would go a long way to convince others to use U.S. models, and give the U.S. a, you know, known locus for those discussions.

I mean, one of the problems with the U.S. government for these types of discussions is that there are, you know, often too many voices, right? The agencies go off and discuss things, and people in much smaller countries don’t know who they should talk to—you know, should they go to the State Department, should they go to Commerce? And so having the AI Safety Institute in many ways solves that problem. They all say, oh, if we’re going to talk about international safety standards, then that’s where we’re going to go. So one of the things that we want to be thinking about is how do you shape those standards internationally, and the institute was a clear place to be able to do that and promote it.

For my snapping my fingers, I had the institute on my list too, but Jack took that one, so I’m going to go with workforce development—(laughs)—in the sense of, you know, we knew globalization was happening. We knew the IT revolution was happening. We talked about workforce development for a long time and we did very little. And we know the AI revolution’s going to happen. People are starting to talk about workforce development. But again, I have serious worries that we’re not going to keep pace with it. And so if I’m going to snap my fingers, it would be around addressing partly what Will said about people’s attitudes towards it, but also preparing them to use it and preparing for whatever dislocations come from it.

DUFFY: Thanks. Fantastic. Thank you.

I want to assure our—the members and guests who are joining us today that we’re going to turn to a Q&A shortly. I know there’s—we have, you know, well over 250 people logged in.

And before I do that, I’m going to do one sort of last popcorn round. Of the 266 participants we currently have, I suspect all 266 of us were somewhat neurotic about doing our homework, and so I want to put all of you in the role of professor for a minute. And, Adam, I’m going to start with you, and then I’m going to move to Jack, and then I’m going to go to Will. Rapid response. Assign reading. So what’s the piece or who is the expert that you believe everyone watching this today should have on their must-read list? Adam, to you.

SEGAL: So last week I read a piece by Andrew Lohn, who was in the NSC and now is, I think, back at Georgetown, that tries to map the entire landscape of how AI will affect cyber. Now, he doesn’t have any answers, but he basically breaks down the multitude of ways that it’s going to impact different parts of cyber. And it’s very useful in saying there’s not going to be one answer; there are probably going to be hundreds of answers, depending upon which part of cyber risk we think about.

DUFFY: Fantastic.

And, Jack, what about you?

CLARK: I’m going to assign watching rather than reading. Basically, within the AI community there are lots of debates around existential risk from AI systems or, you know, the possibility of AI systems being—taking off and harming humanity. And it’s basically impenetrable; it’s, like, a really hard debate to orient yourself around. And there is a good YouTube series called “Doom Debates” by a guy called Liron Shapira, who is one of these people that believes AI could be really, really, really bad. But what he does is he does long-form interviews with other people which are actually very sort of fact-based, very, like, calm and collected, not fight-y, and him just sort of trying to talk through people’s reasoning and unwrap why people have different perspectives on this question. And I’ve found that to be a really helpful and accessible way to both understand people like Liron who are worried about it but also hear a range of perspectives that he sort of treats well in the form of long-form interviews. So something to listen to while you do chores, perhaps.

DUFFY: Fantastic. I clutched momentarily when you said watch something because I was really—I was like, is he going to say Terminator?

CLARK: Oh, no. (Laughs.)

DUFFY: Will, what about you?

HURD: Well, if you—if you don’t have an alert on for anything that Adam Segal writes, then shame on you. That’s step one.

But for me, I try to read anything by—“philosopher” is the wrong word—I think it’s Li Bicheng from China. Whenever he writes anything in Chinese, I use an AI tool to translate it. RAND did a report on him a couple months ago; that was pretty interesting. But this is a guy who’s talked about how China should be more aggressive in using social media manipulation. He’s talked about how the Chinese government should be using AI a lot more in order to surpass the United States as the global superpower. But Li Bicheng.

DUFFY: Fantastic. Thank you all.

OK. And I know that we have some questions in the audience, and if you do have a question please just go into the Zoom and raise your hand. And so, with that, can I turn it over to my colleagues to call on our first question?

OPERATOR: (Gives queuing instructions.)

And as a reminder, today’s discussion is on the record.

We will take the first question from Shaarik Zafar.

Q: Hey. This is Shaarik Zafar at Meta. Adam, Kat, great to see you again. This is a terrific panel, and I agree that you should read anything that Adam Segal writes.

Just a quick question, so I’ll be really fast. In light of DeepSeek, Adam, you know, how should we think about diffusion of open source, also values and the ability of Global South countries to access models that are expensive?

To the former congressman, in addition to procurement, what about maintenance, the ability to avoid vendor lock-in, which can potentially save millions of dollars, you know, and the ability for, you know, your former colleagues in the IC or DOD to build themselves air-gapped systems to allow for on-prem solutions by giving them the flexibility, including through open source?

Thanks so much.

DUFFY: All right. Adam, should we start with you?

SEGAL: Sure. Nice to hear from you, Shaarik.

So, yeah, as I said before, I think it’s been a really important intellectual inflection point after DeepSeek, where there’s been a kind of focusing on, if we’re thinking about competition globally, do our models match the needs of emerging economies—and looking at what Chinese firms are already doing on the ground, modifying their models to those needs, providing cheaper cost, providing more access, but also, you know, just doing so much more training and providing access to the entire digital stack. So I think the real long-term concern for the U.S. is that we now have to think about, you know, not only are we the fastest and are the models, you know, doing the best on the reasoning, but are they accessible to people.

DUFFY: And, Will, over to you. Maintenance.

HURD: Look, on the issue about maintenance, I’m actually not going to criticize the agencies for doing this. This is a long-term funding issue and Congress needs to actually pass appropriations.

And look, you know, everybody on this call that has spent some time in D.C. is going to laugh when I say this, but I would say that we should go to an appropriations cycle of at a minimum two years, so a new Congress gets to vote and you still do oversight and review. I would even go as far as four years. But agencies and departments have to have the ability to do multiyear planning, not weekly planning, which is really what they’ve been having to do with this crazy budget cycle and appropriations cycle we’ve been stuck in the last couple years.

DUFFY: It’s a great point.

OK. Next question.

OPERATOR: We’ll take the next question from Alan Raul.

Q: Hi. Thank you. Alan Raul, lawyer at Sidley Austin, lecturer at Harvard Law School. Great panel. Really terrific. Thank you.

On, by the way, who you should read, I suggest Eric Schmidt. He’s terrific on national security and AI, and certainly on AI generally.

My question follows up, I think, what Will Hurd raised about spoofing, and it’s another version of it: foreign malign influence online and social media to weaken our society and democracy. Do you all see that as a significant national security issue? And if so, is there anything we can do about it given our state of polarization and stalemate? Thank you.

DUFFY: Why don’t I—well, which of you would like to take that?

CLARK: I might just start quickly. We build a lot of technology to monitor our platforms for risks, and we do a lot of this around elections to try and see if people are trying to misuse our technology. I think that generally we’ve seen less use of AI for what you might think of as, like, influence operations or propaganda than many people had feared years ago, and more use of it for what you might think of as, like, low-grade kind of marketing or commercially oriented disinformation, rather than national security-oriented stuff. So I just wanted to share that context from inside the AI platforms. And I think that if you look at reports from OpenAI, and Google, and others, it seems like they see relatively less of this as well, relative to expectations a few years ago.

DUFFY: Thank you. Adam, Will, anything that you want to add?

SEGAL: So I think, actually, the most recent report from you guys, Jack, on online use has, I would say, two worrying trends. I mean, generally I think it’s not that big a threat, for the reasons that you suggested. But I think the two worrying trends were, first, that engagement went up. So, you know, usually we find these bots don’t really, actually, you know, talk to anyone, or they have a very small number of followers. But with the use of some of the tools, the engagement went up, because the bots themselves could model their response on what they thought the person wanted to hear based on, you know, their bio, or their picture, or other things that they could do. So they were learning to engage more dramatically. And, second, the content was spreading more widely.

I think, Alan, to get to your question, though, responding to it is very hard given the ideological divide about content moderation, right? I mean, we saw with the closing of the Global Engagement Center, the investigations of the State Department and others for allegedly restraining free speech, and the dismantling of what CISA was doing in this space, that that area is so highly politicized that it’s very hard to imagine we could get to some agreement about taking bots down, or get the backing of the social media companies themselves in that space. So we’re probably in for a span of things getting much worse before they get better.

HURD: Look, I don’t have a short answer or a simple answer to this problem, but AI is a tool that bad guys and gals are going to use in order to achieve their ends. And, you know, during the election, here in Texas the Chinese government was pushing this notion that Texas was going to secede from the union and start its own country. And, you know, some of the things that were developed used AI—they propagated the message using AI, right? And so this is going to continue to exist. But the real solution is, how do we inoculate the population against this stuff, right?

My dad’s ninety-two years old. He gets thirty-five pieces of mail, real mail, in his mailbox every single day. And I’m more worried about how, you know, those mail marketers are trying to get him to buy this latest vitamin, right? He doesn’t know how to discern the difference between something from a real medical journal and marketing. And it’s just like we need a society that can look at something and be, like, hey, why do I think I should be listening to DragonSlayer37, you know, on whatever they’re saying about the latest thing? And so that’s a much larger problem that we have to address, and that goes back to educating people that everything you see is not true.

DUFFY: And I will—I will take moderator’s privilege here and just add that as much as AI tools can help to fuel disinformation and it can get baked in, they’re also phenomenal at speeding up how you do content moderation and the specificity that you can bring to it. And so in the same way that we think of cyber as being both defensive and offensive, you can also think of this space as an area where AI both increases the offense but also dramatically expands your ability to play defense.

With that, let’s head to the next question.

OPERATOR: We’ll take the next question from Matthew Ferraro.

Q: Hi everyone. Thank you so much for our wonderful panel. Oh, I’m—I did work for the Department of Homeland Security. Now I’m teaching at George Mason before going back into private practice, as one does.

So here’s my question. It’s about talent development. And I have to say, the thing that I read most recently about AI that really struck me was that New York Magazine piece about students who were outsourcing the entire college experience to AI. And that did get me thinking, because it seems to me that we have to thread a needle here between encouraging the adoption of AI and not teaching students to basically skip the intellectual development that helps them become not just good citizens, but good national security practitioners. And I mean, to the Congressman’s point, I also was at the agency. And some of that stuff you have to do, and it’s good to do it yourself so you know how to do it right, before you outsource it to AI. Or at least it seems that way to me. So I just—how do we thread that needle between the two? Thank you.

DUFFY: Will, you want to—you want to take that?

HURD: Look, so, like, from an educational standpoint, we’ve got to start asking better questions. We’re not going to be able to educate our kids the same way, asking the same questions, and the same questions on tests, that we had when we were going through school, right? And that’s going to force us to try to get to a higher order of thinking. And that’s hard to do. But, yes, the best way to master these tools is to already have some existing knowledge. I’m going to get a better response out of an AI system because of the quality of the questions I ask it, because of my background and experience. And so that’s the future. We’ve got to go that way. And we can’t stop.

And so I agree, talent development is an issue. But if you want to understand the power of these tools, you’ve got to use them. If you want to worry about preventing destruction from the use of these tools, you’ve got to use them, right? And so that starts at every level. And the fact that we’re not—like, you know, how many schools are introducing this into the workload? This is like introducing a computer or a typewriter. And we need to do more of it and not shy away from it, because this is the future.

DUFFY: And, Jack, if I could ask that—just to follow on that. You know, I think one of the challenges, especially in the national security space, is that if the AI systems aren’t being deployed inside the community then the people who have expertise in that community also aren’t positioned to be using them. So we have a real Catch-22. How is Anthropic thinking about trying to help with basic education, or how—better put, maybe—how are you seeing the industry kind of reckon with this challenge? Because to learn it is to use it, right?

CLARK: Yeah. I think that inside the industry a similar pattern is playing out at all the companies, where you have teams that naturally use this stuff and start moving really quickly. You know, coding teams. And then you have teams like, for instance, public policy teams, one of which I run, that are less likely to be heavily incentivized to use it. And as managers, you need to basically create time and space for your people to experiment with the technology and find ways that it can be helpful to them.

So I ask everyone on my teams to spend several hours a week just playing around with this. And it’s led to loads of kind of efficiency improvements that we wouldn’t have anticipated. But it takes management empowering people to do the experimentation. And I think getting that into government, again, comes back to the point that, you know, Will touched on and I touched on earlier. Which is you just need to get this stuff inside the building, and then you need the organization to empower people to experiment. And, honestly, to, like, fail a bit. You know, it’s not going to be obvious immediately what the most useful parts of it are.

And then the third thing, which we’ve seen here, is we have lots of people here who’ve got really good at using AI to kind of automate chunks of their teams. And now what we do is we, like, rotate them around the organization. So someone who automated loads of our finance department is now actually embedded in our legal department doing the same thing there and teaching them how to do it. And finding a way to encourage that sort of cross-pollination is going to be fundamental to adoption.

DUFFY: Fantastic. Adam, anything you wanted to add?

SEGAL: So I teach a class, adjunct, at SIPA. And this year I adopted the AI policy that you are allowed to use it, but you have to tell me how you used it, why you used it, what prompts you used, and which model you used. The mistake I made was I didn’t say you’re not allowed to just cut and paste. (Laughs.) So I had two students who then just cut and pasted, which is not what I wanted. So the additional thing that I will add next year is that you’re allowed to use it, but you have to then put it into your own words or your own thoughts. But only two of the twenty-three students chose to do that, which I found interesting. It could be good.

They were master’s students, and they’re—you know, they’re paying for it in a way that maybe undergraduates don’t feel in the same way. But, you know, to go back to what Will and Jack were saying, I think you just have to get people to use it, and accept it, and figure out how you’re going to respond. And it may be, you know, am I going to do an oral exam next year? I don’t think so, but I probably have to do some more one-on-one or other types of testing rather than just relying on one kind of short-answer question.

DUFFY: Thank you. Next question.

OPERATOR: We’ll take the next question from Wafa Ben-Hassine.

DUFFY: Wafa, you’re still muted, I think.

Q: Sorry. I’m Wafa Ben-Hassine from Omidyar Network. I had an issue with my mic.

I presume many of us in this session are familiar with the high energy use that AI requires to function, and the stress its demands will increasingly place on power grids. Now, a big part of national security is actually ensuring energy independence. I’m curious what you all think about that, especially the national security implications of competition for energy and computing resources, and what can be done from the technological side in response. Should we be looking at lighter computing and modeling technology? Really just looking forward to hearing what you all think about that. Thank you.

DUFFY: Jack, you want to start with that?

CLARK: Yeah. I mean, in our submission to OSTP for the AI Action Plan, we said we expect the AI industry will need about fifty gigawatts of net new power—baseload power provisioning—by 2027 to support large-scale training runs. A large-scale gas plant does two gigawatts. The largest nuclear power station in America, the Vogtle facility in Georgia, does about 4.5 gigawatts. So, serious numbers. The only way through here is a large-scale, almost unprecedented infrastructure buildout to ensure that we have energy dominance. China has started deploying nuclear reactors at the rate of, like, multiple new ones that get initiated each year, in terms of new designs, while the U.S. fails to match it.

And I think in the limit your ability to just utilize and generate energy is going to be fundamental to any kind of national success in the twenty-first century. And right now America, through a combination of failure to do permitting reform, failure to deal with states’ rights around transmission lines, failure to do provisioning on DOD land and other things, is kind of falling behind very, very dangerously. So we have to find ways to build. And the final thing is because we generate very predictable amounts of base load energy use, AI companies are great prototyping partners for new power sources. You know, we’re excited about small modular reactors. We’re excited in the 2030s about partnering on fusion power. The AI industry can help be the demand side to some of the new supply of energy sources, but they’re going to be coming on grid as well.
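(To put those figures in scale, a quick back-of-the-envelope calculation using the numbers as stated on the panel—fifty gigawatts needed, two gigawatts per large gas plant, roughly 4.5 gigawatts for Vogtle—works out to about twenty-five large gas plants or eleven Vogtle-scale stations. A minimal sketch:)

```python
# Back-of-the-envelope scale check using the figures cited on the panel.
needed_gw = 50.0     # net new baseload power Clark expects the AI industry to need by 2027
gas_plant_gw = 2.0   # output of a large-scale gas plant, per Clark
vogtle_gw = 4.5      # approximate output of the Vogtle nuclear station in Georgia

print(f"Equivalent large gas plants: {needed_gw / gas_plant_gw:.0f}")     # ~25
print(f"Equivalent Vogtle-scale stations: {needed_gw / vogtle_gw:.0f}")   # ~11
```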

DUFFY: Adam or Will, anything to add to that? Adam, I don’t know if you have any thoughts on just also securing critical infrastructure as a component of that. If it becomes even more important for producing AI, and having AI is a national security imperative, the stakes get even higher if we have incursions into our critical infrastructure.

SEGAL: I think that’s right. I mean, I’m not sure there’s much more to add to that, but clearly defending the infrastructure is going to be much higher on the priority list, and getting the build—you know, getting security right before we do the buildout, as opposed to how we traditionally do these things, building them and then worrying about the security—would be key.

DUFFY: Will, anything from you?

HURD: No, look, Jack hit all the points. And this is also one of the areas where government and industry could potentially work together. Nuclear power on military bases that’s protected and controlled for use in powering these kinds of things is an important way of the future. And it’s going to require nuclear. If, ten years ago, any one of us had said fusion on a panel like this, everybody would have laughed us—you know, would have laughed us off the Zoom, right? But the fact that you’re going to see some real, you know, successes and technical capabilities is pretty exciting.

DUFFY: Thank you. OK, next question.

OPERATOR: We’ll take the next question from Nikhil Bharadwaj.

Q: Nikhil Bharadwaj. Thanks for great discussion, Kat. Adam, Will, and Jack.

I have a question on asymmetric threats. All of you talked about how AI is changing battlefield dynamics. And, Will, you mentioned how the $20,000 Shahed drones can now destroy million-dollar radar systems, and the Tower 22 incident. How should we think about, or rethink, defense investments and doctrine when AI-enabled capabilities now increasingly favor these asymmetric approaches? And what are some of the critical changes we need to make to current acquisition and defense planning processes so we’re equipped for the shift? Thank you very much.

HURD: Yeah, I’ll take the first crack at that. I’m looking through my notes, and some of the folks that are on the cutting edge of adopting technology in the Marine Corps use a term—and I want to get it right—it’s “fighting with prototypes,” right? And it’s the ability—one, industry actually needs to produce things that work out of the box, right? That’s not always the case. And then we have to have the ability to deploy that kind of prototype in real environments. This is the future. We have to get ready for it.

Bigger does not beat smaller, right? Faster will always beat slower. And I’m also a fan that smooth is fast. And so if you don’t have a budget, right, to be able to make long-term decisions, you don’t have the comfort to make sure that you can do some of these new things, and fail, and pivot, and zig instead of zag. And so I think it starts with Congress making sure that the Department of Defense has budgets. And then, look, there are a lot of IT and procurement tools that those planners have in order to adopt things on a fast basis. But we also need warfighters who are willing to do that.

DUFFY: Jack or Adam, anything to add?

CLARK: No, nothing to add.

SEGAL: I would just add that we’re going to have to think about balance, right? Because, of course, there’s been a lot of talk about, as Will said, producing things quickly, and attritable, and getting more and more things out there quickly. But we’re still going to have some pretty expensive platforms that are going to be needed for power projection, and presence in the battlespace, and other reasons. So it’s—you know, we tend to swing back and forth. And we’re going to need to think about how the two interact, which has its own kind of procurement issues as well.

HURD: Yeah, and testing is going to take longer, right? Look, I remember when I was on the board of OpenAI and we first released ChatGPT. And, you know, it used some blue language that people didn’t like, right—you know, no long-term implication. But if you’re deploying something and it’s protecting forces, it has to work that first time. And so the testing phase within DOD is significantly longer than in most other cases, because it has to work on day one.

DUFFY: We have about three minutes left, so let’s do one very fast, quick question.

OPERATOR: We’ll take the next question from Sarah May Stern.

CLARK: You’re on mute, Sarah. It’s got to be a yes/no question. (Laughter.)

OPERATOR: Ms. Stern, please accept the unmute now prompt.

Q: Got it now. Quick question. Jack mentioned looking at the YouTube series—generative AI. What kind of security threat does it represent?

CLARK: Saving the easy question till last, I suppose. You should just think of generative AI as a force multiplier on pretty much any technology. So it’s going to exacerbate existing security threats that we deal with, but it’s also going to hold within itself the ability to fight against them or deal with them. You know, in the same way it accelerates cyber offense, it accelerates cyber defense. In the same way it can make it easier to build bioweapons, it also makes it easier to build classifiers you can deploy onto biosynthesis platforms to stop you being able to manufacture bioweapons. It’s a true everything technology.

DUFFY: And so with that, you know, CFR is a great believer in punctuality. We have one minute left. And so I’m going to ask each of you to just say very quickly, people are going to walk away from this discussion, what’s the one thing you—what’s the one point you want to land in their head, that you want them to walk away from this discussion with? Jack, I’ll start with you.

CLARK: The power and impact of AI continues to be underhyped.

DUFFY: OK. Adam, to you.

SEGAL: I was going to go with uncertainty, but—(laughs)—Jack went the other way.

DUFFY: (Laughs.) And, Will, how about you?

HURD: Use it, OK? Use it. The first time—yeah, the first time you used PowerPoint your presentation looked real janky, OK? The more you do it, the better you’re going to get at it.

DUFFY: So if you’re feeling uncertain, go on and use it, and then you’ll be able to figure out if it’s being underhyped or overhyped. I think that’s where we land. (Laughs.) Thank you all so much for joining us today. Thank you, Jack, Adam, and Will for your time and great conversation. And we’ll see you all on the flip side. Take care.

CLARK: Thanks everyone.

SEGAL: Thank you.

(END)

This is an uncorrected transcript.
