Academic Webinar: AI Military Innovation and U.S. Defense Strategy
Lauren Kahn, research fellow at CFR, leads the conversation on AI military innovation and U.S. defense strategy.
FASKIANOS: Thank you, and welcome to today’s session of the Fall 2022 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach at CFR.
Today’s discussion is on the record, and the video and transcript will be available on our website CFR.org/Academic if you would like to share it with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy.
We’re delighted to have Lauren Kahn with us to talk about AI military innovation and U.S. defense strategy. Ms. Kahn is a research fellow at CFR, where she focuses on defense, innovation, and the impact of emerging technologies on international security. She previously served as a research fellow at Perry World House at the University of Pennsylvania’s global policy think tank where she helped launch and manage projects on emerging technologies and global politics, and her work has appeared in Foreign Affairs, Defense One, Lawfare, War on the Rocks, Bulletin of the Atomic Scientists, and the Economist, just to name a few publications.
So, Lauren, thanks very much for being with us. I thought we could begin by having you set the stage of why we should care about emerging technologies and what they mean for us as we look ahead in today’s world.
KAHN: Excellent. Thank you so much for having me. It’s a pleasure to be here and be able to speak to you all today.
So when I’m setting the stage I’m going to speak a little bit about recent events and the current geopolitical situation, and about why we care about emerging technologies like artificial intelligence and quantum computing—things that seem a little bit like science fiction but are now becoming reality—and how our military is using them.
And then we’ll get a little bit more into the nitty-gritty of U.S. defense strategy in particular, and how it’s approaching the adoption of some of these technologies, with a particular focus on artificial intelligence, since that’s what I’m most interested in.
Look, awesome. Thank you so much for kicking us off.
So I’ll say that growing political competition between the United States, China, and Russia is increasing the risk of great-power conventional war in ways that we have not seen since the end of the Cold War.
I think what comes to everyone’s mind right now is Russia’s ongoing invasion of Ukraine, which is the largest land war in Europe that we’ve seen since World War II, and the use of a lot of these new emerging capabilities.
And so I’ll say that for the past few decades, really until now, we thought about war as something that was largely contained to where it was taking place and the parties involved, and most recent conflicts have been asymmetric warfare limited to terrestrial domains—on the ground, in the air, or at sea—where the most prominent conflicts were those between nation-states and either weak states or nonstate actors, like the U.S.-led wars in Afghanistan and Iraq or interventions in places like Mali and related conflicts as part of the broader global war on terrorism, for example.

And so while there might have been regional ripple effects and dynamics that shifted due to these wars, any spillover from these conflicts was narrower, or due to the movement of people themselves—in refugee situations, for example.
I’ll say, however, that the character of wars is shifting in ways that are expanding where conflicts are fought and where they take place and who is involved, and a large part of this, I think, is due to newer capabilities and emerging technologies.
I’ll say it’s not entirely due to them, but the prominence of influence operations, misinformation, deep fakes, artificial intelligence, and commercial drones—which make high-end technology very cheap and accessible for the average person—has meant that these wars are going to be fought in new ways.

We’re seeing discussion of things like information wars, where battles are being fought on TikTok and in social media campaigns, where individuals can film what’s happening on the ground live, and where states no longer have a monopoly, so to speak, on the dissemination of information.
I’ll speak a little bit more about some of the examples of technologies that we’re seeing. But, broadly speaking, this means that the battlefield is no longer constrained to the physical. It’s being fought in cyberspace, even in outer space, with the involvement of satellites and the reliance on satellite imagery and open source satellite imagery like Google Maps and, again, in cyberspace.
And so, as a result, this will drive new sectors and new actors into the fray when it comes to fighting wars, and militaries have been preparing for this for quite a while. They’ve been investing in basic science, research and development, and testing and evaluation across all of these new capabilities, from artificial intelligence and robotics to quantum computing and hypersonics.
And these have been priorities for a few years, but I’ll say that the conflict in Ukraine and the way we’re seeing these technologies being used has really put a crunch on the time frame that states are facing, and I’m going to speak a little bit more about that in a minute.
But to give you an example of what it means to use artificial intelligence on the battlefield—what these uses look like—my work before this conflict was largely hypothetical. It was hard to point to examples. But now, as these technologies mature, you’re seeing them being used in more ways.
Artificial intelligence, for example, has been used by Russia to create deep fakes. There was a very famous one of President Zelensky, which they combined with a cyberattack to place it on national news in Ukraine, to make it look a little more believable—even though, with the deep fake itself, you could tell it was computer generated.
These are kind of showing how some of these technologies are evolving and, especially when combined with other kinds of technological tools, are going to be used to kind of make some of these more influence operations and propaganda campaigns a little bit more persuasive.
Other examples of artificial intelligence: there’s facial recognition technology being used to identify civilians and casualties, for example. And they’re using natural language processing, which is a type of artificial intelligence that analyzes the way people speak.

You think of Siri; you think of chatbots. But more advanced versions are being used to read in radio transmissions, translate them, and tag them, so that forces are able to go through them more quickly and identify what combatants are saying.
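To make that triage step concrete, here is a minimal sketch of the idea—filtering translated radio chatter by topic so analysts only read what matters. Real systems use trained NLP models; the tag names and keyword lists below are hypothetical stand-ins, not details from the webinar.

```python
import re

# Hypothetical topic keywords -- a fielded system would use trained
# classifiers rather than hand-written lists like these.
TAGS = {
    "logistics": {"fuel", "ammunition", "resupply", "convoy"},
    "movement": {"advance", "withdraw", "position", "crossing"},
    "targeting": {"coordinates", "strike", "artillery", "grid"},
}

def tag_transmission(text):
    """Return the topic tags whose keywords appear in a translated message."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {tag for tag, keywords in TAGS.items() if words & keywords}

msg = "convoy low on fuel, requesting resupply at grid 41-22"
print(tag_transmission(msg))  # tags the message as logistics and targeting
```

Even a crude filter like this shows why the approach scales: analysts skim a small tagged subset instead of every intercept.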
There’s the use of 3D printing and additive manufacturing, where individuals are able to build things very cheaply—a 3D printer costs about a thousand dollars, and you can get one for maybe less if you build it yourself.

People are adding components to grenades and attaching them to smaller commercial drones to make a MacGyvered smart bomb that you can maneuver.

So those are some of the commercial technologies that are being pulled into the military sphere and onto the battlefield. They might not be large. They might not be military in their original design.
But because they’re so general purpose technologies—they’re dual use—they’re being developed in the private sector and you’re seeing them being used on the battlefield and weaponized in new ways.
There are other technologies that originated in the military and defense sectors, things like loitering munitions, which we’re seeing more of now, and a lot more drones. I’m sure a lot of you have been seeing a lot about the Turkish TB2 drones and the Iranian drones that are now being used by Russia in the conflict.

And these are not as new technologies. We’ve seen them; they’ve been around for a couple of decades. But they’re reaching a maturity in their technological lifecycle where they’re a lot cheaper, a lot more accessible, and a lot more familiar, and now they’re being used in innovative and new ways.

They’re being seen as less precious and less expensive. It’s not that they’re being used willy-nilly or treated as expendable, but we’re seeing that militaries are willing to use them in more flexible ways.
And so, for example, in the early days of the campaign, Ukraine allegedly used the TB2 as a distraction when it wanted to sink a warship, rather than actually using it to try to sink the warship itself—using it for things it’s good at, but maybe not what it was initially designed for.
Russia, now using the Iranian-made loitering munitions—they’re pretty reasonable in price, about $20,000 a pop—has used them in swarms to take out some of the Ukrainian infrastructure, and it has been a pretty effective technique.

Ukraine, for example, is very good at shooting them down. I think at some point they were reporting an ability to shoot them down at a rate of around 85 to 90 percent. So not all of the swarm was getting through, but because the munitions are so reasonably priced, it was still a reasonable tactic and strategy to take.
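The cost logic here can be sketched with back-of-the-envelope arithmetic using the figures from the talk—roughly $20,000 per munition and an 85 to 90 percent intercept rate. The interceptor cost below is a hypothetical placeholder for illustration, not a sourced number.

```python
def swarm_economics(swarm_size, unit_cost, intercept_rate, interceptor_cost):
    """Return (attacker spend, expected leakers, defender interceptor spend)
    for one swarm, assuming one interceptor per engaged munition."""
    attacker_spend = swarm_size * unit_cost
    expected_leakers = swarm_size * (1 - intercept_rate)
    defender_spend = swarm_size * intercept_rate * interceptor_cost
    return attacker_spend, expected_leakers, defender_spend

attacker, leakers, defender = swarm_economics(
    swarm_size=20,
    unit_cost=20_000,          # ~$20,000 a pop, per the talk
    intercept_rate=0.85,       # low end of the reported 85-90 percent
    interceptor_cost=150_000)  # hypothetical cost per interceptor
print(f"Attacker spends ${attacker:,}; about {leakers:.0f} munitions leak "
      f"through; defender spends ${defender:,.0f} on interceptors")
```

Under these illustrative assumptions, the attacker spends $400,000 while the defender spends several times that to stop the swarm—which is why a high shoot-down rate can still leave the tactic "reasonable," as Kahn puts it.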
There are even some more cutting-edge, almost unbelievable applications—one now being touted as an “Uber for artillery,” where you use algorithms similar to the kind Uber uses to identify which passengers to pick up first and where to drop them off, but to target artillery systems—deciding which target is most efficient to hit first.
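The webinar doesn’t name the actual system or its algorithm, but the ride-hailing analogy can be illustrated with a toy greedy matcher: each battery claims the reachable target with the best value-per-distance score, the way a dispatcher pairs drivers with nearby passengers. All names and numbers here are made up for illustration.

```python
import math

def dispatch(batteries, targets):
    """Greedily assign each battery (x, y, max_range) the unclaimed target
    (x, y, value) with the highest value-per-kilometer score in range."""
    assignments = {}
    available = dict(targets)
    for bname, (bx, by, max_range) in batteries.items():
        best, best_score = None, 0.0
        for tname, (tx, ty, value) in available.items():
            dist = math.hypot(tx - bx, ty - by)
            if dist <= max_range:
                score = value / max(dist, 1.0)  # avoid divide-by-zero
                if score > best_score:
                    best, best_score = tname, score
        if best is not None:
            assignments[bname] = best
            del available[best]
    return assignments

batteries = {"B1": (0, 0, 30), "B2": (50, 0, 30)}          # x, y, range (km)
targets = {"radar": (10, 5, 9), "depot": (45, 10, 5), "truck": (20, 0, 2)}
print(dispatch(batteries, targets))  # → {'B1': 'radar', 'B2': 'depot'}
```

A real fire-direction system would weigh far more than distance and a single value score—ammunition type, counterbattery risk, time sensitivity—but the dispatch-style structure of the problem is the same.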
And so we’re seeing a lot of these technologies being used, like I said, in new and practical ways, and it’s really condensed the timeline that, I think, states are seeing, especially the United States—that they want to adopt these technologies.
Back in 2017, Vladimir Putin famously stated that he believed that whoever became leader in AI would become leader of the world, and China has very much publicized their plans to invest a lot more in AI research and development, to invest in bridging the gaps between its civil and military engineers and technologists to take advantage of AI by the year 2023. So we’ve got about one more year to go.
And so I think the United States, recognizing this time crunch, feels the heat is on, so to speak, for adopting some of these newer capabilities.
And so we’re seeing that a lot now. There’s a lot of reorganization happening within the Department of Defense to kind of better leverage and better adapt in order to take advantage of some of these technologies.
There’s the creation of the new Chief Digital and Artificial Intelligence Office and the new emerging capabilities policy office—efforts to better integrate data systems and ongoing projects across the Department of Defense, et cetera, and to implement them for broader U.S. strategy.
There have been efforts as well to partner with allies to develop artificial intelligence. As part of the Indo-Pacific strategy that the Biden administration announced back in February of 2022, they announced that, along with the Quad partners—Japan, Australia, and India—they are going to fund research, for example, for graduates from any of those four countries to come study in the United States if they focus on science, technology, engineering, and mathematics, to foster that integration and collaboration between our allies and partners to make better use of some of these things.
I’ll say, even more recently—in April 2022, for example—looking at how Ukraine was using a lot of these technologies, the United States was able to fast-track one of its programs, called the Phoenix Ghost. It’s a loitering munition that’s still not well known.

The United States saw the capabilities requirement that Ukraine had and fast-tracked its own program to fulfill it. So these systems are being used for the first time.
So, again, we’re seeing that the United States is using this as an opportunity to learn, as well as to really kick AI and defense innovation into high gear.
And so I’ll say that it’s not without its challenges—the acquisitions process in particular: how the Department of Defense takes a program from research and development all the way to an actual capability it can use on the battlefield.

In the 1950s that used to take maybe five years; now it takes a few decades. There are a lot of processes in between that make it a little challenging—all sorts of checks and balances, which are great, but they have slowed the process down. And so it’s harder for the smaller companies and contractors that are driving a lot of the cutting-edge research in these fields to work with the defense sector.
And so there are some of these challenges, which, hopefully, some of this reorganization that the Pentagon is doing will help address. But that’s the next step looking forward, and I think it’s going to be the next big challenge that I’m watching over the rest of this year and the next six months.
But I think I threw a lot out there but I’m happy to open it for questions now and focus on anything in particular. But I think that gave an overview of some of the things that we’re seeing now.
FASKIANOS: Absolutely. That was insightful and a little scary—(laughs)—and look forward now to everybody’s questions.
As a reminder, after two and a half years of doing this, you can click on the raise hand icon on your screen to ask a question, and on an iPad or Tablet click the more button to access the raise hand feature.
When you’re called upon, please accept the unmute prompt and state your name and affiliation. You can also submit a written question via the Q&A icon, and please include your affiliation there, and we are going to try to get through as many questions as we can.
All right. So the first question—raised hand comes from Michael Leong.
Q: Hi. Is this working?
FASKIANOS: It is. Please tell us your affiliation.
Q: Hi. My name is Michael Leong. I’m an MPA student in public administration at the University of Arizona in Tucson.
And I just have a question: with the frequent and successful use of drones in Ukraine, is there any concern domestically—given how easily such accessible technology is being adapted to warfare—that it could be used maliciously at home, and what steps might be considered? Thanks.
KAHN: Absolutely. That’s a great question.
I think it’s broader than just drones as well when you have this proliferation of commercial technology into defense space and you have these technologies that are not necessarily, for example, weapons, right.
So I think a good example is Boston Dynamics. They make this quadruped robot with four legs. It looks kind of like a dog, and his name is Spot. And he’s being used in all sorts of commercial applications—helping local police forces, et cetera—for very benevolent uses.
However, there’s been a lot of concern that someone will go and, essentially, duct tape a gun to Spot and what will that kind of mean. And so I think it’s a similar kind of question when you have some of these technologies, again, that aren’t—it depends on how you use them and so it’s really up to the user.
And so when you get things like commercial drones, et cetera, that you’re seeing that individuals are using for either reconnaissance or, again, using in combination with things like 3D printing to make weapons and things like that, it is going to be increasingly, increasingly difficult to control the flow.
Professor Michael Horowitz over at the University of Pennsylvania, who’s now in government, has done a lot of research on this, and you see that the diffusion of technologies happens a lot quicker when they’re commercially based than when they’re of military origin.
And so I think it’s definitely going to pose challenges, especially when you get things like software and things like artificial intelligence, which are open source and you can use from anywhere.
So controlling export, and controlling after the fact how they’re used, is going to be extremely difficult. A lot of that right now is falling to the companies that are producing them to self-regulate, since they have the best ability to limit access to certain technologies.
Take, for example, OpenAI. If any of you have played with DALL-E 2 or DALL-E Mini, the image-generating tools, they have limited what the public can access—certain features—and are testing themselves to see, OK, how are these being used maliciously?
I think a lot of them are testing how these tools are being used for influence operations, for example, and making sure they’re able to regulate the features that enable more malicious use.
But it is going to be extremely hard and the government will have to work hand in hand with a lot of these companies and private actors that are developing these capabilities in order to do that.
But it’s a very great question and it is not one that I have a very easy answer to on how to address that. But it is, like, something that I’ve been thinking about a lot.
FASKIANOS: Thank you.
I’m going to take the next question from Arnold Vela, who’s an adjunct faculty at Northwest Vista College.
What is the potential value of AI for strategy, e.g., war planning, versus tactical uses?
KAHN: So I think, honestly, a lot of the benefit of artificial intelligence is in replacing repetitive, redundant tasks. So it’s not replacing the human; it’s making the human more efficient by reducing things like data entry and cleaning, and by pulling resources together.

And so it’s actually already being used in war planning and war gaming and things like that—Germany and Israel, for example, have used AI to create sort of 3D battlefields where they can see all the different inputs of information and sensors.
And so I think that’s really where the value add—the competitive advantage of artificial intelligence is. It’s not necessarily—having an autonomous drone is very useful but I think what will really be the kind of game changer, so to speak, will be in making forces more efficient and both have a better sense of themselves as well as their adversaries, for example.
And so, definitely, I think the background, nonsexy side—the data cleaning and the numbers bit—will be a lot more important than having a drone with onboard AI capabilities, even though those suck up the oxygen a little bit because they’re really exciting. They’re shiny. They’re Terminator, I, Robot-esque, right?

But I think a lot of it will be making linguists within the intelligence community able to process and translate documents at a much faster pace—making individuals’ lives easier. So definitely.
FASKIANOS: Great. Thank you.
I’m going to go next to Dalton Goble. Please accept the unmute.
Q: Thank you.
FASKIANOS: There you go.
Q: Hi. I’m Dalton. I’m from the University of Kentucky and I’m at the Patterson School for Diplomacy and International Commerce.
Thank you for having this talk. I really wanted to ask about the technology divide between the developed and developing world, and I wanted to hear your comments about how the use of AI in warfare, and the proliferation of such technologies, can exacerbate that divide.
KAHN: Absolutely. I’ll say I’ve been focusing a lot on how the U.S., China, and Russia in particular have been adopting these technologies, because they’re the ones investing in them the most.

Countries in Europe are as well, and Israel and Australia also. But I still think we’re in the early stages—I think over a hundred countries have national AI strategies right now.
I don’t think it’s as far along yet in terms of its—at least its military applications or applications for government.
I will say that, more broadly, I think, again, because these technologies are developed in the commercial sector and are a lot more reasonably priced, I think there’s actually a lot of space for countries in the developing world, so to speak, to adopt these technologies.
There aren’t as many barriers, I think, as when it’s a very expensive, super-specific military system. And so I think these technologies are actually diffusing quite rapidly, and pretty equally. I haven’t done extensive research into that—it’s a very good question.

But my first gut reaction is that it won’t necessarily exacerbate the divide but may actually help close the gap a little bit.
A colleague of mine works a lot on health care and health systems in developing countries, and she works with them specifically to develop a lot of these technologies. She finds that they actually adopt them quicker, because they don’t have all of the existing preconceived notions about what the systems and organizations should look like, and they are a lot more open to using some of these tools.
But I will say, again, they are just tools. No technology is a silver bullet, and so I think that, again, being in the commercial sector these technologies will diffuse a lot more rapidly than other kind of military technologies.
But it is something to be cognizant of, for sure.
FASKIANOS: Thank you. I’m going to go next to Alice Somogyi. She’s a master’s student in international relations at the Central European University.
Could you tell us more on the implications of deep fakes within the military sector and as a defense strategy?
KAHN: Absolutely. I think influence operations in general are going to be increasingly part of the—part of the game, so to speak.
It’s very visible in the case of Ukraine how the information war, especially in the early days of the conflict, was super important, and the United States did a very good job of releasing information early to allies and partners, et cetera, which made the global reaction time to the invasion so quick.

And so I think that was very unexpected, and it has shown—not to overstate it—the power that individuals and propaganda can have. If you’ve studied the history of warfare, you can see the impact of propaganda. It’s always been an element at play.
I will just say it’s another tool in the toolkit to make these operations a little more believable and more efficient, and I think what’s really interesting, again, is how a lot of these technologies are going to be combined to make them more believable. Take deep fakes: the technology isn’t there yet to make them super believable, at least on a large scale that a state would believe.

But combining them with something like a cyberattack, to place them somewhere you would be a little more willing to believe them, I think, will be increasingly important. And we’ll see them combined in other ways that I can’t even imagine.
And that goes back to one of the earlier questions we had about the proliferation of these technologies: they’re commercial, and you can’t contain their use, and that’s the hardest part. I think that’s especially true when it comes to software, where once you sell it, it can be used for whatever the buyer wants.
And so there’s this kind of creativity where you can’t guard against every possible situation you don’t know about. So it has to be a little bit reactive. But I think there are measures that states and others can take to be a little bit proactive to protect against misuse.
This isn’t specifically about deep fakes but about artificial intelligence in general: there’s a space, I think, for confidence-building measures—informal agreements that states can come to in order to set norms and general rules of the road, like expectations for artificial intelligence and other emerging technologies. These can be put in place before the technologies are used, so that when unexpected or never-before-seen situations arise, there isn’t a total absence of a game plan. There are processes to fall back on to guide how to handle the situation—without regulating too much too quickly, such that the rules become outdated very fast.
But I think that, as the technology develops, we’ll definitely be seeing a lot more deep fakes.
FASKIANOS: Yes. So Nicholas Keeley, a Schwarzman Scholar at Tsinghua University, has a question that goes along these lines.
Ukrainian government and Western social media platforms were pretty successful at preempting, removing, and counteracting the Zelensky deep fake. How did this happen?
He asks about the cutting-edge prevention measures against AI-generated disinformation today, which you just touched upon. But can you talk about this specific case—what we’re seeing now in Ukraine?
KAHN: Yeah. I think Ukraine has been very, very good at using these tools in a way that we haven’t seen before and I think that’s, largely, why a lot of these countries now are looking and watching and are changing their tack when it comes to using these.
Again, these tools seem kind of far off. What’s the benefit of using newer technologies when we have things that are known and that work? But Ukraine, being the underdog in this situation and knowing since 2013 that this was an event that might happen, has been preparing—in particular, their digital ministry. I’m not sure what the exact title is, but they were able to mobilize very quickly. It was originally set up to digitize government services and provide access to individuals, I think through a phone app.

But then they had these experts who worked on how to use digital tools to engage the public and engage the media—they militarized them, essentially.

And so in the early days, a lot of people in that organization asked Facebook, asked Apple, et cetera, to either impose sanctions or put guardrails up. A lot of the early actions—Twitter taking down media, et cetera—happened because this organization within Ukraine made it its mission to work as the liaison with Silicon Valley, so to speak, and to engage the commercial sector so it could self-regulate and help the government do these sorts of things—which, I think, inevitably led to them catching the deep fake really quickly.
But also, if you look at it, it’s pretty—it’s pretty clear that it’s computer generated. It’s not great.
So I think that was part of it, and, again, because it was combined with a cyberattack, you could notice that an attack had just occurred. While the combination made the deep fake more realistic, it also carried risks, because Ukrainians are practiced at identifying when a cyberattack has just occurred. But, absolutely.
FASKIANOS: Thank you.
I’m going to go next to Andrés Morana, who’s raised his hand.
Q: Hi. Good afternoon. I’m Andrés Morana, affiliated with Johns Hopkins SAIS International Relations. Master’s degree.
I wanted to ask you about AI, and maybe emerging technology as well. As artificial intelligence is applied to the defense sector, there’s the need to reform the acquisitions process in parallel, which is notorious. And as we think about AI, there’s the question of where the servers are hosted: a lot of commercial companies might come in with some new shiny tech that could be great, but if their servers are hosted somewhere that’s easy to access, then maybe that’s not great for the defense sector.
So I don’t know if you have thoughts on maybe the potential to reform or the need to reform the acquisitions process. Thank you.
KAHN: Yeah, absolutely.
I mean, this is some people’s favorite topic, because the process has become sort of a valley of death, right, where things go and they die. They don’t move. Of course, there are some bridges. But it is problematic for a reason.
There have been a few efforts to create mechanisms to circumvent it. The Defense Innovation Unit has created some funding mechanisms to avoid it. But, overall, I do think it needs reform—though I don’t know what that looks like; I’m not nearly the expert on the acquisitions process specifically that a lot of folks are.

But reform would make things a lot easier. China, for example—people talk about how it’s so far ahead on artificial intelligence, et cetera, et cetera. I would argue that it’s not. It’s better at translating what it has in the civilian and academic sectors into the military sphere and being able to use and integrate that—at overcoming that gap.

It does so with civil-military fusion: they can say, we’re doing it this way, so it’s going to happen, whereas the United States doesn’t have that kind of ability.
But I would say the United States has academia and industry leading on artificial intelligence. Stanford recently put out its 2022 AI Index, which has some really great charts and numbers on how much research is being done in the world on artificial intelligence, in which countries and regions, and specifically who’s funding it—governments, academia, or industry.
And the United States is still leading in industry and academia. It’s just that the government has a problem tapping into that, whereas China, for example, its government funding is a lot greater and there’s a lot more collaboration across government, academia, and industry.
And so I think that is right now the number-one barrier that I see.
The second one, I’ll say, is accessing data and making sure you have all the bits and pieces you need to be able to use AI. What’s the use of having a giant model—an algorithm that could do a million things—if you don’t have all of the data set up for it?
And so those are the two kind of organizational infrastructure problems that I’ll say are really hindering the U.S. when it comes to kind of adopting these technologies.
But, unfortunately, I do not have a solution for it. I would be super famous in this area if I did, but I do not, unfortunately.
FASKIANOS: Thank you.
I’m going to take the next question from Will Carpenter, a lecturer at the University of Texas at Austin. Also got an up vote.
What are the key milestones in AI development and quantum computing to watch for in the years ahead from a security perspective? Who is leading in the development of these technologies—large cap technology companies such as Google, ByteDance? Venture capital-backed private companies, government-funded entities, et cetera?
KAHN: Great. Great question.
I'll say quantum is a little bit further down the line, since we do not yet have a really big quantum computer that can handle enough data. China is kind of leading in that area, so to speak, so they're worth watching. They've created their first, I think, quantum-encrypted communications line, and they've done a few other things along those lines.
So I think to keep an eye on that will be important. But, really, just getting a computer large enough that it’s reasonable to use quantum, I think, will be the next big milestone there.
But that's quite a few years down the line. When it comes to artificial intelligence, I'll say that it has had waves and dips in interest and research—they call them AI winters and AI springs. Winter is when there's not a lot of funding, and spring is when there is.
Right now we're in a spring, obviously, in large part because of breakthroughs in the 2010s in things like natural language processing and computer vision, et cetera. And so I think continued milestones in those will be key.
There are a few I've worked on. There's a paper right now—hopefully it will be out in the next few months—on forecasting when AI and machine-learning experts actually think those milestones will be hit.
I mean, there were a couple that were hit—like having AI beat all the Atari games, or having AI play Angry Birds. There are lots of those mini milestones, and then ones that are bigger leaps than just improving the efficiency of these algorithms.
I think things like artificial general intelligence. There are some early abilities to create one algorithm that can play a lot of different games—chess and Atari and Tetris. But, broadly speaking, I think that's a little bit further down the line as well.
But I'll say for the next few months—and the next few years—it'll probably just be making some of these algorithms more efficient: making them better, making them leaner, using a lot less data. I think we've largely hit the big ones, and so we'll see smaller milestones being achieved in the next few years.
And I think there was another part to the question in the—let me just go look in the answer for what it was. Who’s developing these.
I would say these large companies—Google, OpenAI, et cetera. But I'll say a lot of these models are open source, which means that the models themselves are out there and available to anyone who wants to take them and use them.
I mean, I'm sure you've seen—once you saw DALL-E Mini you saw DALL-E 2 and DALL-E X. So they proliferate really quickly and they adapt, and that's a large part of what's driving the acceleration of artificial intelligence.
It’s moving so quickly because there is this nature of collaboration and sharing that companies are incentivized to participate in, where they just take the models, train them against their own data, and if it works better they use that.
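The pattern Kahn describes—take an open-source model, train it against your own data, keep it if it works better—can be sketched minimally. This is an illustrative toy: a linear model with "pretrained" weights stands in for a real published checkpoint, and all names and numbers here are hypothetical.

```python
# Illustrative sketch of the open-source fine-tuning pattern: start from
# "pretrained" parameters obtained elsewhere, then adapt them with a few
# gradient steps on locally held data. A toy linear model stands in for a
# real network; real workflows would use a framework and a real checkpoint.

def predict(w, b, x):
    return w * x + b

def mse(w, b, data):
    """Mean squared error of the model on (x, y) pairs."""
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

def fine_tune(w, b, data, lr=0.01, steps=200):
    """Adapt pretrained parameters (w, b) to local data via gradient descent."""
    for _ in range(steps):
        gw = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / len(data)
        gb = sum(2 * (predict(w, b, x) - y) for x, y in data) / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Pretrained" weights from elsewhere; the local data follows y = 2x + 1.
pretrained_w, pretrained_b = 0.5, 0.0
local_data = [(x, 2 * x + 1) for x in range(-3, 4)]

before = mse(pretrained_w, pretrained_b, local_data)
w, b = fine_tune(pretrained_w, pretrained_b, local_data)
after = mse(w, b, local_data)
```

The "if it works better they use that" step is exactly the `before`/`after` comparison: keep the adapted weights only when the error on your own data drops.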
And so those kind of companies are all playing a part, so to speak. But I would say, largely, academia right now is still really pushing the forefront, which is really cool to see.
So I think that means a lot more blue-sky basic research being funded—if money keeps being pumped into that, we'll see these advances continue.
I'll also say that when it comes to defense applications in particular, the challenge is that, more than is typical for artificial intelligence, these capabilities are being developed by niche, smaller startup companies that might not have the capabilities that, say, a Google or a Microsoft has when it comes to working and contracting with the U.S. government.
So that's also a challenge. The acquisitions process is challenging at best, even for the big companies, and for these smaller companies that really do have great, specific uses for AI, I think it's a significant challenge.
So I think it’s, basically, everybody. Everyone’s working together, which is great.
FASKIANOS: I'm going to go next to DJ Patil.
Q: Thanks, Irina. Good to see you. And thanks for this, Lauren.
So I’m DJ Patil and I’m at the Harvard Kennedy School Belfer Center, as well as Devoted Health and Venrock Partners.
And so, Lauren, you addressed the procurement side a little bit. I'm curious what your advice to the secretary of defense would be around capabilities specifically, given large language models and the efforts that we're seeing in industry, and how much separation in results we're seeing even between industry and academia. The breakthroughs being reported are just so stunning.
And then if we look at the datasets those companies are building on, they're basically open, or there are copyright issues in there. Defense applications have very small datasets, and also, as you mentioned on the procurement side, a lack of access to these capabilities.
And so what are the mechanisms, if you look across this from a policy perspective, for how we start tapping into those capabilities to ensure that we stay competitive as the next set of iterations of these technologies takes place?
KAHN: Absolutely. I think that’s a great question.
I've done a little bit of work on this. When they were creating the chief digital and AI office, they had people brainstorming about what kinds of things we would like to see, and I think everyone agreed that they would love for it to get better access to data.
If the defense secretary asks, can I have data on all the troop movements for X, Y, and Z, there’s a lot of steps to go through to pull all that information. The U.S. defense enterprise is great at collecting data from a variety of sources—from the intelligence community, analysts, et cetera.
Of course, there are natural challenges built in, with different levels of confidentiality and classification, et cetera. But being able to pull all of that together, clean the data, and organize it will be a key first step, and that is a big infrastructure, systems, and software challenge.
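The "pull it together, clean it, organize it" step can be pictured as a small ingestion pass. This is a hypothetical sketch—the field names, sources, and classification levels are all invented for illustration—assuming heterogeneous feeds with inconsistent fields and records tagged by classification:

```python
# Hypothetical sketch: normalize records arriving from different sources
# into one consistent shape, and only return records at or below the
# caller's clearance level. All names here are illustrative.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def clean(record):
    """Normalize one raw record: trim/lowercase fields, fill defaults."""
    return {
        "source": record.get("source", "unknown").strip().lower(),
        "value": record.get("value"),
        "classification": record.get("classification", "unclassified").lower(),
    }

def merge(sources, clearance):
    """Combine records from all feeds, dropping ones above the clearance."""
    cleaned = [clean(r) for feed in sources for r in feed]
    allowed = LEVELS[clearance]
    return [r for r in cleaned if LEVELS[r["classification"]] <= allowed]

intel_feed = [{"source": " SIGINT ", "value": 1, "classification": "SECRET"}]
analyst_feed = [{"source": "analyst", "value": 2}]  # missing fields are common

merged = merge([intel_feed, analyst_feed], clearance="confidential")
```

Here the secret-level record is filtered out for a confidential-level caller; the point is that even this trivial version needs agreed field names and classification handling before any model can consume the data.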
A lot of it is actually getting hardware in the defense enterprise up to date, and a lot of it is making sure you have the right people. That's another huge one—the National Security Commission on AI, in its final report, said that the biggest hindrance to actually leveraging these capabilities is the lack of AI and STEM talent in the intelligence community and the Pentagon.
There's just a lack of people who, one, have the background and the vision to say, OK, this is even a possible tool we can use, and to understand it; and, two, once it's there, can be trained to use it in these kinds of capacities.
So I think that'll be a huge one. And there are efforts ongoing right now with the Joint Artificial Intelligence Center—the JAIC—to pilot AI educational programs for this reason, as a kind of AI crash course.
But I think there needs to be a broader effort to encourage STEM graduates to go into government, and that can be done, again, by playing ball, so to speak, with this whole idea of open source.
Of course, the Department of Defense can't make all of its programs open and free to the public. But I think it can do a lot more to show that it's a viable option for individuals working in these careers to address some of the same kinds of problems, and that it will also have the most up-to-date tech, resources, and data. Right now it's not evident that that's the case.
They might have a really interesting problem set, which has been shown to be attractive to AI PhD graduates and the like. But, again, they're not really promoting it, or providing the resources and setting up their experts in the best way, so to speak, to be able to use these capabilities.
FASKIANOS: Thank you.
I’m going to take the next question from Konstantin, who actually wrote a question—Tkachuk—but also raised his hand. So if you could just ask your question that would be best.
Q: Yes. I’m just happy to say it out loud.
So my name is Konstantin. I’m half Russian, half Ukrainian. I’m connecting here from Schwarzman Scholarship at Tsinghua University.
And my question is more about the industry as a whole—how it has to react to what's happening with the technology that the industry is developing.
Particularly, I am curious whether it's the responsibility and interest of industry and policymakers to protect the technology from such misuse, and whether they actually have the control and responsibility to make these technology frameworks unusable for certain applications.
Do you think this effort could be possible, given the resources and the amount of knowledge we have? And, more importantly, I would be curious on your perspective whether you think countries have to collaborate on that in order for such an effort to be effective, or whether it should be incentive models inside countries that contribute to the whole community.
KAHN: Awesome. I think all of the above.
I think right now, because there's relatively little understanding of how these systems work, a lot of it is the private companies self-regulating, which I think is a necessary component.
But there are also now indications of efforts to kind of work with governments on things like confidence-building measures or other kind of mechanisms to kind of best understand and best develop transparency measures, testing and evaluation, other kind of guardrails against use.
I think there are, like, different layers to this, of course, I think, and all of them are correct and all of them are necessary.
And then, of course, there are export controls that can be put on—where you allow the commercial side but make the system itself incompatible with other kinds of systems that would make it dangerous.
But I think there's also definitely room, and a necessary space, for interstate collaboration on some of these—especially when, for example, you introduce artificial intelligence into military systems. They make them faster; they make the decision-making process a lot speedier, so the individual has to make quicker decisions.
And when you introduce things like artificial intelligence into increasingly complex systems, you have the ability for accidents to snowball as they propagate—one little decision can have a huge impact and end up in a mistake, unfortunately.
And consider that kind of situation when—heaven forbid—it's in a battlefield context. Let's say the adversary says, oh, well, you intentionally shot down XYZ plane; and the individual says, no, it was an automation malfunction and we had an AI in charge of it. Who, in fact, is responsible now? If it was not an individual, the blame shifts up the pipeline.
And so you've got problems like these—that's just one example—where increasingly automated systems and artificial intelligence shift how dynamics play out, especially in accidents, which traditionally require a lot of visibility. And these technologies are not so visible, not so transparent. You don't really get to see how they work or understand how they think in the same way that you can when, say, you press a button and see the causality of that chain reaction.
And so I think there is very much a need, because of that, for even adversaries—not just allies—to agree on how certain weapons will be used, and I think that's why there's this space for confidence-building measures. A really simple example that everyone already agrees on is to have a human in the loop—human control—if and when we use artificial intelligence and automated systems increasingly in a nuclear context, with nuclear weapons. I think everyone's on board with that.
And so I think those are the kinds of building-block agreements and norms that can be established, and that need to take place now, before these technologies really start to be used. That will be essential to avoiding the worst-case scenarios in the future.
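The human-in-the-loop norm discussed above can be sketched as a simple approval gate: the automated system only ever recommends, and no action fires without explicit human approval. This is a hypothetical illustration—the function names, the threshold, and the interface are invented; real systems embed this in doctrine, procedure, and hardware, not a single function.

```python
# Minimal sketch of a human-in-the-loop control: an automated
# recommendation is never executed without explicit human approval.
# All names and the 0.9 threshold are illustrative assumptions.

def automated_recommendation(sensor_score):
    """Stand-in for a model's output: recommend 'engage' above a threshold."""
    return "engage" if sensor_score > 0.9 else "hold"

def decide(sensor_score, human_approves):
    """The human retains final control: 'engage' requires explicit approval."""
    recommendation = automated_recommendation(sensor_score)
    if recommendation == "engage" and not human_approves:
        return "hold"  # automation alone can never trigger the action
    return recommendation
```

The design point is the asymmetry: the automated path can only ever make the outcome more conservative ("hold"), never less, without a human in the chain.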
FASKIANOS: Great. Thank you.
I’m going to take the next question—written question—from Alexander Beck, undergraduate at UC Berkeley.
In the context of military innovation literature, what organizational characteristics or variables have the greatest effect on adoption and implementation, respectively?
KAHN: Absolutely. I’m not an organizational expert.
However, I’ll say, like before, I think that’s shifting, at least from the United States perspective.
I think, for example, when the Joint Artificial Intelligence Center was created, the best advice at the time was to create separate organizations with the capability to enact their own agendas, and to create separate programs for all of these, to best foster growth.
And that worked for a while. The JAIC was really great at promoting artificial intelligence and raising it to a level of preeminence in the United States, with a lot of early success in raising awareness, et cetera.
But there was a little bit of confusion and concern over the summer when they established the Chief Digital and Artificial Intelligence Office—excuse me, a lot of acronyms—because it took over the JAIC; it subsumed the JAIC. There was a lot of worry about that: we just established this great organization that we've had since 2019, and now they're redoing it.
And so I think they realized that as the technology develops, organizational structures need to develop and change as well. In the beginning, artificial intelligence was seen as its own kind of microcosm.
But because it's a general-purpose enabling technology, it touches a lot more, and so it needs to be thought of more broadly rather than just, OK, here's our AI project. You need to better integrate it and situate it next to necessary preconditions like the food for AI, which is data.
So they reorganized to, ideally, do that. They integrated it with research and engineering, which is the arm of the Defense Department that funds basic research, and brought in people who understand policy as well.
So they have all of these different arms now within this broader organization. And so there are shifts in the literature, I think, and there are different best cases for different kinds of technologies. I'm not as familiar with where the literature is going now, but the idea has shifted, I think, even from 2018 to 2022.
FASKIANOS: Thanks. We’re going to go next to Harold Schmitz.
Q: Hey, guys. I think a great, great talk.
I wanted to get your thoughts on AlphaFold, RoseTTAFold—DeepMind—and biological warfare and synthetic biology, that sort of area. Thank you.
KAHN: Of course. I—
Q: And, by the way—sorry—I should say I’m with the University of California Davis School of Management and also with the March Group—a general partner. Thank you.
KAHN: I am really—so I’m really not familiar much with the bio elements. I know it’s an increasing area of interest.
But I think, at least in my research, kind of taking a step back, I think it was hard enough to get people within the defense sector to acknowledge artificial intelligence.
So I haven't seen much of it in the debate recently, unfortunately, just because a lot of the defense innovation strategy, at least in the Biden administration, is focused directly on addressing the pacing challenge of China.
And so they've mentioned biowarfare and biotechnology, as well as nanotechnology, et cetera, but not in as comprehensive a way as artificial intelligence and quantum—not in a way that lets me answer your question. I'm sorry.
FASKIANOS: Thank you.
I’ll go next to Alex, who has raised—and you’ll have to give us your last name and identify yourself.
Q: Hi. Yes. Thank you. I’m Alex Grigor. I just completed my PhD at University of Cambridge.
My research is specifically looking at U.S. cyber warfare and cybersecurity capabilities, and in my interviews with a lot of people in the defense industry, their number-one complaint, I suppose, was just not getting the graduates applying to them the way that they had sort of hoped to in the past.
And if we think back to ARPANET and all the amazing innovations that have come out of the internet and out of defense, do you see a return to that? Or do you see us now looking very much to procure from private industry, and what might that recruitment process look like?
They cited security clearances as one big impediment. But what else might you think that could be done differently there?
KAHN: Yeah. Absolutely.
I think security clearances and all the bureaucratic things are a challenge. But even assuming an individual wants to work in government, if you're working in STEM and you want to do research, spending two years as a civilian in the Pentagon, for example, doesn't necessarily allow you to jump back into the private sector or academia afterward, whereas other jobs do.
So I think a big challenge is making it possible, through various mechanisms, for this to be a reasonable goal—not necessarily a career in government, but allowing people to come and go.
I think that'll be a significant challenge, and I think it's connected in part to the ability to contribute to research that we spoke about earlier.
I mean, the National Security Commission has a whole strategy that they've outlined on this. I've seen piecemeal efforts to overcome it, but no broad, sweeping reform as suggested by the report.
I recommend reading it—it's, like, five hundred pages long, but there's a great section on the talent deficit. But, yeah, I think that will definitely be a challenge, and cyber is facing it too.
I think anything that touches STEM faces it in general—especially because the AI, and particularly machine learning, talent pool is global, so states are, interestingly, fighting over this talent pool.
I previously did research at the University of Oxford that looked at the immigration preferences of researchers and where they move, and a lot of them are Chinese and studying in the United States. And they stay here; they move, et cetera. But a lot of it is actually also immigration and visas.
And so other countries—China specifically—have made special visas for STEM graduates. Europe has done it as well. And so I think that will be another element at play; there are a lot of these efforts to attract more talent.
I mean, again, one of the steps that was tried was the Quad Fellowship that was established through the Indo-Pacific strategy. But, again, that’s only going to be for a hundred students. And so there needs to be a broader kind of effort to make it—to facilitate the flow of experts into government.
To your other point—is this what it's going to look like now, with the private sector driving the bus—I think it will be for the time being. But if DARPA, the defense agencies' research arms, and DOD change the acquisition process and are able to get that talent—if something changes—then I think defense will again be able to contribute in the way that it has in the past.
I think it's important, too. There were breakthroughs in cryptography, and the internet, again, all came from defense initially. And so I think it would be really sad if that were not the case anymore, especially as right now we're talking about being able to cross that bridge and work with the private sector, which I think will be necessary.
I hope it doesn't go so far that DOD becomes entirely reliant, because I think it will need to be self-sufficient—another ecosystem for generating research and applications—and not all problems can be addressed by commercial applications.
It's a very unique problem set that defense and militaries face. Right now there's a push: OK, we need to work better with the private sector.
But I think, hopefully, overall, if it moves forward it will balance out again.
FASKIANOS: Lauren, do you know how much money DOD is allocating towards this in the overall budget?
KAHN: Off the top of my head, I don't know—it's a few billion, I think. I'd have to look it up. In the fiscal year 2023 budget request, there was the highest amount ever requested for research and engineering and testing and evaluation.
I think it was—oh, gosh—a couple hundred million dollars more, a huge increase from the last year. So it's an increasing priority. But I don't have the specific numbers on how much.
People talk about China funding more. I think it’s about the same. But it’s increasing steadily across the board.
FASKIANOS: So I'm going to give the final question to Darrin Frye, who's an associate professor at Joint Special Operations University in the Department of Strategic Intelligence and Emergent Technologies, and his is a practical question.
Managing this type of career, how do you structure your time between researching and learning about the intricacies of complex technologies, such as quantum entanglement or nano-neuro technologies, versus informing leadership and interested parties about the anticipated impact of emergent technologies on the future military operational environment?
And maybe you can throw in there why you went into this field and why you settled upon this, too.
KAHN: Yeah. I love this question.
I have always been interested in the militarization of science and how wars are fought because I think it allows you to study a lot of different elements. I think it’s very interesting working at the intersection.
I think, broadly speaking, a lot of the problems that the world is going to face, moving forward, are these transnational large problems that will require academia, industry, and government to kind of work on together from climate change and all of these emerging technologies, for example, global health, as we’ve seen over the past few years.
And so I think it's a little bit about striking a balance. I came from a political science and international relations background, and I did want to talk about the big picture.
And I think there are individuals working on these problems and recognizing them. But in doing that, I noticed that I'm speaking a lot about artificial intelligence and emerging technologies, and I'm not from an engineering background.
And so, personally, I'm doing a master's in computer science right now at Penn in order to shore up those deficiencies and gaps in knowledge in my sphere. I can't learn everything—I can't be a quantum expert and an AI expert.
But I think having that baseline understanding and taking a few of those courses regularly has meant that when a new technology shows up, I know how to learn about it, which has been very helpful—speaking both languages, so to speak.
I don't think anyone's going to fully master one field, let alone both. But I think it will be increasingly important to spend time learning about how these things work, and just getting a background in coding can't hurt.
And so it's definitely something you need to balance. I would say I'm balanced more toward the broader implications, since talking at such a high technical level doesn't necessarily help people without that background—it can get jargony very quickly, as I'm sure you noticed listening to me.
And so I think there's a benefit to learning about it, but also make sure you don't get too in the weeds. There's a lot of space for people who understand both sides and can bring in the people who are experts on, for example, quantum entanglement or nanotechnology, so that when they're needed they can come in and speak to people in a policy setting.
So there definitely is room, I think, for intermediaries—policy experts who sit in between—and then, of course, for the highly specialized expertise, which is definitely, definitely important.
But it’s hard to balance. But I think it’s very fun as well because then you get to learn a lot of new things.
FASKIANOS: Well, with that we are out of time. I'm sorry that we couldn't get to all the written questions and the raised hands.
But, Lauren Kahn, thank you very much for this hour, and to all of you for your great questions and comments.
You can follow Lauren on Twitter at @Lauren_A_Kahn, and, of course, go to CFR.org for op-eds, blogs, and insight and analysis.
The last academic webinar of this semester will be on Wednesday, November 16, at 1:00 p.m. (EST). We are going to be talking with Susan Hayward, who is at Harvard University, about religious literacy in international affairs.
So, again, I hope you will all join us then.
Lauren, thank you very much. And I just want to encourage those of you, the students on this call and professors, about our paid internships and our fellowships. You can go to CFR.org/careers for information for both tracks.
Follow us at @CFR_Academic and visit, again, CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues.
So thank you all, again. Thank you, Lauren. Have a great day.
KAHN: Thank you so much. Take care.
FASKIANOS: Take care.