
National Security and Defense Program

The stakes are high. Both Russia and China are increasingly challenging the United States and its allies. Iran is ramping up its nuclear program. North Korea can now hit the United States with nuclear-tipped missiles. Al Qaeda and other terrorist groups may step into the vacuum created by the U.S. departure from Afghanistan. Nations like Libya and Yemen are in chaos. The U.S. military faces the twin challenges of maintaining its combat advantage in the information age and operating under looming budget constraints. The National Security and Defense program aims to help policymakers and the public better understand these and other threats facing the United States and the options available for responding to them.

2023 U.S. military budget: $816.7 billion

Program Experts

Max Boot

Jeane J. Kirkpatrick Senior Fellow for National Security Studies

Richard K. Betts

Adjunct Senior Fellow for National Security Studies

Stephen Biddle

Adjunct Senior Fellow for Defense Policy

Richard A. Falkenrath

Senior Fellow for National Security

Bruce Hoffman

Shelby Cullom and Kathryn W. Davis Senior Fellow for Counterterrorism and Homeland Security

Farah Pandith

Adjunct Senior Fellow

Carla Anne Robbins

Senior Fellow

Jacob Ware

Research Fellow

  • Digital and Cyberspace Policy Program

    Dolores Albarracín, professor and director of the Social Action Lab and the science of science communications division of the Annenberg Public Policy Center at the University of Pennsylvania, discuss…
  • Israeli-Palestinian Conflict

    Farah Pandith, adjunct senior fellow at CFR, discusses the Israel-Hamas war and its implications for Israeli and Palestinian communities in the United States. Niraj Warikoo, reporter at the Detroit F…
  • Intelligence

    Panelists discuss their distinguished careers in intelligence and offer advice to young professionals interested in or already pursuing a career in the intelligence space, as well as the challenges confronting the field on federal and local levels.
  • Military Operations

    Stephen Biddle, adjunct senior fellow for defense policy at CFR and professor of international and public affairs at Columbia University, leads the conversation on military strategy in the contemporary world.

FASKIANOS: Welcome to today’s session of the fall 2023 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR. Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/academic, if you would like to share them with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Stephen Biddle with us to discuss military strategy in the contemporary world. Dr. Biddle is an adjunct senior fellow for defense policy at CFR and professor of international and public affairs at Columbia University. Before joining Columbia he was professor of political science and international affairs at George Washington University. He regularly lectures at the U.S. Army War College and other military schools, has served on a variety of government advisory panels and analytical teams, and has testified before congressional committees on issues relating to the wars in Iraq, Afghanistan, and Syria; force planning; conventional net assessment; and European arms control, just to name a few. And, finally, Dr. Biddle is the author of numerous scholarly publications and several books, including his most recent, Nonstate Warfare, published by Princeton University Press in 2021, and he recently authored a piece in CFR’s magazine Foreign Affairs in the September/October 2023 issue entitled “Back in the Trenches: Why New Technology Hasn’t Revolutionized Warfare in Ukraine,” which we shared in the background readings for this conversation. So, Steve, thank you for being with us.
I thought you could give us an overview of the changes you’ve seen in military operations as a result of technological innovation and say a few words about wartime military behavior, especially as you’ve studied it over the years and what we’re seeing now in Ukraine and now with the Israel-Hamas war.

BIDDLE: Yeah, I’d be happy to. There’s a lot going on in the world of military affairs and strategy at the moment between Gaza, the Taiwan Straits, and, of course, Ukraine. Maybe as a conversation starter I’ll start with Ukraine, but we can go in whatever direction the group wants to go in, and the spoiler alert is in the headline of the article from Foreign Affairs that you’ve already assigned. There’s a big debate over what Ukraine means for the future of warfare and what Ukraine means for the way the United States should organize its military, modernize its equipment, write its doctrine, and so on. One of the most common interpretations of what Ukraine means for all this is that it’s a harbinger of a revolutionary transformation. The new technology, drones, space-based surveillance, precision-guided weapons, hypersonics, networked information, artificial intelligence, this whole panoply of things, in this argument, is making the modern battlefield so lethal, so radically more lethal than in the past, that in the present and in the future offensive maneuver will become impossible and we’ll get the dawn of some new age of defense dominance in conventional warfare, which, if true, would then have all sorts of implications for how the United States should make all these kinds of defense policy decisions. As those of you who read the Foreign Affairs article know, I don’t buy it, because I don’t think the evidence is consistent with that supposition.
You’ll be happy to hear that I’m not planning to do a dramatic reading of the Foreign Affairs essay, entertaining as I’m sure that would be, but I did think it might be useful for me to briefly outline the argument as a way of teeing up the subsequent conversation. And the basic argument in the article is that whereas there are, indeed, all sorts of very new technologies in use in this war, when you actually look carefully at the results they’re producing, at the attrition rates that they’re actually causing, at the ability of the two sides to gain ground and to suffer the loss of ground, the actual results being produced by all this very new technology are surprisingly less new than is assumed and supposed in the argument that we’re looking at some transformational, discontinuous moment in which a new age of defense dominance is dawning. This doesn’t mean that nothing’s changing or that the United States military should do in the future exactly what it’s done in the past. But the nature of the change that I think we’re seeing is evolutionary and incremental, as it has been for the last hundred years, and if you think what’s going on is incremental evolutionary change rather than discontinuous transformation, that then has very different implications for what the U.S. should do militarily. So just to unpack a little bit of that by way of pump priming, let me just cite some examples of what one actually observes in the outcomes of the use of all these new technologies as we’ve seen in Ukraine. So let’s start with casualty rates and attrition. At the heart of this argument that new technology is creating a new era of defense dominance is the argument that fires have made the battlefield so lethal now that the kind of offensive maneuver you saw in World War II or in 1967 or in 1991 is now impossible.
And, yet, the actual attrition rates of, for example, tanks, right—tanks tend to be the weapon system that gets the most attention in this context—are remarkably similar to what we saw in the world wars. So in the first twelve months of the fighting in Ukraine, depending on whose estimates you look at, the Russians lost somewhere between about half and about 96 percent of their prewar tank fleet. The Ukrainians lost somewhat in excess of 50 percent of their prewar tank fleet, and intuitively that looks like a heavy loss rate, right? Fifty (percent) to 96 percent of what you opened the war with, that seems pretty—you know, pretty dangerous. But in historical context it’s actually lower than it frequently was in World War II. In 1943, the German army suffered an attrition rate to the tanks it owned at the beginning of the year of 113 percent. They lost more tanks in 1943 than they owned in January 1943. Their casualty rate went up in 1944. They lost 122 percent of all the tanks they owned in January of 1944. So these attrition rates, while high, aren’t unusually high by historical standards. What about artillery, right? Artillery is the single largest casualty inflicter on the modern battlefield, defined as since the turn of the twentieth century, 1900. As far as we can tell, the attrition rate from Ukrainian artillery fire on Russian forces in this war looks to be on the order of about eight casualties inflicted per hundred rounds of artillery fired, and that’s higher than in World War II but not discontinuously, radically higher. In World War II that figure would have been about three casualties per hundred rounds fired. In World War I that figure would have been about two casualties per hundred rounds fired. If you chart that over time, what you see is an essentially linear, straight-line incremental increase of about an additional 0.05 casualties per hundred rounds fired per year over a century of combat experience.
There’s no sudden discontinuous increase as a result of drones or networked information or space-based surveillance at the end of the period. What about ground gain and ground loss? The purpose of attrition on a modern battlefield is to change who controls how much territory, and the whole transformation argument is that all this putatively much more lethal technology is making ground gain much, much harder than in the past, and yet the Russian offensive that opened the war, mishandled as it was in so many ways, took over 42,000 square miles of Ukraine in the first couple of months of the war. The Ukrainian Kyiv counteroffensive retook more than 19,000 square miles. Their Kharkiv counteroffensive retook 2,300 square miles. The Kherson counteroffensive took back more than 200 square miles. There’s been plenty of defensive stalemate in the war, right? The Russian offensive on Bakhmut took ten months to take the city. Cost them probably sixty (thousand) to a hundred thousand casualties to do it. The Mariupol offensive took three months to take the city. But this war has not been a simple story of technologically determined offensive frustration. There have been offensives that have succeeded and offensives that have failed with essentially the same equipment. Drones didn’t get introduced into the war in the last six months. Drones were in heavy use from the very outset of the fighting, and this kind of pattern of some offensives that succeed, some offensives that don’t, like the attrition rate, is not particularly new. I mean, the popular imagination tends to see World War I as a trench stalemate created by the new technology of artillery and machine guns and barbed wire, and World War II as a war of offensive maneuver created by the new technologies of the tank, the airplane, the radio. But neither World War I nor World War II was a homogeneous experience in which everything was defensive frustration in World War I and everything was offensive success in World War II.
That wasn’t the case in either of the two world wars. The Germans advanced almost to the doorsteps of Paris in the initial war-opening offensive in 1914. In 1918, the German spring offensives broke clean through Allied lines three times in a row, and the subsequent Allied counteroffensive produced a general advance on a hundred-eighty-mile front. There was a lot of ground that changed hands in World War I as a result of offensives, in addition to the great defensive trench stalemate of 1915 to mid-1917. In World War II some of the most famous offensive failures in military history were tank-heavy attacks in 1943 and 1944. The Battle of Kursk on the Russian front cost the German attackers more than a hundred and sixty thousand casualties and more than seven hundred lost tanks. The most tank-intensive offensive in the history of war, the British attack at Operation Goodwood in 1944, cost the British a third of all the British armor on the continent of Europe in just three days of fighting. So what we’ve seen in observed military experience over a hundred years of frequent observational opportunity is a mix of offensive success and defensive success, with technologies that are sometimes described as defense dominant and yet nonetheless see breakthroughs, and technologies that are sometimes seen as offense dominant and yet sometimes produce defensive stalemates, and what really varies is driven not so much by the equipment as by the way people use it. And the central problem in all of this is that military outcomes are not technologically determined. The effects of technology in war are powerfully mediated by how human organizations use it, and there are big variations in the way human organizations use equipment.
And if you just look at the equipment alone and expect that that’s going to tell you what the result of combat is going to be, and you don’t systematically account for how the human organizations involved adapt to what the technology might do on the proving ground to reduce what it can do on the battlefield, then you get radically wrong answers, and I would argue that’s what’s going on in Ukraine. Both sides are adapting rapidly, and the nature of the adaptations that we’re seeing in Ukraine is very similar to the nature of the adaptations we’ve seen in previous great power warfare. Again, incremental lineal extensions of emphases on cover, emphases on concealment, combined arms, defensive depth, mobile reserve withholds—these are the ways that all great power militaries have responded to increasingly lethal equipment over time to reduce their exposure to the nominal proving ground lethality of weapons in actual practice. The problem is this collection of techniques—and in other work I’ve referred to them as the modern system, this kind of transnational epistemic community of practice in the conduct of conventional warfare—to do all these things right and minimize your exposure is technically very challenging. Some military organizations can manage this very complex way of fighting; others cannot. Some can do it on one front and not on another front, and the result is we get a lot of variance in the degree to which any given military at any given moment embraces the entirety of this doctrinal program. Where they do, defenses have been very hard to break through for a hundred years. This isn’t something that came about in February of 2022 because of drones and networked information. This has been the case repeatedly for a century of actual combat. But where they don’t, where defenses are shallow, where reserve withholds are too small, where combined arms aren’t exploited, where cover and concealment isn’t exploited, then casualty rates go way, way up.
Then breakthrough becomes possible. Then attackers can gain a lot of ground with tanks or without tanks. The German offensives that broke clean through Allied defensive lines in 1918 had almost no tanks. The first of them, Operation Michael, was a one-million-soldier offensive that had exactly nine tanks in support of it. So the differences that have mattered are the interaction of increasingly lethal technology with these variations in the ability of real human organizations to master the complexity needed to fight in a way that reduces exposure to it, and that’s the same thing we’ve seen in Ukraine. Where defenses have been shallow and haven’t had enough reserves behind them, you’ve gotten breakthroughs. Where they’ve been deep and adequately backed by reserves, as we’ve seen in this summer’s counteroffensive over the last three or four months, for example, they’ve not been able to break through, and this isn’t a new story. This is just a recapitulation of a hundred years’ worth of military experience.

If that’s so, then what difference does it make to the U.S.? So, again, as I suggested earlier, that doesn’t mean don’t change anything, right? A 1916 tank on a modern battlefield would not fare well. Part of the stability in these kinds of outcomes is because people change the way they do business. They change the way they fight. They update their equipment. They execute measure/countermeasure races, and so we need to continue to do that. Depth is probably going to increase. Reserve withhold requirements are going to go up. Demands for cover and concealment are going to increase. There will be technological implications stemming from the particular measure/countermeasure races that are emerging now, especially with respect to drones. Almost certainly the U.S. Army is going to have an incentive, for example, to deploy counter-drone escort vehicles as part of the combined arms mix, moving forward. But the principle of combined arms that’s behind so much of the way the U.S. Army fights is very unlikely to change very much. What’s going to happen is a new element will be added to the combined arms mix, and escort jammers and anti-aircraft artillery and other air defense systems that are optimized for drones will become part of the mix of tanks and infantry and engineers and signals and air defense and all the rest, moving forward.

The whole revolution argument, though, is not that, right? The reason people refer to this as a revolution, as transformation, is they’re using language that’s designed to tee up the idea that ordinary orthodox incremental updating, business as usual, isn’t enough in this new era because of drones, because of hypersonics, or space-based surveillance or whatever. We need something more than that, and I think if we look closely at what’s going on in Ukraine, what we see is not an argument that we need to transform the way the U.S. military does business. What we see is an argument for incremental change, which implies that incremental adaptation is appropriate, that it’s not the wrong thing to do. I think it’s possible to over-innovate. I think there are ample historical examples of militaries that have gone wrong not by being resistant to innovation—there are plenty of those, too—but by doing too much innovation. In the 1950s and 1960s the U.S. Air Force transformed itself around an idea that conventional warfare was a thing of the past, that all wars of the future would be nuclear, and it designed airplanes for nuclear weapon delivery that were horribly ill-suited to the conventional war in Vietnam that it then found itself in. The U.S. Army transformed its doctrine following a particular understanding of the lethality of precision-guided anti-tank weapons in the 1973 Arab-Israeli war, adopted a concept called active defense that relied on static defense in a shallow disposition from fixed positions, emphasizing the ostensible new firepower of anti-tank weapons, found that that was very innovative but very ineffective, and abandoned it in favor of the much more orthodox and conventional AirLand Battle doctrine, of which the doctrine we use now is a lineal descendant. There are plenty of examples of militaries that have over-innovated. This language of revolution and transformation is designed to promote what I’m concerned could be over-innovation again. I think we could talk more about the particulars of what incremental adaptation should comprise, but I think that’s the right way forward in light of what we actually observe about what’s going on in Ukraine.

FASKIANOS: Fantastic. Thank you for that, Steve. That was great. Let’s go now to all of you for your questions. (Gives queuing instructions.) And so don’t be shy. This is your time. We have our first question from Terrence Kleven.

Q: Hello. Can you hear me?

FASKIANOS: We can. If you could tell us your affiliation that would be great.

Q: Yes, very good. Terrence Kleven. I’m at Central College in Pella, Iowa, and I teach in a philosophy and religious studies department, where I teach quite a lot of Middle Eastern studies. Thank you very much for your presentation, because so much of this we don’t talk about enough and we don’t understand, and I appreciate the opportunity to hear what you have to say and look forward to reading some of your material. Just kind of a practical question: why aren’t the Russians using more planes in this war, or are they and we just don’t have a report of that? I assume that the Russian air force is much superior to what the Ukrainians have, but it doesn’t seem to give them a great advantage. What’s missing? What’s going on?

BIDDLE: Yeah. You’re raising a question that has bedeviled military analysts in this war since its beginning. Part of the issue is the definition of what a plane is, right?
If we define a plane as something that uses aerodynamic lift to fly through the air and perform military missions, the Russians are using lots of planes; they just don’t have pilots. We call them drones. But a drone, to a first approximation, is just a particular inexpensive, low-performance airplane that is relatively expendable because it’s inexpensive. But because it’s inexpensive it’s also low performance. If by airplanes one includes drones, then there’s lots of airplane use going on. What you had in mind with the question, I’m sure, is the airplanes that have people in them—why aren’t they more salient in the military conduct of the war? The Russians have tried to use piloted aircraft. The trouble is the loss rates have kept them largely out of the sky. So this again gets back to the question of human adaptation to new technology. Air forces—and navies, by the way, but that’s a different conversation—are much more exposed than ground armies are to the technology changes that produce increasing lethality. Ground armies have much easier access to cover and concealment. It’s hard to find much cover and concealment up there in the sky, right? You’re highlighted against a largely featureless background. There are things you can do as an air force to try and reduce your exposure to precision-guided anti-aircraft weapons, and the U.S. Air Force, for example, practices those extensively. But the complexity of operating an air force to be effective at the mission called SEAD—suppression of enemy air defenses—is very high, and it requires a lot of practice and it requires a lot of flight hours and it requires you to burn a lot of fuel in training, and the U.S. Air Force is willing to do that. The Russians historically have not. Therefore, they’re not very good at it.
Therefore, they have been very exposed to the lethality of precision-guided Ukrainian anti-aircraft defenses and, therefore, they’ve mostly decided not to expose themselves to this fire. They fly mostly over friendly terrain, especially in metropolitan Russia, and they fly at low altitudes that keep them under the radar, which is a cliché that’s leached into public conversation because of the actual physics of the way radar works and responds to the curvature of the earth. If the Russians operate over Russian territory at low altitude and launch cruise missiles at huge distances, then their airplanes don’t get shot down as much. But then the airplanes are a lot less effective and contribute a lot less, and that’s the tradeoff that the Russians have accepted with respect to the use of airplanes. The airplanes they use a lot are unpiloted, cheap, low-performance drones, which they are willing to get shot down in huge numbers, and they do get shot down in huge numbers. But piloted aircraft have played a limited role because the air defense environment is too lethal for an air force with skills no better than the Russians’ to survive in it.

FASKIANOS: Thank you. I’m going to take the next question from Mike Nelson.

Q: Thanks for a very interesting overview. I work at the Carnegie Endowment for International Peace and also have taught at Georgetown on internet policy and the impacts of digital technologies. It seems to me that one of the big changes with this war has been the incredible transparency, more information on what’s actually going on on the ground from social media, satellite photos, drone photos. I saw a tweet today about how they’re able to infer how many Russian soldiers have mutinied by counting these soldiers marching back from the front, presumably under armed guard. It just seems that there’s a lot more information on what’s going on hour by hour.
I wonder if that is causing some changes on both the Russian and the Ukrainian side, and whether the insertion of disinformation to make it appear that things are going differently than it seems is also something that’s getting better and better. Thank you.

BIDDLE: Yeah. I mean, the information environment in Ukraine is complicated in ways that the debate often doesn’t deal with very well, in my view. So, starting at the superficial level, public perceptions of what the lethality of first-person-view kamikaze drones has been against tanks and artillery are wildly exaggerated, and the reason why the public impression is wildly exaggerated is because the medium formerly known as Twitter puts up endless videos of successful attacks. But nobody posts a video of their failed attack, so we only see the subset of all drone missions that succeeded. The ones that don’t are invisible. Therefore, the public gets this impression that there are successful drone missions by the millions all the time, and there are serious selection effects with the way the public understands drone success rates in light of that. So one point is that the apparent transparency is subject to a variety of selection biases that lead to misunderstandings of the transparency of the battlefield as a whole. Similarly, there are lots of videos and images of Russian soldiers in a trench, and especially videos of Russian soldiers in a trench just before a quadcopter drone drops a grenade on them and kills them. You don’t see any video feeds of a drone flying over a camouflaged position where you can’t see anything, because nobody’s going to post that, right? It’s not interesting enough. But, therefore, again, we get the selection effect. People believe that everything is visible and everything is transparent because every video feed they see, and they see a lot of them, shows a visible target.
The trouble is you’re not seeing the failed drone missions that didn’t produce a visible target, and those are the vast majority as far as we can tell from more careful analyses that try to look at the totality of drone missions rather than just the selected subset that appear on X, formerly Twitter. Now, that leads to the general issue of how transparent the modern battlefield is, and I would argue that the modern battlefield is a lot less transparent than people popularly imagine it to be. The cover and concealment available on the earth’s surface to a military that’s capable of exploiting it is still sufficient to keep a sizeable fraction of both militaries’ targets invisible to the other side most of the time, and that’s why the artillery casualty rate hasn’t gone up dramatically as a result of all this. It’s because cover and concealment is still keeping most of the targets out of view. So I would argue the battlefield is less transparent than we often assume it is, and in part that’s because the systems that would generate information are countered by the other side so that they generate less information. Again, take drones, which have been the thing that everybody’s been focusing on. There have been multiple waves of measure/countermeasure races just on the technical side, setting aside tactical adaptation, with respect to drones already. When the war opened, the primary drone in use, especially on the Ukrainian side, was the Bayraktar TB2, a large, you know, capable, fairly expensive Turkish-built drone which was very lethal against exposed Russian armored columns. Then several things happened. One is the armored columns decided to get less exposed. Smart move on the Russians’ part.
The other thing is the Russian air defense system adapted and started shooting down Bayraktar TB2s at a huge rate, to the point where the Ukrainians stopped flying them because they were so vulnerable. Instead, drones shifted from big, expensive, higher-performance drones to smaller, cheaper, lower-performance drones, which were so cheap that it didn’t make sense to fire expensive guided anti-aircraft missiles at them anymore, and then the air defense environment shifted to emphasize jamming, which is even cheaper than the drones, and anti-aircraft artillery firing bullets that are cheaper than drones. So the systems that would create this transparency and that would give you this information don’t get a free ride. The opponent systematically attacks them and systematically changes the behavior of the target so that the surviving seekers have less to find. In addition to cover and concealment, and complementary to it, is dispersion, and what dispersion of ground targets does is that even if you find a target it may very well not be worth the expenditure of an expensive precision munition to kill. A guided 155-millimeter artillery shell costs on the order of a hundred thousand dollars a shell. If you’re shooting it at a concentrated platoon of enemy infantry, that’s a good expenditure. If you’re shooting it at a dispersed target where they’re in one- or two-soldier foxholes, now even if you know where all the foxholes are—even if your drones have survived, the concealment has failed, and the drone has accurately located where every single two-soldier foxhole is—does it make sense to fire a $100,000 guided artillery shell at each of them, or are you going to run out of guided artillery shells before they run out of foxholes, right? So the net of all of this—the technical measure/countermeasure race and the tactical adaptation—is that I would argue that the battlefield is actually not as transparent as people commonly assume.
If it were, we’d be seeing much higher casualty rates than what we’re actually seeing. There’s incremental change, right? The battlefield is more transparent now, heaven knows, than it was in 1943. But the magnitude of the difference and the presence of technical measures and countermeasures is incremental rather than transformational, and that’s a large part of the reason why the change in results has been incremental rather than transformational.

FASKIANOS: So we have a lot of questions, but I do want to just ask you, Steve, to comment on Elon Musk’s—you know, he shut down his Starlink satellite communications so that the Ukrainians could not do their assault on—on Russia. I think it was the submersible—they were going to strike the Russian naval vessels off of Crimea. So that, obviously—the technology did affect how the war was—the battlefield.

BIDDLE: It did, but you’ll notice that Crimea has been attacked multiple times since then and metropolitan Russia has been attacked multiple times since then. So there are technical workarounds. On the technical side rather than the tactical side, there are multiple ways to skin a cat. One of these has been that the U.S. has tried to make Ukraine less dependent on private satellite communication networks by providing alternatives that are less subject to the whims of a single billionaire. But tactical communications of the kind that Starlink has enabled are very useful to the Ukrainians, right? No doubt about it, and that’s why the U.S. government is working so hard to provide alternatives to commercial Starlink access. But even there, even if you didn’t have them at all, the Ukrainian military wouldn’t collapse. I mean, in fact, most military formations are taught how to function in a communications-constrained environment because of the danger that modern militaries will jam their available communication systems or destroy communication nodes or attack the satellites that are providing the relays. Certainly, the U.S. military today is not prepared to assume that satellite communications are always going to be available. We train our soldiers how to operate in an environment in which those systems are denied you, because they might be. So, again, I mean, tactical adaptation doesn’t eliminate the effects of technological change—having Starlink or being denied Starlink, right, this Musk-owned communication satellite constellation that was the source of all the kerfuffle. It’s not irrelevant whether you have it or not, but it’s less decisive than you might imagine if you didn’t take into account the way that militaries adapt to the concern that they might be denied these systems, or that the enemy might have them and they might not, which are serious concerns. Certainly, if the U.S. and Russia were true belligerents, both the danger of anti-satellite warfare destroying significant fractions of those constellations and the danger of jamming or otherwise making them unavailable are serious problems, so militaries try to adapt to deal with their absence if they have to.

FASKIANOS: Great. We have a written question from Monica Byrne, a student at Bard College: Can you share thoughts on strategy for Israel and Gaza, given the conditions in Gaza?

BIDDLE: Yeah. So, shifting gears now from Ukraine to the Middle East, given Israel’s declared war aim, right—if Israel’s aim is to topple the Hamas regime and then hopefully replace it with something, but that’s another conversation. Let’s for the moment just talk about the military dynamics of realizing their stated war aim of toppling the Hamas regime. That will certainly require a ground invasion that reoccupies, at least temporarily, the entirety of Gaza, right? Airstrikes aren’t going to accomplish that war aim. Special forces raids aren’t going to accomplish that war aim. The Hamas administrative apparatus is, A, too large and, B, too easily concealed, especially underground, for those kinds of techniques to be sufficient.
So if the Israelis really are going to topple Hamas a large-scale ground invasion is needed. That has obvious horrible implications for collateral damage and civilian fatalities in Gaza—urban warfare is infamously destructive of capital and of civilian human life—but also for military casualties to the Israelis. Urban warfare is a radically advantageous military environment for defenders and so Israel inevitably will take serious losses if they really expect to completely reoccupy Gaza as would be needed to depose Hamas. Now, there are ways that conventional militaries can try and reduce either the loss of innocent civilian life or casualty rates to their own forces but none of these things are perfect and the techniques militaries use to reduce civilian fatalities can be exploited by defenders who want to take advantage of them to increase Israeli military casualties and limit the Israelis’ ability to limit collateral damage. You can fire only at identified targets and not at entire buildings. You can use small-caliber weapons rather than large-caliber artillery and missiles. You can warn the civilian occupants of a building either with leaflets or text messages or the Israeli technique that’s called knocking on the roof where they drop a nonexplosive weapon on the ceiling to create a sound that tells the occupants they are about to be attacked so they leave. There are a variety of things like that that you can do and that the U.S. should hope that the Israelis are going to do. 
But the whole problem here is that the Hamas political and military infrastructure is deeply intermingled with the civilian population in Gaza, and so even if you’re going to be as discriminating as modern technology and military skill potentially could make you, you’re still going to kill a lot of civilians and Hamas is not going to conveniently remove the military infrastructure from the civilian population to make it easier for the Israelis to kill the fighters and not kill the civilians. They’re going to keep them tightly intermingled. Now, the Israelis can reduce their losses by being slower and more deliberate and methodical in the way they enter Gaza. There’s been a discussion in recent weeks about the difference between Mosul and Fallujah and the U.S. experience of urban warfare in Iraq. In Fallujah, we entered quickly with a large ground force that was fairly dependent on small arms direct fire and relatively less reliant on artillery and airstrikes. In Mosul with Iraqi allies on the ground, we did the opposite. Very slow entry. The campaign took months. Limited exposure, small-caliber weapons, heavy emphasis on airstrikes and artillery to reduce the ground—even so, thousands of civilians were killed in Mosul. Even so, our Iraqi allies took serious casualties. There’s no way for the Israelis to do this Gaza offensive if they’re going to realize their war aim that won’t destroy Gaza, kill a lot of civilians, and suffer a lot of casualties themselves. All these things are marginal differences at the most. FASKIANOS: Thank you. I’m going to go next to Dan Caldwell. Q: Oh, Steve, thanks very much for a very interesting overview. I’d like to raise another subject that is, obviously, very broad but I would really appreciate your comments on it and that’s the question of intelligence and its relationship to military operations that you’ve described. 
Broadly speaking, we can separate out tactical intelligence from strategic intelligence, and in the case of tactical intelligence the exploitation of terrorists' cell phone records and the like contributed to military successes in Iraq and Afghanistan. In a strategic sense, the breaking of the Japanese codes, Purple, and the Ultra secret of Enigma in World War II contributed to the Allies' success, and in terms of the Middle East the strategic failures of Israeli intelligence in 1973 and, I would argue, in the recent Hamas attacks contributed to the losses that Israel has suffered. So how do you think about the relationship of intelligence to military strategy? BIDDLE: Yeah. I mean, intelligence is central to everything in security policy, right? It's central to forcible diplomacy. It's central to preparation for war. It's central to the conduct of military operations. So intelligence underlies everything. All good decision making requires information about the other side. The intelligence system has to provide that. But the ability of the intelligence system to create transformational change is limited. Let's take the national-level strategic intelligence question first and then we'll move to things like Ultra and battlefield uses. As you know, the problem of military surprise has been extensively studied, at least since the 1973 war in which Israel was famously surprised by the Egyptian attack in the Sinai. There's been an extensive scholarly focus on this problem of intelligence failure and surprise—how can this possibly happen? And the central thrust of that literature, I would argue, has been that almost always after a surprise you discover later that the surprised intelligence system had information that should have told them an attack was coming. They almost always receive indicators. They almost always get photographic intelligence. All sorts of pieces of information find their way into the owning intelligence system. And yet, they got surprised anyway. 
How could this happen? And the answer is that the information has to be processed by human organizations, and the organizational challenges and the cognitive biases that individuals have when they're dealing with this information combine in such a way as to frequently cause indicators not to be understood, used, and exploited to avoid surprise. Part of the reason for that—the details, of course, are extensive and complex—is that you get indicators of an attack that then didn't happen far more often than you get indicators of the attack that does happen. You get indicators all the time, but usually there's no attack, and the trick then is how do you distinguish the indicator that isn't going to become an attack from the indicator that is going to become the attack when you've always got both. And especially in a country like Israel, where mobilizing the reserves has huge economic consequences, if you mobilize the reserves every time you get indicators of an attack you exhaust the country, and the country stops responding to the indicators anymore. It's the cry-wolf problem. I mean, the first couple of times you cry wolf people take it seriously. The eighth, ninth, tenth, twelfth time they don't. So because of this, the ability of, say, new technology to do away with surprise is limited. In a more transparent world in which we can tap people's cell phones and tap undersea cables to find out what governments are saying to themselves, we have a better ability to collect information. But there are still organizational biases, cognitive problems, and just the basic signal-to-noise, wheat-to-chaff ratio issue of lots and lots of information, most of which is about an attack that isn't going to happen. And distinguishing that from the ones that are going to happen is an ongoing problem that I doubt is going to be solved, because it isn't a technological issue. 
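The signal-to-noise problem described here is, at bottom, a base-rate problem, and a short Bayesian calculation shows why more collection alone does not fix it. A minimal sketch in Python (all probabilities are illustrative assumptions, not figures from the discussion):

```python
# Bayes' rule applied to warning indicators: given that an indicator
# has been observed, how likely is a real attack? All probabilities
# below are illustrative assumptions.

def posterior_attack(p_attack: float,
                     p_ind_given_attack: float,
                     p_ind_given_quiet: float) -> float:
    """P(attack | indicator) via Bayes' rule."""
    p_indicator = (p_ind_given_attack * p_attack
                   + p_ind_given_quiet * (1.0 - p_attack))
    return p_ind_given_attack * p_attack / p_indicator

# Attacks occur in 1% of warning periods; the indicator fires 90% of
# the time before a real attack and 10% of the time in quiet periods.
p = posterior_attack(p_attack=0.01, p_ind_given_attack=0.9, p_ind_given_quiet=0.1)
print(f"P(attack | indicator) = {p:.1%}")  # 8.3% -- most alarms are false
```

Even with a fairly reliable indicator, roughly eleven out of twelve alarms are false under these assumptions, because attacks are rare, and that is the arithmetic behind the cry-wolf problem.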
It resides in the structure of human organizations and the way the human mind operates to filter out extraneous and focus on important sensory information, and human cognitive processes aren’t changing radically and human organizations aren’t either. So at the strategic level I don’t see transformation coming soon. Then we’ve got the battlefield problem of what about intercepted communications, for example, which have changed the historiography of World War II in an important way. We’ll note that that didn’t cause the Allies to defeat the Germans in 1944, right? I mean, the Allies cracked the German and the Japanese codes long before the war ended and, yet, the war continued, and this gets back to this question of how militaries adapt to the availability of information about them on the other side. At sea where there’s not a lot of terrain for cover and concealment, right, then these kinds of communications intercepts were more important and as a result the Japanese navy was, largely, swept from the Pacific long before the war ended in 1945. But wars are ultimately usually about what goes on on land, and on land even if you intercept people’s communications if they’re covered, concealed, dispersed, and in depth being able to read German communications, which we could do in 1944, didn’t enable us to quickly break through, rapidly drive to Berlin and end the war three months after the Normandy invasions. In spite of the fact that we could read the communications traffic we couldn’t do those things because the communications traffic is only part of success and failure on the battlefield. 
So if that was the case in World War II, where we had, you know, unusually good COMINT and an unusually good ability to break the enemy's codes and read their message traffic, then, again, I would argue that improvements in intelligence technology today are certainly helpful, and they're worth having and we should pursue them and use them, but they're not likely to transform combat outcomes in a theater of war to a radically greater degree than they did when we had that kind of information in 1944. FASKIANOS: So I'm going to combine the next two questions because they're about innovation, from the Marine Corps University and Rutgers University: You mentioned over-innovation. Can you explain what that is and how it can be detrimental? And then, are you concerned that the Department of Defense R&D program could be at risk of being out of balance by overemphasizing advanced technology versus getting useful technology deployed and into the field? BIDDLE: I think that's one of the most important implications of this war. The United States has historically chosen to get way out on the envelope of what technology makes possible in weapons acquisition, creating extremely expensive weapons that we can buy in very small numbers, and that we evaluate and decide to buy because of their proving-ground potential—because of what they can do against targets that haven't adapted to them yet. What the record of adaptation in Ukraine shows, I think, is that the actual lethality of very sophisticated weapons is not as high as it looks on a proving ground, because the targets are going to be noncooperative, and the real-world performance of extremely expensive sophisticated technologies is normally less than it looks. And if that's the case, we are probably overspending on very sophisticated, very expensive weapons, which we can only buy in very small numbers and which, if they don't produce this radical lethality, wouldn't be worth the expenditure that they cost. 
And if the adaptation of the target is going to reduce their lethality and increase their vulnerability, which is certainly what we're observing in Ukraine, then we're going to have a dickens of a time replacing them when they get lost, right, because very sophisticated high-technology weapons, among other things, require a supply chain of materials that are often quite scarce—rare earths, cobalt, lithium. One of the reasons why the American defense industrial base has had a hard time responding rapidly to the demands that the expenditure rate in Ukraine has created is because of these complicated supply chains, which we can manage when we're building things in small numbers—which we think is sufficient because we're expecting each one of them to be tremendously lethal. If we now realize that they're less lethal in practice than we expect them to be, and therefore we need larger numbers of them, how are we going to get the materials we need to do that? And the experience in Ukraine has been that the kind of revolution-in-military-affairs expectation for the lethality of high technology just hasn't been realized. Yes, weapons are very lethal in Ukraine, but not orders of magnitude more so than they were in 1944, right? And so I think this ought to suggest to us that the historical post-World War II U.S. strategy of emphasizing very high technology at very high cost in very small numbers, to compensate for small numbers with radical lethality, may very well be misguided. It works well when you're fighting an opponent like the Iraqis, who can't handle the complexity of cover and concealment, combined arms, and all the rest. They're exposed, and the weapons have the kind of proving-ground effect that you expect because the targets are not under cover. It's not clear that this has been producing those kinds of results in Ukraine, and it's not clear that it would produce those kinds of results for the United States in a coming great power conflict. 
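The budget logic behind that argument can be sketched numerically. Here is a toy comparison in Python (all costs and kill probabilities are illustrative assumptions, not data from the discussion): a few exquisite weapons against many cheap ones, as proving-ground lethality degrades.

```python
# Toy comparison: expected targets destroyed for a fixed budget,
# a few exquisite weapons vs. many cheap ones, as proving-ground
# lethality degrades against an adapting opponent.
# All figures are illustrative assumptions, not data from the talk.

def expected_kills(budget: int, unit_cost: int, kill_prob: float) -> float:
    """Buy as many weapons as the budget allows; each kills with kill_prob."""
    return (budget // unit_cost) * kill_prob

BUDGET = 1_000_000_000                     # $1B procurement budget
EXQUISITE, CHEAP = 50_000_000, 1_000_000   # unit costs

cheap_kills = expected_kills(BUDGET, CHEAP, 0.01)  # unguided, low per-shot odds
for label, p in [("proving ground", 0.9), ("adapted enemy", 0.3)]:
    exq_kills = expected_kills(BUDGET, EXQUISITE, p)
    print(f"{label}: exquisite={exq_kills:.0f}, cheap={cheap_kills:.0f}")
```

Under these assumptions the exquisite force wins only if its proving-ground lethality survives contact with an adapting enemy; once per-shot effectiveness drops from 0.9 to 0.3, the cheap, numerous force comes out ahead, which is the crux of the argument.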
FASKIANOS: Thank you. I’m going take the next question from Genevieve Connell at the Fordham graduate program in international political economy and development. How much does successful military strategy rely on stable domestic economic systems to fund it or is this less of an issue when one or both sides have strong geopolitical support and aid? BIDDLE: War is very expensive, as the Ukraine war is reminding us, right? This isn’t news. The expenditure rates in modern industrial age warfare are massively expensive to maintain and that in turn means that the strength of the national economy is a fundamental foundational requirement for success in modern great power warfare. This, of course, leads to the set of tradeoffs that are fundamental in grand strategy, right? Grand strategy, as opposed to operational art, military strategy, or tactics, integrates military and nonmilitary means in pursuit of the ultimate security objectives of the state and one of the more important of the nonmilitary means is the economy. So you need a large GDP to support a large expensive war effort. The way you maximize GDP is with international trade. International trade makes you vulnerable to cutoff in time of war through blockade. Therefore, if we just maximize GDP in the short run we run the risk—we increase our vulnerability in time of war or blockades. We say: Oh, no, we don’t want to do that. Let’s reduce the amount of international trade we do, make ourselves more self-sufficient. Now GDP growth rates go down and now the size of the military you can support in steady state goes down. There’s a fundamental tradeoff involving the interaction between classically guns and butter in the way you design the economy in support of the grand strategy you have in mind for how you’re going to pursue your security interest in the international system at any given time. So, yeah, a productive expanding economy is essential if you plan to be able to afford the cost of modern warfare. 
The implications for what that means for things like international trade, though, are complicated. FASKIANOS: Great. I’ll try to sneak in one last question from David Nachman. Q: Thank you. Thank you for this really interesting presentation. I teach at the Yale Law School, nothing related to the topic of today’s submission and discussion. I’m just wondering, and you captured it towards the end here where you said something about wars are won and lost on land. With the advent of cyber and all the technological development that we’re seeing in our armed forces is that still true as a matter, you know, and are we—is the Ukraine and even Gaza experience sort of nonrepresentative of the true strategic threats that the United States as opposed to its allies really faces at sea and in the air? BIDDLE: Yeah. Let me briefly address cyber but then extend it into the sea and the air. One of the interesting features of cyber is it’s mostly been a dog that hasn’t barked, at least it hasn’t barked very loudly. There were widespread expectations as Russia was invading that cyberattacks would shut down the Ukrainian economy, would shut down the Ukrainian military effort, or vice versa, and neither of those things have happened. So I don’t—there have been plenty of cyberattacks, right, and there have been plenty of efforts at break in and surveillance and manipulation. So far none of them have been militarily decisive and it’s an interesting and I think still open question for the cyber community about why that has been so and what, if anything, does that tell us about the future of cyber threats to national military projects. But so far it hasn’t radically—it hasn’t produced a result that would have been different in the pre-cyber era. Now, when I say wars are won on land what I mean by that is that people live on the land, right? People don’t live in the air and people don’t live on the surface of the water. People live on land. Economies are on land. Populations are on land. 
That means that usually the stakes that people fight wars over are things having to do with the land. That doesn't mean that navies and air forces are irrelevant. We own a large one. I'm in favor of owning a large one. My friends in the Navy would be very upset if I said otherwise. But the purpose of the Navy is to affect people who live on the land, right? In classic Mahanian naval strategy, the purpose of the Navy is to destroy the opposing fleet, blockade the enemy's ports, destroy the enemy's commerce, and ruin the land-based economy, and it's the effect on the land-based economy that causes surrender or compromise or concession to the opponent or whatever else ends the war in ways that you hope are favorable to you. What this means, then, is that especially where we're dealing with large continental powers like Russia, classically—China's an interesting subcase, but let's talk about Russia—the ability to influence the Russian decision-making calculus that leads to an end to a war, or the beginning of a war, without affecting the life of people on land is very limited. Cyber has not proven able to do that. Air attack historically has not been a good tool for doing that. Navies do it by affecting the land-based economy, and I don't see that changing rapidly anytime soon. FASKIANOS: Well, Steve, thank you very much for this really insightful hour. I'm sorry we couldn't get to all of the questions and raised hands, so we'll just have to have you back. And thanks to all those of you who did ask questions. I commend to you, again, Steve Biddle's Foreign Affairs piece, "Back in the Trenches," and hope you will read that. Our next Academic Webinar will be on Wednesday, November 8, at 1:00 p.m. (EST) with José Miguel Vivanco, who is an adjunct senior fellow here for human rights, to talk about human rights in Latin America. So, Steve, thank you again. BIDDLE: Thanks for having me. FASKIANOS: And I—yes. 
And I’d just encourage you all to learn about CFR paid internships for students and fellowships for professors at CFR.org/careers. Our tenured professor and our fellowship deadlines is at the end of October. I believe it’s October 31, so there’s still time. And you can follow us on X at CFR_Academic. Visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. Thank you all again for being with us today. (END)
  • Immigration and Migration

    Julia Gelatt, senior policy analyst at the Migration Policy Institute, discusses the Biden administration’s expansion of the Temporary Protected Status (TPS) program and recent developments in U.S. imm…
  • Robots and Artificial Intelligence

    Lauren Kahn, research fellow at CFR, leads the conversation on AI military innovation and U.S. defense strategy.   FASKIANOS: Thank you, and welcome to today’s session of the Fall 2022 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach at CFR. Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Lauren Kahn with us to talk about AI military innovation and U.S. defense strategy. Ms. Kahn is a research fellow at CFR, where she focuses on defense, innovation, and the impact of emerging technologies on international security. She previously served as a research fellow at Perry World House, the University of Pennsylvania’s global policy think tank, where she helped launch and manage projects on emerging technologies and global politics, and her work has appeared in Foreign Affairs, Defense One, Lawfare, War on the Rocks, Bulletin of the Atomic Scientists, and the Economist, just to name a few publications. So, Lauren, thanks very much for being with us. I thought we could begin by having you set the stage of why we should care about emerging technologies and what they mean for us as we look ahead in today’s world. KAHN: Excellent. Thank you so much for having me. It’s a pleasure to be here and be able to speak to you all today. So when I’m setting the stage I’m going to speak a little bit about recent events and the current geopolitical situation and why we care about emerging technologies like artificial intelligence and quantum computing—things that seem a little bit like science fiction but are now becoming realities—and how our military is using them. And then we’ll get a little bit more into the nitty gritty about U.S. 
defense strategy, in particular, and how the Defense Department is approaching the adoption of some of these technologies, with a particular focus on artificial intelligence, since that’s what I’m most interested in. So I’ll say that growing political competition between the United States, China, and Russia is increasing the risk of great power conventional war in ways that we have not seen since the end of the Cold War. I think what comes to everyone’s mind right now is Russia’s ongoing invasion of Ukraine, which is the largest land war in Europe that we’ve seen since World War II, and the use of a lot of these new emerging capabilities. For the past few decades, really until now, we thought about war as something that was largely contained to where it was taking place and the parties involved, and most recent conflicts have been asymmetric warfare limited to traditional domains—on the ground, in the air, or even at sea—where the most prominent conflicts were those between nation-states and either weak states or nonstate actors, like the U.S.-led wars in Afghanistan and Iraq or interventions in places like Mali and related conflicts as part of the broader global war on terrorism, for example. And so while there might have been regional ripple effects and dynamics that shifted due to these wars, any spillover from these conflicts was a little bit more narrow, or due to the movement of people themselves, for example, in refugee situations. I’ll say, however, that the character of war is shifting in ways that are expanding where conflicts are fought and who is involved, and a large part of this, I think, is due to newer capabilities and emerging technologies. 
I’ll say it’s not entirely due to them, but I think the prominence of influence operations, misinformation, deep fakes, artificial intelligence, and commercial drones—which make access to high-end technology very cheap and accessible for the average person—has meant that these wars are going to be fought in new ways. We’re seeing discussion of things like information wars, where things are being fought on TikTok and in social media campaigns, where individuals can film what’s happening on the ground live, and where states no longer have, so to speak, a monopoly on the dissemination of information. I’ll speak a little bit more about some examples of these technologies. But, broadly speaking, this means that the battlefield is no longer constrained to the physical. It’s being fought in cyberspace and even in outer space, with the involvement of satellites and the reliance on satellite imagery and open-source satellite imagery like Google Maps. And so as a result, it will drive new sectors and new actors into the fray when it comes to fighting wars, and militaries have been preparing for this for quite a while. They’ve been investing in basic science research and development, testing, and evaluation in all of these new capabilities, from artificial intelligence, robotics, and quantum computing to hypersonics. These have been priorities for a few years, but I’ll say that the conflict in Ukraine and the way we’re seeing these technologies used has really put a crunch on the time frame that states are facing, and I’m going to speak a little bit more about that in a minute. But to give you an example of what it means to use artificial intelligence on the battlefield—what do these capabilities look like—my work before this conflict was largely hypothetical. It was hard to point to concrete cases. 
But I think now, as these technologies mature, you’re seeing them used in more ways. Artificial intelligence, for example, has been used by Russia to create deep fakes. There was a very famous one of President Zelensky that they combined with a cyberattack to place on national news in Ukraine, to make it look a little more believable, even though the deep fake itself was obviously computer generated. This shows how some of these technologies are evolving and, especially when combined with other technological tools, are going to be used to make influence operations and propaganda campaigns a little more persuasive. As other examples of artificial intelligence, there’s facial recognition technology being used to identify civilians and casualties. There’s natural language processing—a type of artificial intelligence that analyzes the way people speak; think of Siri, think of chat bots—with more advanced versions being used to read in radio transmissions, translate them, and tag them so that forces are able to go through them more quickly and identify what combatants are saying. There’s the use of 3D printing and additive manufacturing, where for very cheap—a 3D printer costs a thousand dollars or so, maybe less if you build it yourself—people are adding components to grenades and attaching them to smaller commercial drones to make a MacGyvered smart bomb that you can maneuver. So those are some of the commercial technologies that are being pulled into the military sphere and onto the battlefield. They might not be large. They might not be military in their first creation. 
But because they’re such general-purpose, dual-use technologies, they’re being developed in the private sector and you’re seeing them used on the battlefield and weaponized in new ways. There are other technologies that originated in the military and defense sectors, things like loitering munitions, which we’re seeing more of now, and a lot more drones. I’m sure a lot of you have been seeing coverage of the Turkish TB2 drones and the Iranian drones that are now being used by Russia in the conflict. These are not such new technologies. We’ve seen them. They’ve been around for a couple of decades. But they’re reaching a maturity in their technological life cycle where they’re a lot cheaper, a lot more accessible, and a lot more familiar, and now they’re being used in innovative and new ways. They’re being seen as less precious and less expensive. Not that they’re being used willy-nilly or that they’re expendable, but we’re seeing that militaries are willing to use them in more flexible ways. For example, in the early days of the campaign, Ukraine allegedly used the TB2 as a distraction when it wanted to sink a warship, rather than using it to try to sink the warship itself—using it for things it’s good for but maybe not what it was initially designed to be used for. Russia is now using the Iranian-made loitering munitions. They’re pretty reasonable in price—about $20,000 a pop—and so using them in swarms to take out some of Ukraine’s infrastructure has been a pretty effective technique. Ukraine, for example, is very good at shooting them down. I think at some point they were reporting an ability to shoot them down at a rate of around 85 percent to 90 percent. 
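Those figures imply a simple cost-exchange calculation. Here is a back-of-the-envelope sketch in Python (the $20,000 unit price and the 85 to 90 percent intercept rates come from the discussion; the "cost per leaker" framing is an illustrative simplification):

```python
# Cost per "leaker" -- each loitering munition that survives air
# defenses and reaches a target. The $20,000 unit price and the
# 85-90% intercept rates come from the discussion; the expected-cost
# framing is an illustrative simplification.

UNIT_COST = 20_000  # dollars per loitering munition

def cost_per_leaker(intercept_rate: float) -> float:
    """Expected attacker spend per munition that gets through."""
    return UNIT_COST / (1.0 - intercept_rate)

for rate in (0.85, 0.90):
    print(f"{rate:.0%} intercepted -> ${cost_per_leaker(rate):,.0f} per hit")
```

Even at a 90 percent shoot-down rate, each munition that reaches a target costs the attacker about $200,000, which can still be cheap relative to the infrastructure it damages or the interceptors expended against it.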
And so not all of the swarm was getting through, but because they’re so reasonably priced it was still a reasonable tactic and strategy to take. There are even some more cutting-edge, slightly unbelievable applications, like what’s being touted as an Uber for artillery, where you’re using algorithms similar to the kind Uber uses to decide which passengers to pick up first and where to drop them off to decide how to target artillery systems—which target is most efficient to hit first. And so we’re seeing a lot of these technologies being used, like I said, in new and practical ways, and it has really condensed the timeline on which states, especially the United States, want to adopt these technologies. Back in 2017, Vladimir Putin famously stated that he believed that whoever became the leader in AI would become the leader of the world, and China has very much publicized its plans to invest a lot more in AI research and development and to invest in bridging the gaps between its civil and military engineers and technologists to take advantage of AI by the year 2023. So we’ve got about one more year to go. And so I think that, recognizing this time crunch, the heat is on, so to speak, for the United States to adopt some of these newer capabilities. And we’re seeing that a lot now. There’s a lot of reorganization happening within the Department of Defense to better leverage and adapt in order to take advantage of some of these technologies. There’s the creation of the new chief digital and artificial intelligence office and the new emerging capabilities policy office, which are efforts to better integrate data systems and ongoing projects in the Department of Defense, et cetera, to implement them in broader U.S. strategy. There have been efforts as well to partner with allies in order to develop artificial intelligence. 
As part of the Indo-Pacific strategy that the Biden administration announced back in February of 2022, they announced that along with the Quad partners—Japan, Australia, and India—they are going to fund research, for example, for graduates from any of those four countries to come study in the United States if they focus on science, technology, engineering, and mathematics, to foster that integration and collaboration between our allies and partners and make better use of some of these things. I’ll say, even more recently, in April 2022, for example, looking at how Ukraine was using a lot of these technologies, the United States was able to fast-track one of its programs, called the Phoenix Ghost. It’s a loitering munition—still not well known. But the United States saw the capabilities requirement that Ukraine had and fast-tracked its own program in order to fulfill it. So these are being used for the first time. So, again, we’re seeing that the United States is using this as an opportunity to learn as well as to really take advantage and start kicking AI and defense innovation development into high gear. And I’ll say that doesn’t mean it’s without its challenges—the acquisitions process in particular: how the Department of Defense takes a program from research and development all the way to an actual capability that it’s able to use on the battlefield. What used to take maybe five years in the 1950s now takes a few decades. There are a lot of processes in between that make it a little bit challenging—all sorts of checks and balances in place, which are great, but which have slowed the process down a little bit. And so it’s harder for the smaller companies and contractors that are driving a lot of the cutting-edge research in these fields to work with the defense sector.
And so there are some of these challenges, which, hopefully, some of this reorganization that the Pentagon is doing will help with. But that’s the next step, looking forward, and I think it’s going to be the next big challenge that I’m watching over the rest of this year and the next six months. I threw a lot out there, but I’m happy to open it up for questions now and focus on anything in particular. I think that gave an overview of some of the things that we’re seeing now. FASKIANOS: Absolutely. That was insightful and a little scary—(laughs)—and I look forward now to everybody’s questions. As a reminder, after two and a half years of doing this: you can click on the raise hand icon on your screen to ask a question, and on an iPad or tablet click the more button to access the raise hand feature. When you’re called upon, please accept the unmute prompt and state your name and affiliation. You can also submit a written question via the Q&A icon—please include your affiliation there—and we are going to try to get through as many questions as we can. All right. The first raised hand comes from Michael Leong. Q: Hi. Is this working? FASKIANOS: It is. Please tell us your affiliation. Q: Hi. My name is Michael Leong. I’m an MPA student in public administration at the University of Arizona in Tucson. Given the frequent and successful use of drones in Ukraine, and how easily such accessible technology is being adapted to warfare, is there any concern that these tools could be used maliciously domestically, and what steps might be considered? Thanks. KAHN: Absolutely. That’s a great question. I think it’s broader than just drones as well, when you have this proliferation of commercial technology into the defense space and you have these technologies that are not necessarily weapons, for example. I think a good example is Boston Dynamics.
They make this quadruped robot with four legs. It looks kind of like a dog. His name is Spot. And he’s being used in all sorts of commercial applications—helping local police forces, et cetera—for very benevolent uses. However, there’s been a lot of concern that someone will go and, essentially, duct-tape a gun to Spot, and what will that mean. And so I think it’s a similar kind of question when you have some of these technologies where it depends on how you use them, and so it’s really up to the user. When you get things like commercial drones that individuals are using for reconnaissance, or using in combination with things like 3D printing to make weapons, it is going to be increasingly difficult to control the flow. Professor Michael Horowitz over at the University of Pennsylvania, who’s now in government, has done a lot of research on this, and you see that the diffusion of technologies happens a lot quicker when they’re commercially based than when they have a military origin. And so I think it’s definitely going to pose challenges, especially when you get things like software and artificial intelligence, which are open source and can be used from anywhere. So controlling export—and controlling, after the fact, how they’re used—is going to be extremely difficult. A lot of that right now is falling to the companies producing them to self-regulate, since they have the best ability to limit access to certain technologies. Take, for example, OpenAI. If any of you have played with DALL-E 2 or DALL-E Mini, the image-generating sandbox tools—they have limited what the public can access, certain features, and are testing themselves to see, OK, how are these being used maliciously.
I think a lot of them are testing how they’re being used for influence operations, for example, and making sure they’re able to regulate some of the features that allow more malicious use. But it is going to be extremely hard, and the government will have to work hand in hand with a lot of these companies and private actors that are developing these capabilities in order to do that. It’s a very good question, and not one that I have an easy answer to, but it is something that I’ve been thinking about a lot. FASKIANOS: Thank you. I’m going to take the next question from Arnold Vela, who’s an adjunct faculty member at Northwest Vista College: What is the potential value of AI for strategy—e.g., war planning—versus tactical uses? KAHN: Great. So, honestly, I think the benefit of a lot of artificial intelligence is replacing repetitive, redundant human tasks. So it’s not replacing the human. It’s making the human more efficient by reducing things like data entry and cleaning, and by being able to pull resources together. And so it’s actually already being used, for example, in war planning and war gaming. Germany and Israel have used AI to create sort of 3D battlefields where they can see all the different inputs of information and sensors. And I think that’s really where the value add—the competitive advantage—of artificial intelligence is. Having an autonomous drone is very useful, but I think what will really be the game changer, so to speak, will be making forces more efficient, so that they have a better sense both of themselves and of their adversaries, for example.
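A tiny illustration of the unglamorous efficiency gain being described here—automating a repetitive cleanup task so an analyst never does it by hand. The report entries are invented for illustration:

```python
# The unglamorous value-add: normalizing and deduplicating messy report
# entries before a human ever sees them. All data is invented.

raw_reports = [
    "  Convoy sighted near BRIDGE 7 ",
    "convoy sighted near bridge 7",
    "Fuel depot active",
    "FUEL DEPOT ACTIVE",
    "Convoy sighted near Bridge 7",
]

def normalize(entry: str) -> str:
    # Lowercase and collapse whitespace so trivially different strings match.
    return " ".join(entry.lower().split())

unique = sorted(set(normalize(r) for r in raw_reports))
print(unique)  # five raw entries reduce to two distinct reports
```

Nothing here is intelligent in the Terminator sense; it is exactly the kind of repetitive, redundant task whose automation KAHN argues is the real competitive advantage.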
So, definitely, I think the background, nonsexy stuff—the data cleaning and all the numbers bits—will be a lot more important than having a drone with AI capabilities built in, even though those kinds of things suck up the oxygen a little bit because they’re really exciting. It’s shiny. It’s Terminator. It’s I, Robot-esque, right? But I think a lot of it will be things like enabling linguists within the intelligence community to process and translate documents at a much faster pace—making individuals’ lives easier. FASKIANOS: Great. Thank you. I’m going to go next to Dalton Goble. Please accept the unmute. Q: Thank you. FASKIANOS: There you go. Q: Hi. I’m Dalton. I’m from the University of Kentucky, at the Patterson School of Diplomacy and International Commerce. Thank you for having this talk. I wanted to ask about the technology divide between the developed and developing world, and to hear your comments about how the use of AI in warfare, and the proliferation of such technologies, can exacerbate that divide. KAHN: Absolutely. I’ve been focusing a lot on how the U.S. and China and Russia, in particular, have been adopting these technologies, because they’re the ones investing in them the most. Countries in Europe are as well, and Israel, et cetera, and Australia also. But I still think we’re in the early stages, where a lot of countries—over a hundred or so, I think—have national AI strategies right now. I don’t think it’s as far along yet in terms of military applications or applications for government. I will say that, more broadly, because these technologies are developed in the commercial sector and are a lot more reasonably priced, I think there’s actually a lot of space for countries in the developing world, so to speak, to adopt them.
There are not as many barriers, I think, as when it’s a very expensive, super-specific military system. And so I think these technologies are actually diffusing quite rapidly, and pretty equally. I haven’t done extensive research into that—it’s a very good question—but my first gut reaction is that it can actually, not necessarily exacerbate the divide, but close the gap a little bit. A colleague of mine works a lot on health care and health systems in developing countries, and she works specifically with them to develop a lot of these technologies and finds that they actually adopt them quicker, because they don’t have all of these preconceived notions about what the systems and organizations should look like and are a lot more open to using some of these tools. But I will say, again, they are just tools. No technology is a silver bullet. And, again, being in the commercial sector, these technologies will diffuse a lot more rapidly than other kinds of military technologies. But it is something to be cognizant of, for sure. FASKIANOS: Thank you. I’m going to go next to Alice Somogyi. She’s a master’s student in international relations at the Central European University: Could you tell us more about the implications of deepfakes within the military sector and as a defense strategy? KAHN: Absolutely. I think influence operations in general are going to be increasingly part of the game, so to speak. It’s very visible in the case of Ukraine how important the information war was, especially in the early days of the conflict, and the United States did a very good job of releasing information early to allies and partners, et cetera, which made the global reaction time to the invasion so quick.
And so I think that was very unexpected, and it has shown—not to overstate it—the power that individuals and propaganda can have. I’m sure if you’ve studied warfare history, you’ve seen the impact of propaganda. It’s always been an element at play. I will just say this is another tool in the toolkit to make it a little more believable, to make these operations harder to counter and more efficient. And what’s really interesting, again, is how a lot of these technologies are going to be used together to make them more believable. Take creating deepfakes. The technology isn’t there yet to make them super believable, at least on a large scale that many people—that a state—could believe. But combining them with something like a cyberattack, to place them somewhere you would be a little more willing to believe them, I think, will be increasingly important. And we’ll see them combined in other ways that I can’t even imagine. And that goes back to one of the earlier questions we had about the proliferation of these technologies—their being commercial, and whether you can contain their use. You can’t, and that’s the hardest part, especially when it comes to software, where once you sell it, it can be used for whatever the buyer wants. And so there’s this creativity problem, where you can’t protect against every possible situation you don’t know about. So it has to be a little bit reactive. But I think there are measures that states and others can take to be a little bit proactive to protect against misuse. This isn’t specifically about deepfakes but about artificial intelligence in general.
There’s a space, I think, for confidence-building measures—informal agreements that states can come to in order to set norms and general rules of the road about expectations for artificial intelligence and other emerging technologies. These can be put in place before the technologies are used, so that when unexpected or never-before-seen situations arise, there isn’t a total absence of a game plan. There are processes to fall back on to guide how to handle the situation, without regulating so much so quickly that the rules become outdated. But I think it’ll definitely be the case that, as the technology develops, we’ll be seeing a lot more deepfakes. FASKIANOS: Yes. So Nicholas Keeley, a Schwarzman Scholar at Tsinghua University, has a question along these lines: The Ukrainian government and Western social media platforms were pretty successful at preempting, removing, and counteracting the Zelensky deepfake. How did this happen? He asks about the cutting-edge prevention measures against AI-generated disinformation that you just touched upon—but can you talk about this specific case, what we’re seeing now in Ukraine? KAHN: Yeah. I think Ukraine has been very, very good at using these tools in a way we haven’t seen before, and that’s largely why a lot of countries are now looking and watching and changing their tack when it comes to using them. These tools can seem kind of far off—what’s the benefit of using these newer technologies when we have things that are known and work? But Ukraine, being the underdog in this situation and knowing since 2013 that this was a future event that might happen, has been preparing—in particular through their digital ministry. I’m not sure what the exact title is, but they were able to mobilize very quickly.
It was originally set up to better digitize their government platforms and provide access to individuals, I think, through a phone app. But they had these experts who could work on, OK, how can we use digital tools to engage the public and engage the media. They militarized them, essentially. And so in the early days, a lot of people in that organization asked Facebook, asked Apple, et cetera, to either put sanctions on or put guardrails up. A lot of the early action—Twitter taking down media, et cetera—happened because this organization within Ukraine specifically made it its mission to do so, to work as the liaison with Silicon Valley, so to speak, and to engage the commercial sector so it could self-regulate and help the government do these sorts of things—which, I think, inevitably led to them catching the deepfake really quickly. But also, if you look at it, it’s pretty clear that it’s computer generated. It’s not great. So I think that, in part, was it. And, again, combining a deepfake with a cyberattack carries risks for the attacker: while it made the fake more realistic, defenders are practiced at noticing when a cyberattack has just occurred, more so than other things. But, absolutely. FASKIANOS: Thank you. I’m going to go next to Andrés Morana, who’s raised his hand. Q: Hi. Good afternoon. I’m Andrés Morana, a master’s student in international relations at Johns Hopkins SAIS. I wanted to ask you about AI, and maybe emerging technology as well, as it applies to the defense sector—and the need to reform the acquisitions process in parallel, which is notorious. As we think about AI and where these servers are hosted, a lot of commercial companies might come with some new shiny tech that could be great.
But if their servers are hosted someplace that’s easy to access, then maybe this is not great as applied to the defense sector. So I don’t know if you have thoughts on the potential, or the need, to reform the acquisitions process. Thank you. KAHN: Yeah, absolutely. This is some people’s favorite, favorite topic, because the process has become sort of a valley of death, right, where things go and they die. They don’t move. Of course, there are some bridges. But it is problematic for a reason. There have been a few efforts to create mechanisms to circumvent it. The Defense Innovation Unit has created some funding mechanisms to avoid it. But, overall, I do think it needs reform—though I don’t know what that looks like; I’m not nearly the expert on the acquisitions process specifically that a lot of folks are—and reform would make things a lot easier. China, for example—people talk about how it’s so far ahead on artificial intelligence, et cetera. I would argue that it’s not. It’s better at translating what it has in the civilian and academic sectors into the military sphere and at using and integrating that—at overcoming that gap. It does so with civil-military fusion. They can just say, we’re doing it this way, so it’s going to happen, whereas the United States doesn’t have that kind of ability. But I would say the United States has the academic and industrial sectors leading on artificial intelligence. Stanford recently put out its 2022 AI Index, which has some really great charts and numbers on how much research is being done in the world on artificial intelligence, in which countries and regions, and specifically who’s funding it—governments, academia, or industry. And the United States is still leading in industry and academia.
It’s just that the government has a problem tapping into that, whereas in China, for example, government funding is a lot greater and there’s a lot more collaboration across government, academia, and industry. And so I think that is right now the number-one barrier that I see. The second one, I’ll say, is accessing data and making sure you have all the bits and pieces that you need to be able to use AI. What’s the use of having a giant model—an algorithm that could do a million things—if you don’t have all of the data set up for it? And so those are the two organizational infrastructure problems that I’ll say are really hindering the U.S. when it comes to adopting these technologies. But, unfortunately, I do not have a solution for them. I would be super famous in this area if I did, but I do not, unfortunately. FASKIANOS: Thank you. I’m going to take the next question from Will Carpenter, a lecturer at the University of Texas at Austin, which also got an upvote: What are the key milestones in AI development and quantum computing to watch for in the years ahead from a security perspective? Who is leading in the development of these technologies—large-cap technology companies such as Google and ByteDance? Venture capital-backed private companies? Government-funded entities, et cetera? KAHN: Great question. I’ll say quantum is a little further down the line, since we do not yet have a really big quantum computer that can handle enough data. China is kind of leading in that area, so to speak, so it’s worth watching them. They’ve created, I think, the first quantum-encrypted communications line, and they’ve done a few other things along those lines. So keeping an eye on that will be important. But, really, just getting a computer large enough that it’s reasonable to use quantum will be the next big milestone there. And that’s quite a few years down the line.
But when it comes to artificial intelligence, I’ll say that interest and research in AI have come in waves and dips. They call them AI winters and AI springs—winter is when there’s not a lot of funding, and spring is when there is. Right now we’re in a spring, obviously, in large part because of breakthroughs in the 2010s in things like natural language processing and computer vision. And so I think continued milestones in those areas will be key. There are a few that I’ve worked on. There’s a paper right now—hopefully it will be out in the next few months—forecasting when AI and machine learning experts think those milestones will be hit. Some have already been hit, like having AI able to beat all the Atari games, or having AI able to play Angry Birds. And there are lots of those mini milestones, as well as bigger leaps than just the efficiency of these algorithms—things like artificial general intelligence. Some point to early steps in that direction, like creating one algorithm that can play a lot of different games: chess and Atari and Tetris. But, broadly speaking, that’s a little further down the line. I’ll say for the next few months and the next few years, it’ll probably be about making some of these algorithms more efficient—making them better, making them leaner, using a lot less data. I think we’ve largely hit the big ones, and so we’ll see these smaller milestones being achieved in the next few years. And I think there was another part to the question—let me just go look for what it was. Who’s developing these? FASKIANOS: Right. KAHN: I would say the large companies—Google, OpenAI, et cetera.
But I’ll say a lot of these models are open source, which means the models themselves are out there and available to anyone who wants to take them and use them. I’m sure you’ve seen how once DALL-E Mini appeared, you saw DALL-E 2 and other variants. They proliferate really quickly and they adapt, and that’s a large part of what’s driving the acceleration of artificial intelligence. It’s moving so quickly because there’s this culture of collaboration and sharing that companies are incentivized to participate in, where they just take the models, train them against their own data, and, if that works better, use it. And so those kinds of companies are all playing a part, so to speak. But I would say academia is still really pushing the forefront right now, which is really cool to see. So I think that means that if a lot more blue-sky basic research keeps being funded, we’ll see these advances continue. I’ll also say that when it comes to defense applications in particular, the challenge is that—more than is typical for artificial intelligence—these capabilities are being developed by niche, smaller startup companies that might not have the capacity that, say, a Google or a Microsoft has when it comes to working and contracting with the U.S. government. So that’s also a challenge. The acquisitions process is challenging at best even for the big companies; for these smaller companies that really do have great, specific applications for AI, I think it’s a significant challenge. So it’s basically everybody. Everyone’s working together, which is great. FASKIANOS: Great. I’m going to go next to DJ Patil. Q: Thanks, Irina. Good to see you. FASKIANOS: Likewise. Q: And thanks for this, Lauren.
So I’m DJ Patil, and I’m at the Harvard Kennedy School’s Belfer Center, as well as Devoted Health and Venrock Partners. Lauren, on the procurement side that you addressed a little bit: I’m curious what your advice to the secretary of defense would be around capabilities, specifically given the question of large language models, the efforts we’re seeing in industry, and how much separation in results we’re seeing even between industry and academia. The breakthroughs being reported are so stunning. And if we look at the datasets these companies are building on, they’re basically open, or there are copyright issues in there. Defense applications have very small datasets, and also, as you mentioned, on the procurement side there’s a lack of access to these capabilities. So what are the mechanisms, if you look across this from a policy perspective, by which we start tapping into those capabilities to ensure we remain competitive as the next set of iterations of these technologies takes place? KAHN: Absolutely. I think that’s a great question. I’ve done a little bit of work on this. When they were creating the Chief Digital and Artificial Intelligence Office, I think they had people brainstorming about what kinds of things we would like to see, and I think everyone agreed that they would love for it to get better access to data. If the defense secretary asks, can I have data on all the troop movements for X, Y, and Z, there are a lot of steps to go through to pull all that information. The U.S. defense enterprise is great at collecting data from a variety of sources—from the intelligence community, analysts, et cetera. And, of course, there are natural challenges built in with different levels of confidentiality, classifications, et cetera.
But I think being able to pull those together, clean that data, and organize it will be a key first step, and that is a big infrastructure, systems, and software challenge. A lot of it is actually getting hardware in the defense enterprise up to date, and a lot of it is making sure you have the right people. I think another huge one—the National Security Commission on AI, in its final report, said that the biggest hindrance to actually leveraging these capabilities is the lack of AI and STEM talent in the intelligence community and the Pentagon. There’s just a lack of people who, one, have the background and the vision to say, OK, this is even a possible tool that we can use, and who understand it; and then, once it’s there, people who can be trained to use these capabilities. So I think that’ll be a huge one. And there are efforts ongoing right now with the Joint Artificial Intelligence Center—the JAIC—to pilot AI educational programs for this reason, as a kind of AI crash course. But I think there needs to be a broader effort to encourage STEM graduates to go into government, and that can be done, again, by playing ball, so to speak, with this whole idea of open source. Of course, the Department of Defense can’t make all of its programs open and free to the public. But I think it can do a lot more to show that it’s a viable option for individuals working in these careers to address some of the same kinds of problems, and that it will also have the most up-to-date tech and resources and data as well. And I think right now it’s not evident that that’s the case. They might have a really interesting problem set, which has been shown to be attractive to AI PhD graduates and people like that.
But it doesn’t have the same draw—again, they’re not really promoting it, providing resources, and setting up their experts in the best way, so to speak, to be able to use these capabilities. FASKIANOS: Thank you. I’m going to take the next question from Konstantin Tkachuk, who wrote a question but also raised his hand. So if you could just ask your question, that would be best. Q: Yes. I’m happy to say it out loud. My name is Konstantin. I’m half Russian, half Ukrainian, connecting from the Schwarzman Scholarship at Tsinghua University. My question is more about how the industry as a whole has to react to what’s happening to the technology it is developing. In particular, I’m curious whether it’s the responsibility and interest of industry and policymakers to protect the technology from such misuse, and whether they actually have the control and responsibility to make these technology frameworks unusable for certain applications. Do you think this effort could be possible, given the resources and the amount of knowledge we have? And, more importantly, I would be curious about your perspective on whether countries have to collaborate on this for such an effort to be effective, or whether it should be based on incentive models inside countries. KAHN: Awesome. I think all of the above. Right now, because there’s relatively little understanding of how these technologies work, a lot of it is private companies self-regulating, which I think is a necessary component. But there are also now indications of efforts to work with governments on things like confidence-building measures and other mechanisms to develop transparency measures, testing and evaluation, and other guardrails against misuse.
There are different layers to this, and all of them are correct and all of them are necessary. For the specific applications themselves, there needs to be an element of regulation. At some point there needs to be a user agreement as well, when companies are selling technologies and capabilities, about how the buyer agrees to abide by the terms—you sign the terms of use, right. And then, of course, there are export controls that can be put on, and you can make the system itself incompatible with other kinds of systems that would make it dangerous. But I think there’s also definitely room, and necessary space, for interstate collaboration on some of these questions. For example, when you introduce artificial intelligence into military systems, you make them faster. You make the decision-making process a lot speedier, basically, and so the individual has to make quicker decisions. And when you introduce things like artificial intelligence into increasingly complex systems, you create the possibility for accidents to snowball, where one little decision can have a huge impact and end up in a mistake, unfortunately. And so you have that kind of situation when, heaven forbid, it’s in a battlefield context. Let’s say the adversary says, you intentionally shot down XYZ plane; and the individual says, no, it was an automated malfunction and we had an AI in charge of it. Who, in fact, is responsible now? If it was not an individual, does the blame shift up the pipeline? And so you’ve got problems like these. That’s just one example.
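The human-in-the-loop control that comes up in this discussion can be sketched as a simple approval gate: the machine may recommend, but nothing executes without an explicit human decision. The names and actions below are hypothetical, not any real system:

```python
# Sketch of a human-in-the-loop gate: an automated recommender proposes
# an action, but a human must explicitly approve it before anything runs.
# All names and actions are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float

def execute(rec: Recommendation, human_approval: bool) -> str:
    # The machine's confidence never substitutes for the human decision.
    if not human_approval:
        return f"BLOCKED: '{rec.action}' awaiting human review"
    return f"EXECUTED: '{rec.action}' (confidence {rec.confidence:.0%})"

rec = Recommendation(action="engage target X", confidence=0.97)
print(execute(rec, human_approval=False))
print(execute(rec, human_approval=True))
```

The design point is that accountability stays with a person: even a high-confidence recommendation is inert until approved, which is exactly the norm described as a building-block agreement for automated and nuclear systems.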
But, like, where you have increasingly automated systems and artificial intelligence that kind of shift how dynamics play out, especially in accidents—which traditionally require a lot of visibility—you have these technologies that are not so visible, not so transparent. You don’t really get to see how they work or understand how they think in the same way you can when, say, I press a button and you see the causality of that chain reaction. And so I think there is very much a need, because of that, for even adversaries—not necessarily just allies—to agree on how certain weapons will be used, and I think that’s why there’s this space for confidence-building measures. For example, a really simple one that everyone already agrees on is to have a human in the loop, right—human control—when we eventually use artificial intelligence and automated systems increasingly in a nuclear context, right, with nuclear weapons. I think everyone’s kind of on board with that. And so I think those are the kind of building-block agreements and establishment of norms that can happen and that need to take place now, before these technologies really start to be used. That will be essential to avoiding those worst-case scenarios in the future. FASKIANOS: Great. Thank you. I’m going to take the next question—a written question—from Alexander Beck, undergraduate at UC Berkeley. In the context of the military innovation literature, what organizational characteristics or variables have the greatest effect on adoption and implementation, respectively? KAHN: Absolutely. I’m not an organizational expert. However, I’ll say, like before, I think that’s shifting, at least from the United States perspective.
I think, for example, when the Joint Artificial Intelligence Center was created, the best advice was to create separate organizations that had the capability to enact their own agendas and to create separate programs for all of these to best foster growth. And so that worked for a while, right. The JAIC was really great at promoting artificial intelligence and raising it to a level of preeminence in the United States—a lot of early success in raising awareness, et cetera. But there was a little bit of confusion, a little bit of concern, over the summer when they did establish the Chief Digital and Artificial Intelligence Office—excuse me, a lot of acronyms—because it took over the JAIC. It subsumed the JAIC. There was a lot of worry about that, right. Like, we just established this great organization in 2019 and now they’re redoing it. And so I think they realized that as the technology develops, organizational structures need to develop and change as well. In the beginning, artificial intelligence was kind of seen as its own microcosm. But because it’s a general-purpose enabling technology, it touches a lot more, and so it needs to be thought of more broadly rather than just, OK, here’s our AI project, right. You need to better integrate it and situate it next to necessary preconditions like the food for AI, which is data, right. So they reorganized to ideally do that. They integrated it with research and engineering, which is the arm in the Defense Department that funds the basic research, and brought in people who understand policy as well. So they have all of these different arms now within this broader organization. And so there are shifts in the literature, I think, and there are different best cases for different kinds of technologies. But I’m not as familiar with where the literature is going now.
But the idea has kind of shifted, I think, even from 2018 to 2022. FASKIANOS: Thanks. We’re going to go next to Harold Schmitz. Q: Hey, guys. A great, great talk. I wanted to get your thoughts on AlphaFold, RoseTTAFold—DeepMind—and biological warfare and synthetic biology, that sort of area. Thank you. KAHN: Of course. I— Q: And, by the way—sorry—I should say I’m with the University of California Davis School of Management and also with the March Group—a general partner. Thank you. KAHN: So I’m really not familiar with much of the bio elements. I know it’s an increasing area of interest. But, at least in my research, taking a step back, I think it was hard enough to get people within the defense sector to acknowledge artificial intelligence. So I haven’t seen much of it in the debate recently, unfortunately, just because I think a lot of the defense innovation strategy, at least in the Biden administration, is focused directly on addressing the pacing challenge of China. And so they’ve mentioned biowarfare and biotechnology, as well as nanotechnology, et cetera, but not in as comprehensive a way as artificial intelligence and quantum—not in a way that lets me answer your question. I’m sorry. FASKIANOS: Thank you. I’ll go next to Alex, who has raised his hand—and you’ll have to give us your last name and identify yourself. Q: Hi. Yes. Thank you. I’m Alex Grigor. I just completed my PhD at the University of Cambridge. My research looks specifically at U.S. cyber warfare and cybersecurity capabilities, and in my interviews with a lot of people in the defense industry, their number-one complaint, I suppose, was just not getting the graduates applying to them the way that they had sort of hoped in the past. And if we think back to ARPANET and all the amazing innovations that have come out of the internet and can come out of defense, do you see a return to that?
Or do you see us now looking very much to procure from private industry, and how might that recruitment process change? They cited security clearances as one big impediment. But what else do you think could be done differently there? KAHN: Yeah. Absolutely. I think security clearances and all the bureaucratic things are a challenge, but even assuming an individual wants to work in government, I think right now if you’re working in STEM and you want to do research, having two years in government as a civilian—working in the Pentagon, for example—doesn’t necessarily allow you to jump back into the private sector and academia the way other jobs do. So I think that’s actually a big challenge: finding various mechanisms to make it a reasonable goal—not necessarily a career in government, but allowing people to come and go. I think that’ll be a significant challenge, and I think that’s in part about the ability to contribute to the research that we spoke about earlier. I mean, the National Security Commission has a whole strategy that they’ve outlined on it. I’ve seen, again, piecemeal efforts to overcome that, but no broad and sweeping reform as suggested by the report. I recommend reading it. It’s, like, five hundred pages long, but there’s a great section on the talent deficit. But, yeah, I think that will definitely be a challenge. Cyber is facing that challenge—really anything that touches STEM in general—especially because the AI, and in particular machine learning, talent pool is global, and so states actually are, interestingly, kind of fighting over this talent pool.
I’ve done research previously, at the University of Oxford, that looked at the immigration preferences of researchers, where they move, and things like that, and a lot of them are Chinese and studying in the United States. And they stay here, they move, et cetera. But a lot of it is actually also immigration and visas. And so other countries—China specifically—have made special visas for STEM graduates. Europe has done it as well. And so I think that will also be another element at play. There are a lot of these efforts to attract more talent. I mean, one of the steps that was tried was the Quad Fellowship that was established through the Indo-Pacific strategy. But, again, that’s only going to be for a hundred students. And so there needs to be a broader effort to facilitate the flow of experts into government. To your other point—is this what it’s going to look like now, with the private sector driving the bus—I think it will be for the time being, unless DARPA, the defense agencies’ research arms, and DOD change the acquisition process and, again, are able to get that talent. If something changes, then I think defense will again be able to contribute in the way that it has in the past. I think that’s important, too, right. There were breakthroughs in cryptography, and, again, the internet all came from defense initially. And so I think it would be really sad if that were not the case anymore. Right now we’re talking about being able to cross that bridge and work with the private sector, and I think that will be necessary. But I hope it doesn’t go so far that defense becomes entirely reliant, because I think DOD will need to be self-sufficient—another kind of ecosystem to generate research and applications—and not all problems can be addressed by commercial applications. It’s a very unique problem set that defense and militaries face.
And so I think there will need to be a balance. Right now it’s a little bit heavy on the push of, OK, we need to better work with the private sector. But I think, hopefully, if it moves forward it will balance out again. FASKIANOS: Lauren, do you know how much money DOD is allocating toward this in the overall budget? KAHN: Off the top of my head, I don’t know. It’s a few billion—it’s, like, a billion. I’d have to look it up. In the fiscal year 2023 budget request there was the highest amount ever requested for research and engineering and testing and evaluation. I think it was—oh, gosh, it was a couple hundred million dollars—but it was a huge increase from the last year. So it’s an increasing priority, but I don’t have the specific numbers. People talk about China funding more; I think it’s about the same. But it’s increasing steadily across the board. FASKIANOS: Great. So I’m going to give the final question to Darrin Frye, who’s an associate professor at Joint Special Operations University in the Department of Strategic Intelligence and Emergent Technologies, and his is a practical question. Managing this type of career, how do you structure your time between researching and learning about the intricacies of complex technologies, such as quantum entanglement or nano-neuro technologies, versus informing leadership and interested parties about the anticipated impact of emergent technologies on the future military operational environment? And maybe you can throw in there why you went into this field and why you settled on it, too. KAHN: Yeah. I love this question. I have always been interested in the militarization of science and how wars are fought, because I think it allows you to study a lot of different elements. I think it’s very interesting working at the intersection.
I think, broadly speaking, a lot of the problems the world is going to face moving forward are these transnational, large problems that will require academia, industry, and government to work on together—climate change, all of these emerging technologies, global health, as we’ve seen over the past few years. And so it’s a little bit of striking a balance, right. I came from a political science background, an international relations background, and I did want to talk about the big picture. And I think there are individuals working on these problems and recognizing them. But in that, I noticed that I’m speaking a lot about artificial intelligence and emerging technologies, and I’m not from an engineering background. And so I, personally, am, for example, doing a master’s in computer science right now at Penn in order to shore up those deficiencies and that lack of knowledge in my sphere. I can’t learn everything. I can’t be a quantum expert and an AI expert. But I think having the baseline understanding, and taking a few of those courses more regularly, has meant that when a new technology shows up, I know how to learn about that technology, which, I think, has been very helpful—speaking both languages, so to speak. I don’t think anyone’s going to be a master of both—you can barely be a master of one. But I think it will be increasingly important to spend time learning about how these things work, and I think just getting a background in coding can’t hurt. And so it’s definitely something you need to balance. I would say I’m probably balanced more toward what the implications of this are, more broadly, since if you’re talking at such a technical level it doesn’t necessarily help people without that background to get into the nitty-gritty. It can get jargony very quickly, as I’m sure you understood even listening to me.
And so I think there’s a benefit to learning about it, but also to making sure you don’t get too far into the weeds. I think there’s a lot of space for people who understand both, who can then bring in the people who are experts, for example, on quantum entanglement or nanotechnology, so that when they’re needed they can come in and speak to people in a policy setting. So there definitely is room, I think, for intermediaries—people who sit in between the policy experts and, of course, the highly specialized expertise, which is definitely, definitely important. It’s hard to balance, but I think it’s very fun as well, because you get to learn a lot of new things. FASKIANOS: Wonderful. Well, with that we are out of time. I’m sorry that we couldn’t get to all the written questions and raised hands. But, Lauren Kahn, thank you very much for this hour, and to all of you for your great questions and comments. You can follow Lauren on Twitter at @Lauren_A_Kahn and, of course, go to CFR.org for op-eds, blogs, and insight and analysis. The last academic webinar of this semester will be on Wednesday, November 16, at 1:00 p.m. (EST). We will be talking with Susan Hayward, who is at Harvard University, about religious literacy in international affairs. So, again, I hope you will all join us then. Lauren, thank you very much. And I just want to encourage the students and professors on this call to look at our paid internships and fellowships. You can go to CFR.org/careers for information on both tracks. Follow us at @CFR_Academic and visit, again, CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. So thank you all, again. Thank you, Lauren. Have a great day. KAHN: Thank you so much. Take care. FASKIANOS: Take care.