
Digital and Cyberspace Policy Program

The Digital and Cyberspace Policy program addresses one of the most challenging issues facing the country in the twenty-first century: keeping the global internet open, secure, and resilient in the face of unprecedented threats. The program informs policymakers, business leaders, and the general public about the politics of cyberspace through briefings, reports, publications, and podcasts.

Program Experts

Program Director

Adam Segal

Ira A. Lipman Chair in Emerging Technologies and National Security and Director of the Digital and Cyberspace Policy Program

Jared Cohen

Adjunct Senior Fellow

James P. Dougherty

Adjunct Senior Fellow for Business and Foreign Policy

Richard A. Falkenrath

Senior Fellow for National Security

David P. Fidler

Senior Fellow for Global Health and Cybersecurity

Lauren Kahn

Research Fellow

Catherine Powell

Adjunct Senior Fellow for Women and Foreign Policy

Matthew C. Waxman

Adjunct Senior Fellow for Law and Foreign Policy

Tarah Wheeler

Senior Fellow for Global Cyber Policy

  • China

    This symposium convenes senior government officials and experts from academia and the private sector to address the U.S. Department of State’s newly created Bureau of Cyberspace and Digital Policy, the goals of American cyber diplomacy, and how major public and private international stakeholders can advance global cyber cooperation amidst threats from authoritarian states like Russia and China. The John B. Hurford Memorial Lecture was inaugurated in 2002 in memory of CFR member John B. Hurford, and features individuals who represent critical new thinking in international affairs and foreign policy.
  • Robots and Artificial Intelligence

    Lauren Kahn, research fellow at CFR, leads the conversation on AI military innovation and U.S. defense strategy.   FASKIANOS: Thank you, and welcome to today’s session of the Fall 2022 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach at CFR. Today’s discussion is on the record, and the video and transcript will be available on our website CFR.org/Academic if you would like to share it with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy. We’re delighted to have Lauren Kahn with us to talk about AI military innovation and U.S. defense strategy. Ms. Kahn is a research fellow at CFR, where she focuses on defense, innovation, and the impact of emerging technologies on international security. She previously served as a research fellow at Perry World House at the University of Pennsylvania’s global policy think tank where she helped launch and manage projects on emerging technologies and global politics, and her work has appeared in Foreign Affairs, Defense One, Lawfare, War on the Rocks, Bulletin of the Atomic Scientists, and the Economist, just to name a few publications. So, Lauren, thanks very much for being with us. I thought we could begin by having you set the stage of why we should care about emerging technologies and what do they mean for us in—as we look ahead in today’s world. KAHN: Excellent. Thank you so much for having me. It’s a pleasure to be here and be able to speak to you all today. So I’m kind of—when I’m setting the stage I’m going to speak a little bit about recent events and current geopolitical situations and why we care about emerging technologies like artificial intelligence, quantum computing—things that seem a little bit like science fiction but are now coming into realities and how our military is using them. And then we’ll get a little bit more into the nitty gritty about U.S. 
defense strategy, in particular, and how they’re approaching adoption of some of these technologies with a particular focus on artificial intelligence, since that’s what I’m most interested in. Look, awesome. Thank you so much for kicking us off. So I’ll say that growing political competition between the United States, China, and Russia is increasing the risk of great power conventional war in ways that we have not seen since the end of the Cold War. I think what comes to everyone’s mind right now is Russia’s ongoing invasion of Ukraine, which is the largest land war in Europe that we’ve seen since World War II, and the use of a lot of these new emerging capabilities. And so I’ll say for the past few decades, really, until now we thought about war as something that was, largely, contained to where it was taking place and the parties particularly involved, and most recent conflicts have been asymmetric warfare limited to terrestrial domains. So, on the ground or in the air or even at sea, where most prominent conflicts were those between nation states and either weak states or nonstate actors, like the U.S.-led wars in Afghanistan and Iraq or intervention in places like Mali and related conflicts as part of the broader global war on terrorism, for example. And so while there might have been regional ripple effects and dynamics that shifted due to these wars, any spillover from these conflicts was a little bit more narrow or due to the movement of people themselves, for example, in refugee situations. I’ll say, however, that the character of wars is shifting in ways that are expanding where conflicts are fought and where they take place and who is involved, and a large part of this, I think, is due to newer capabilities and emerging technologies. 
I’ll say it’s not entirely due to them, but I think that there are some things, like, the prominence of influence operations, and misinformation, deep fakes, artificial intelligence, commercial drones, that have made access to high-end technology very cheap and accessible for the average person, which has meant that these wars are going to be fought in kind of new ways. We’re seeing discussion of things like information wars where things are being fought on TikTok and social media campaigns where individuals can kind of film what’s happening on the ground live and kind of no longer do states have, so to speak, a monopoly on the dissemination of information. I’ll speak a little bit more about some of the examples of technologies that we’re seeing. But, broadly speaking, this means that the battlefield is no longer constrained to the physical. It’s being fought in cyberspace, even in outer space, with the involvement of satellites and the reliance on satellite imagery and open source satellite imagery like Google Maps and, again, in cyberspace. And so as a result, it’ll not only drive new sectors and new actors kind of into the fray when it comes to fighting wars, and militaries have been preparing for this for quite a while. They’ve been investing in basic science research and development, testing and evaluation in all of these new capabilities, from artificial intelligence, robotics, quantum computing, hypersonics. And these have been priorities for a few years but I’ll say that the conflict in Ukraine and the way that we’re seeing these technologies are being used has really kind of put a crunch on the time frame that states are facing, and I’m going to speak a little bit more about that in a minute. But to kind of give you an example of what are—what does it mean to use artificial intelligence on the battlefield—what do these kind of look like, there’s—largely, my work before this conflict was a little hypothetical. It was hard to kind of point to. 
But I think now, as these technologies mature, you’re seeing that they’re being used in more ways. So artificial intelligence, for example, are used to create—has been used by Russia to create deep fakes. There was a very famous one of President Zelensky that they used that they then combined with a cyberattack to put it at a very—to put it on national news in Ukraine, to make it look a little bit more believable even though the deep fake itself, it was a little, like, OK, they could tell it was computer generated. These are kind of showing how some of these technologies are evolving and, especially when combined with other kinds of technological tools, are going to be used to kind of make some of these more influence operations and propaganda campaigns a little bit more persuasive. Other examples of artificial intelligence, there’s facial recognition technology being used to identify civilians and casualties, for example. They’re being used to—they’re using natural language processing, which is a type of artificial intelligence that kind of analyzes the way people speak. You think of Siri. You think of chat bots. But more advanced versions being used to kind of read in radio transmissions and translate them and tag them so that they’re able to—that forces are able to go through more quickly and identify what combatants are saying. There’s the use of 3D printing and additive manufacturing where individuals are able to for very cheap—a 3D printer costs a couple—a thousand dollars and you can get it for maybe less if you build it yourself. You can add—you can add different components to grenades to make—and then people are taking smaller commercial drones to kind of make a MacGyvered smart bomb that you can maneuver. So those are some of the kind of commercial technologies that are being pulled into the kind of military sphere and into the battlefield. They might not be large. They might not be military in its first creation. 
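The natural language processing use Kahn describes, tagging transcribed radio traffic so analysts can triage it faster, can be pictured as a tiny classification pipeline. The sketch below is purely illustrative: it uses keyword matching instead of a trained language model, and every category name and keyword is invented for the example.

```python
# Toy illustration of the tagging idea described above: label transcribed
# transmissions by topic so analysts can sort through them more quickly.
# A real system would use a trained NLP model, not keyword matching;
# these categories and keywords are made up for the example.

TAGS = {
    "logistics": {"fuel", "resupply", "convoy"},
    "fires": {"artillery", "strike", "target"},
    "movement": {"advance", "withdraw", "crossing"},
}

def tag_transmission(text: str) -> list[str]:
    """Return the sorted list of tags whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(tag for tag, keywords in TAGS.items() if words & keywords)

messages = [
    "convoy needs fuel before dawn",
    "artillery strike on grid north",
    "units advance to the crossing",
]
tagged = {m: tag_transmission(m) for m in messages}
```

Even this crude version shows where the value lies: the machine does the repetitive first pass, and the human analyst only reads what matters.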
But because they’re so general purpose technologies—they’re dual use—they’re being developed in the private sector and you’re seeing them being used on the battlefield and weaponized in new ways. There are other technologies that are more based originally in the military and defense kind of sectors and who’s created them, things like loitering munitions, which we’re seeing more of now, and a little—a lot more drones. I’m sure a lot of you have been seeing a lot of—about the Turkish TB2 drones and the Iranian drones that are now being used by Russia in the conflict. And these are not as new technologies. We’ve seen them. They’ve been around for a couple of decades. But they’re reaching a maturity in their technological lifecycle where they’re a lot more cheap and they’re a lot more accessible and they’re a lot more familiar now that they’re being used in innovative and new ways. They’re being seen as less precious and less expensive. And so not that they’re being used willy nilly or that they’re expendable but militaries, we’re seeing, are willing to use them in more flexible ways. And so, for example, Ukraine, in the early days of the campaign, there were some—allegedly, Ukraine used it as—the TB2 as a distraction when it wanted to sink a war ship rather than actually using it to try and sink the war ship itself. And so using it for things that they’re good for but maybe not the initial thought or the initial what they were designed to be used for. Iran—I mean, excuse me, Russia, now using the Iranian-made loitering munitions. They’re pretty reasonable in price. They’re about $20,000 a pop, and so using them in swarms to be able to take out some of the Ukrainian infrastructure has been a pretty good technique. Ukraine, for example, is very good at shooting them down. I think they were reporting at some point they had an ability to shoot them down at a rate of around 85 percent to 90 percent. 
And so the swarms weren’t necessarily all of them were getting through but because they’re so reasonably priced it was still—it was still a reasonable tactic and strategy to take. There’s even some kind of more cutting edge, a little bit more unbelievable, applications like now being touted as an Uber for artillery, where you’re using similar kind of algorithms that Uber uses to kind of identify which passengers to pick up first and where to drop them off, about how to target artillery systems—what target is most efficient to hit first. And so we’re seeing a lot of these technologies being used, like I said, in new and practical ways, and it’s really condensed the timeline that, I think, states are seeing, especially the United States—that they want to adopt these technologies. Back in 2017, Vladimir Putin famously stated that he believed that whoever became leader in AI would become leader of the world, and China has very much publicized their plans to invest a lot more in AI research and development, to invest in bridging the gaps between its civil and military engineers and technologists to take advantage of AI by the year 2023. So we’ve got about one more year to go. And so I think that the United States, recognizing this, the time crunch has been—the heat is on, so to speak, for adopting some of these newer capabilities. And so we’re seeing that a lot now. There’s a lot of reorganization happening within the Department of Defense to kind of better leverage and better adapt in order to take advantage of some of these technologies. There’s the creation of the new Chief Digital and Artificial Intelligence Office, the new emerging capabilities policy office, that are efforts in order to better integrate data systems and ongoing projects in the Department of Defense, et cetera, to implement it for broader U.S. strategy. There’s been efforts as well to partner with allies in order to develop artificial intelligence. 
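The swarm economics Kahn describes can be checked with quick arithmetic: if each loitering munition costs roughly $20,000 and defenders intercept 85 to 90 percent, the attacker’s cost per munition that gets through is still modest compared to the infrastructure it can destroy. The figures below are the rough numbers from the conversation, not official data.

```python
# Back-of-the-envelope check on the swarm economics described above.
# Assumed inputs (from the conversation, not official figures):
# ~$20,000 per loitering munition, 85-90 percent interception rate.

def cost_per_leaker(swarm_size: int, unit_cost: float, intercept_rate: float):
    """Return (expected leakers, attacker cost per leaker) for one swarm."""
    leakers = swarm_size * (1 - intercept_rate)          # munitions expected through
    return leakers, (swarm_size * unit_cost) / leakers   # total spend per hit

for rate in (0.85, 0.90):
    leakers, cost = cost_per_leaker(swarm_size=20, unit_cost=20_000,
                                    intercept_rate=rate)
    print(f"intercept {rate:.0%}: {leakers:.1f} leakers, ${cost:,.0f} per leaker")
```

Even at a 90 percent shoot-down rate, a 20-drone swarm still lands about two munitions for a total outlay of $400,000, which is why the tactic remains, in Kahn’s words, reasonable.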
I mean, as part of the Indo-Pacific strategy that the Biden administration announced back in February of 2022 they announced that along with the Quad partners—so Japan, Australia, and India—they are going to fund research, for example, for any graduates from any of those four countries to come study in the United States if they focused on science, technology, engineering, and mathematics, and so to foster that integration and collaboration between our allies and partners to better take use of some of these things. I’ll say, even so, recently, in April 2022, for example, I think, looking at how Ukraine was using a lot of these technologies, the United States was able to fast track one of its programs. It was called the Phoenix Ghost. It’s a loitering munition. It’s still not very well known. But, for example, the United States saw the capabilities requirement that Ukraine had and fast tracked its own program in order to fulfill that. So they’re being used for the first time. So, again, we’re seeing that the United States is kind of using this as an opportunity to learn as well as to really take advantage and start kicking into high gear AI in defense innovation development. And so I’ll say that’s not without its challenges, the acquisitions process in particular. So how the United States—how the Department of Defense takes a program from research and development all the way to an actual capability that it’s able to use on the battlefield. Before, in the 1950s, it used to take maybe five years; now it takes a few decades. There’s a lot of processes in between that make it a little bit challenging. All these sorts of checks and balances are in place, which are great, but they have slowed down the process a little bit. And so it’s harder for the smaller companies and contractors that are driving a lot of the cutting-edge research in these fields to work with the defense sector. 
And so there are some of these challenges, which, hopefully, some of this reorganization that the Pentagon is doing will help us. But that’s the next step, looking forward. And so that’s going to, I think, be the next big challenge that I’m watching for the—over the rest of this year and the next six months. But I think I threw a lot out there but I’m happy to open it for questions now and focus on anything in particular. But I think that gave an overview of some of the things that we’re seeing now. FASKIANOS: Absolutely. That was insightful and a little scary—(laughs)—and look forward now to everybody’s questions. As a reminder, after two and a half years of doing this, you can click on the raise hand icon on your screen to ask a question, and on an iPad or Tablet click the more button to access the raise hand feature. When you’re called upon, please accept the unmute prompt and state your name and affiliation. You can also submit a written question via the Q&A icon, and please include your affiliation there, and we are going to try to get through as many questions as we can. All right. So the first question—raised hand comes from Michael Leong. Q: Hi. Is this working? FASKIANOS: It is. Please tell us your affiliation. Q: Hi. My name is Michael Leong. I’m an MPA student in public administration at the University of Arizona in Tucson. And I just have a question about, basically, with the frequent use and successful use of drones in Ukraine is there any concern domestically about—because of how easily they are adapting such accessible technology to warfare that those can be used maliciously domestically and what steps they might be considering. Thanks. KAHN: Absolutely. That’s a great question. I think it’s broader than just drones as well when you have this proliferation of commercial technology into defense space and you have these technologies that are not necessarily, for example, weapons, right. So for—I think a good example is Boston Dynamics. 
They make this quadruped robot with four legs. It looks kind of like a dog. His name is Spot. And he’s being used in all sorts of commercial applications—helping local police forces, et cetera—for very benevolent uses. However, there’s been a lot of concern that someone will go and, essentially, duct tape a gun to Spot and what will that kind of mean. And so I think it’s a similar kind of question when you have some of these technologies, again, that aren’t—it depends on how you use them and so it’s really up to the user. And so when you get things like commercial drones, et cetera, that you’re seeing that individuals are using for either reconnaissance or, again, using in combination with things like 3D printing to make weapons and things like that, it is going to be increasingly, increasingly difficult to control the flow. We’ve seen Professor Michael Horowitz over at the University of Pennsylvania, who’s now in government, he’s done a lot of research on this and you see that the diffusion of technologies happens a lot—a lot quicker when they’re commercially based rather than when they’re from a military origination. And so I think it’s definitely going to pose challenges, especially when you get things like software and things like artificial intelligence, which are open source and you can use from anywhere. So controlling export and controlling after the fact how they’re used is going to be extremely difficult. A lot of that right now is currently falling to the kind of companies who are producing them to self-regulate since they have the best, like, ability to kind of limit access to certain technologies. Like, for example, OpenAI. If any of you have played with DALL-E 2 or DALL-E Mini, the image generating prompt sandbox tool—they have limited what the public can access—certain features, right—and are testing themselves to see, OK, how are these being used maliciously. 
I think a lot of them are testing how they’re being used for influence operations, for example. And so making sure that some of those features that allow that to be more malicious they’re able to regulate that. But it is going to be extremely hard and the government will have to work hand in hand with a lot of these companies and private actors that are developing these capabilities in order to do that. But it’s a very great question and it is not one that I have a very easy answer to on how to address that. But it is, like, something that I’ve been thinking about a lot. FASKIANOS: Thank you. I’m going to take the next question from Arnold Vela, who’s an adjunct faculty at Northwest Vista College. What is the potential value of AI for strategy, e.g., war planning, versus tactical uses? KAHN: Great. So I think—honestly, I think a lot of artificial intelligence the benefit is replacing repetitive human—repetitive redundant tasks, right. So it’s not replacing the human. It’s making the human be more efficient by reducing things like data entry and cleaning and able to pull resources from all together. And so it’s actually already being used, for example, in war planning and war gaming and things like that and Germany and Israel have created things to make 3D AI to create sort of 3D battlefields where they can see all the different kind of inputs of information and sensors. And so I think that’s really where the value add—the competitive advantage of artificial intelligence is. It’s not necessarily—having an autonomous drone is very useful but I think what will really be the kind of game changer, so to speak, will be in making forces more efficient and both have a better sense of themselves as well as their adversaries, for example. 
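On the tactical side, the “Uber for artillery” idea mentioned earlier is, at bottom, an assignment problem: match shooters to targets so the total engagement cost (time, distance, risk) stays low. The greedy sketch below is a toy illustration with made-up cost numbers; real dispatch and targeting systems would use optimal solvers such as the Hungarian algorithm rather than this heuristic.

```python
# Toy greedy assignment, illustrating the class of dispatch algorithms
# described above. Rows are shooters, columns are targets, and each entry
# is a notional engagement cost. All numbers here are invented.

def greedy_assign(cost: list[list[float]]) -> dict[int, int]:
    """Pair each shooter (row) with a target (column), cheapest pair first."""
    pairs = sorted(
        (c, s, t)
        for s, row in enumerate(cost)
        for t, c in enumerate(row)
    )
    used_shooters, used_targets, assignment = set(), set(), {}
    for c, s, t in pairs:
        if s not in used_shooters and t not in used_targets:
            assignment[s] = t
            used_shooters.add(s)
            used_targets.add(t)
    return assignment

# 3 shooters x 3 targets, notional costs
cost_matrix = [
    [4.0, 1.0, 3.0],
    [2.0, 0.5, 5.0],
    [3.0, 2.0, 2.0],
]
plan = greedy_assign(cost_matrix)
```

Greedy matching is not always optimal, which is exactly why the efficiency gains Kahn points to come from better algorithms and better data rather than from the weapons themselves.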
And so, definitely, I think, I’m more in the background with the nonsexy—the data cleaning and all the numbers bit will be a lot more important, I think, than the having a drone with encased AI capabilities, even though those kind of suck the oxygen out a little bit because it’s really exciting. It’s shiny. It’s Terminator. It’s I, Robot-esque, right? But I think a lot of it will be the making linguists within the intelligence community able to process and translate documents at a much faster pace. So making individuals’ lives easier, I think. So definitely. FASKIANOS: Great. Thank you. I’m going to go next to Dalton Goble. Please accept the unmute. Q: Thank you. FASKIANOS: There you go. Q: Hi. I’m Dalton. I’m from the University of Kentucky and I’m at the Patterson School for Diplomacy and International Commerce. Thank you for having this talk. I really wanted to ask about the technology divide between the developed and developing world, and I wanted to hear your comments about how the use of AI in warfare and the technologies such as—and their proliferation can exacerbate that divide. KAHN: Absolutely. I actually think, we’re—I think that I’ve been focusing a lot on how the U.S. and China and Russia, in particular, have been adopting these technologies because they’re the ones that are investing in it the most. I mean, countries in Europe are as well and, Israel, et cetera, and Australia also. Except I still think we’re in those early stages where a lot of countries—I think, over a hundred or something—have national AI strategies right now. I don’t think it’s as far along yet in terms of its—at least its military applications or applications for government. I will say that, more broadly, I think, again, because these technologies are developed in the commercial sector and are a lot more reasonably priced, I think there’s actually a lot of space for countries in the developing world, so to speak, to adopt these technologies. 
There’s not as many barriers, I think, when it’s, again, necessarily a very expensive, super specific military system. And so I think that it’s actually quite diffusing rapidly in terms—and pretty equally. I haven’t done extensive research into that. It’s a very good question. But my first gut reaction is that it actually can—it actually can help kind of speak—not necessarily exacerbate the divide but kind of close the gap a little bit. A colleague of mine works a lot in health care and in health systems in developing countries and she works specifically with them to develop a lot of these technologies and find that they actually adopt them quicker because they don’t have all of these existing preconceived notions about what the systems and organizations should look like and are a lot more open to using some of these tools. But I will say, again, they are just tools. No technology is a silver bullet, and so I think that, again, being in the commercial sector these technologies will diffuse a lot more rapidly than other kind of military technologies. But it is something to be cognizant of, for sure. FASKIANOS: Thank you. I’m going to go next to Alice Somogyi. She’s a master’s student in international relations at the Central European University. Could you tell us more on the implications of deep fakes within the military sector and as a defense strategy? KAHN: Absolutely. I think influence operations in general are going to be increasingly part of the—part of the game, so to speak. I mean, I mentioned there’s going to be—it’s very visible to see in the case of Ukraine about how the information war, especially in the early days of the conflict, was super, super important, and the United States did a very good job of releasing information early to allies and partners, et cetera, to kind of make the global reaction time to the invasion so quick. 
And so I think that was a lot—very unexpected and I think has shown just—not to overstate it but the power of individuals and that a lot of propaganda will have. We’ve known—I’m sure if you studied warfare history, you can see the impact of propaganda. It’s always been—it’s always been an element at play. I will just say it’s another tool in the toolkit to make it a little bit more believable, to make it harder, to make these more efficient, and I think what’s really, really interesting, again, is how a lot of these technologies are going to be worked together to kind of make them more believable. Like, again, creating deep fakes. The technology isn’t there yet to make them super believable, at least on a—like, a large scale that many people at—that a state could believe. But combining them with something like a cyberattack, to place that in a place that you would have a little bit more—more willing to believe it, I think, will be increasingly important. And we’ll see it, I’m sure, combined in other ways that I can’t even imagine. And that goes back to one of the earlier questions we had about the proliferation of these technologies and, like, it being commercial and being able to contain the use and you can’t, and that’s the hardest part. And I think that especially when it comes to software and things where once you sell it out there they can use it for whatever they want. And so it’s this kind of creativity where you can’t prevent against any possible situation that you don’t know. So it has to be a little bit reactive. But I think there are measures that states and others can take to be a little bit proactive to protect against the use. This isn’t specifically about deep fakes but about artificial intelligence in general. 
There’s a space, I think, for confidence-building measures so informal agreements that states can kind of come to to set norms and kind of general rules of the road about, like, expectations for artificial intelligence and other kind of emerging technologies that they can put in place before they’re used so that when situations that are unexpected or have never seen before arise that there’s not—there’s not totally no game plan, right. There’s a kind of things and processes to kind of fall back on to guide how to advance and work on that situation without having to—without regulating too much too quickly that they become outdated very quickly. But I think it’ll definitely be as the technology develops that we’ll be using a lot more deep fakes. FASKIANOS: Yes. So Nicholas Keeley, a Schwarzman Scholar at Tsinghua University, has a question that goes along these lines. Ukrainian government and Western social media platforms were pretty successful at preempting, removing, and counteracting the Zelensky deep fake. How did this happen? I mean, he’s—asks about the cutting-edge prevention measures against AI-generated disinformation today that you just touched upon. But can you just talk about the Ukrainian—this specific what we’re seeing now in Ukraine? KAHN: Yeah. I think Ukraine has been very, very good at using these tools in a way that we haven’t seen before and I think that’s, largely, why a lot of these countries now are looking and watching and are changing their tack when it comes to using these. Again, they seem kind of far off. Like, what’s the benefit of using these newer technologies when we have things that are known and work. But I think Ukraine, kind of being the underdog in this situation and knowing since 2013 that this was a future event that might happen has been preparing, I think, in particular, their digital minister. I’m not sure what the exact title was, but they were able to mobilize that very quickly. 
It was originally set up to better digitize their government platforms and provide access to individuals, I think, on a phone app. But then they had these experts that work on how—OK, how can we use digital tools to kind of engage the public and engage media. I think when they—they militarized them, essentially. And so I think a lot of the early days, asking for—a lot of people in that organization asked Facebook, asked Apple, et cetera, to either put sanctions, to put guardrails up. You know, a lot of the early, like, Twitter, taking down the media, et cetera, was also engaged because specifically this organization within Ukraine made it their mission to do so and to kind of work as the liaison between Silicon Valley, so to speak, and to get—and to engage the commercial sector so they could self-regulate and help kind of the government do these sort of things, which, I think, inevitably led to them catching the deep fake really quickly. But also, if you look at it, it’s pretty—it’s pretty clear that it’s computer generated. It’s not great. So I think that, in part, was it and, again, in combination with a cyberattack you could then notice that there was a service attack. And so, while it made it more realistic, there’s also risks about that because they’re practiced in identifying when a cyberattack just occurred, more so than other things. But, absolutely. FASKIANOS: Thank you. I’m going to go next to Andrés Morana, who’s raised his hand. Q: Hi. Good afternoon. I’m Andrés Morana, affiliated with Johns Hopkins SAIS International Relations. Master’s degree. I wanted to ask you about AI and then maybe emerging technology as well. But I think artificial intelligence, as it applies to kind of the defense sector, like, the need to also at the same time reform in parallel the acquisitions process, which is notorious for—as we think about AI kind of where these servers are hosted a lot of commercial companies might come with maybe some new shiny tech that could be great. 
But if their servers are hosted in maybe a place that’s so easy to access then maybe this is not great, as it applies to that defense sector. So I don’t know if you have thoughts on maybe the potential to reform or the need to reform the acquisitions process. Thank you. KAHN: Yeah, absolutely. I mean, this is some people’s, like, favorite, favorite topic on this because it has become sort of a valley of death, right, where things go and they die. They don’t—they don’t move. Of course, there’s some bridges. But it is problematic for a reason. There’s been a few kind of efforts to create mechanisms to circumvent that. The Defense Innovation Unit has created some kind of funding mechanisms to avoid it. But, overall, I do think it needs—I don’t know what that looks like. I’m not nearly an expert on specifically the acquisitions process that a lot of folks are. But it is pretty—it would make things a lot easier. China, for example, people are talking about, oh, it’s so far ahead on artificial intelligence, et cetera, et cetera. I would argue that it’s not. It’s better at translating what it has in the civilian and academic sectors into the military sphere and being able to use and integrate that. And so overcome that gap. It does so with civil-military fusion. You know, they can kind of do—OK, well, we’re saying we’re doing it this way so it’s going to happen, whereas the United States doesn’t have that kind of ability. But I would say the United States has all the academic and industry leading on artificial intelligence. Stanford recently put out their 2022 AI Index that has some really great charts and numbers on this about how much—how much research is being done in the world on artificial intelligence and which countries and which regions and specifically who’s funding that, whether it’s governments, academia, or industry. And the United States is still leading in industry and academia. 
It’s just that the government has a problem tapping into that, whereas China, for example, its government funding is a lot greater and there’s a lot more collaboration across government, academia, and industry. And so I think that is right now the number-one barrier that I see. The second one, I’ll say, is accessing data and making sure you have all the bits and pieces that you need to be able to use AI, right. What’s the use of having a giant model that—an algorithm that could do a million things if you don’t have all of the data set up for it. And so those are the two kind of organizational infrastructure problems that I’ll say are really hindering the U.S. when it comes to kind of adopting these technologies. But, unfortunately, I do not have a solve for it. I would be super famous in the area if I did, but I do not, unfortunately. FASKIANOS: Thank you. I’m going to take the next question from Will Carpenter, a lecturer at the University of Texas at Austin. Also got an up vote. What are the key milestones in AI development and quantum computing to watch for in the years ahead from a security perspective? Who is leading in the development of these technologies—large cap technology companies such as Google, ByteDance? Venture capital-backed private companies, government-funded entities, et cetera? KAHN: Great. Great question. I’ll say for quantum, quantum is a little bit more down the line since we do not have a quantum computer, like, a really big quantum computer yet that can handle enough data. China’s kind of leading in that area, so to speak. So it’s curious to watch them. They’ve created their first, I think, quantum-encrypted communications line and they’ve done a few works on that. So I think to keep an eye on that will be important. But, really, just getting a computer large enough that it’s reasonable to use quantum, I think, will be the next big milestone there. But that’s quite a few years down the line. 
But when it comes to artificial intelligence, I’ll say that artificial intelligence has had waves and kind of divots in interest and then research. They call them AI winters and AI springs. Winter is when there’s not a lot of funding and spring is when there is. Right now we’re in a spring, obviously, and it was in large part because of breakthroughs in, like, the 2010s in things like natural language processing and computer vision, et cetera. And so I think continued milestones in those will be key. There’s a few that I’ve worked on. There’s a paper right now—hopefully, it will be out in the next few months—on forecasting when AI experts and machine learning experts think those milestones will be hit. I mean, there were, like, two that were hit—there were ones where you’d have AI being able to beat all the Atari games. You have AI being able to play Angry Birds. There are lots of those mini milestones that are bigger leaps than just the efficiency of these algorithms. I think things like artificial general intelligence—some say there are some abilities for you to create one algorithm that can play a lot of different games. You know, it can play chess and Atari and Tetris. But I think, broadly speaking, it’s a little bit down the line also. But I’ll say for, like, the next few months and the next few years, it’ll probably be just, like, making some of these algorithms more efficient, making them better, making them leaner, using a lot less data. But I think we’ve, largely, hit the big ones and so I think we’ll see these shorter, smaller milestones being achieved in the next few years. And I think there was another part to the question—let me just go look for what it was. Who’s developing these. FASKIANOS: Right. KAHN: I would say these, like, large companies like Google, OpenAI, et cetera.
But I’ll say a lot of these models are open source, for example, which means that the models themselves are out there and they’re available to anyone who wants to kind of take them and use them. I mean, I’m sure you’ve seen—once you saw DALL-E Mini you saw DALL-E 2 and DALL-E X. So, like, they proliferate really quickly and they adapt, and that’s a large part of what’s driving the acceleration of artificial intelligence. It’s moving so quickly because there is this nature of collaboration and sharing that companies are incentivized to participate in, where they just take the models, train them against their own data, and if it works better they use that. And so those kinds of companies are all playing a part, so to speak. But I would say, largely, academia right now is still really pushing the forefront, which is really cool to see. So I think that means that if a lot more blue-skies, just basic research is being funded, we’ll continue to see these advances. I’ll say also, when it comes to defense applications in particular, I think where the challenge is, is that, more than is typical, these artificial intelligence capabilities are being developed by niche, smaller startup companies that might not have the capabilities that, say, a Google or a Microsoft has when it comes to working and contracting with the U.S. government. So that’s also a challenge. When you have this acquisitions process it’s a little bit challenging at best, even for the big companies. I think for these smaller companies that really do have great applications and great specific uses for AI, I think that’s also a significant challenge. So I think it’s, basically, everybody. Everyone’s working together, which is great. FASKIANOS: Great. I’m going to go next to DJ Patil. Q: Thanks, Irina. Good to see you. FASKIANOS: Likewise. Q: And thanks for this, Lauren.
So I’m DJ Patil and I’m at the Harvard Kennedy School Belfer Center, as well as Devoted Health and Venrock Partners. And so, Lauren, on the question you addressed a little bit on the procurement side, I’m curious what your advice to the secretary of defense would be around capabilities, specifically, given the question of large language models or the efforts that we’re seeing in industry and how much separation of results we’re seeing even in industry compared to academia. Just the breakthroughs that we’re seeing reported are so stunning. And then if we look at the datasets that they’re building on—those companies are building on—they’re, basically, open or there’s copyright issues in there. There’s defense applications which have very small data sets, and also, as you mentioned, on the procurement side a lack of access to the ability of these things. And so what are the mechanisms, if you looked across this from a policy perspective, of how we start tapping into those capabilities to ensure that we have competitiveness as the next set of iterations of these technologies take place? KAHN: Absolutely. I think that’s a great question. I’ve done a little bit of work on this. When they were creating the chief digital and AI office, I think they had, like, people brainstorming about what kind of things we would like to see, and I think everyone agreed that they would love for it to get kind of better access to data. If the defense secretary asks, can I have data on all the troop movements for X, Y, and Z, there’s a lot of steps to go through to pull all that information. The U.S. defense enterprise is great at collecting data from a variety of sources—from the intelligence community, analysts, et cetera. I think what’s challenging—and, of course, there are natural challenges built in with different levels of how confidential things are, the classifications, et cetera.
But I think being able to pull those together and to clean that data and to organize it will be a key first step, and that is a big infrastructure, systems, software kind of challenge. A lot of it’s actually getting hardware in the defense enterprise up to date and a lot of it is making sure you have the right people. I think another huge one—and, I mean, the National Security Commission on AI in their final report announced that the biggest hindrance to actually leveraging these capabilities is the lack of AI and STEM talent in the intelligence community and the Pentagon. There’s just a lack of people that, one, have the background and are willing to kind of say, OK, like, this is even a possible tool that we can use and to understand that, and then, once it’s there, to be able to train people to be able to use these kinds of capabilities. So I think that’ll be a huge one. And there are efforts right now ongoing with the Joint Artificial Intelligence Center—the JAIC—to kind of pilot AI educational programs for this reason, as a kind of AI crash course. But I think there needs to be, like, a broader kind of effort to encourage STEM graduates to go into government, and that can be done, again, by kind of playing ball, so to speak, with this whole idea of open source. Of course, the DOD—Department of Defense—can’t make all of its programs open and free to the public. But I think it can do a lot more to kind of show that it’s a viable option for individuals working in these careers to address some of the same kinds of problems, and that they will also have the most up-to-date tech and resources and data as well. And I think right now it’s not evident that that’s the case. They might have a really interesting problem set, which has been shown to be attractive to AI PhD graduates and things like that.
But it doesn’t have the same kind of—again, they’re not really promoting and making resources available and setting up their experts in the best way, so to speak, to be able to use these capabilities. FASKIANOS: Thank you. I’m going to take the next question from Konstantin, who actually wrote a question—Tkachuk—but also raised his hand. So if you could just ask your question that would be best. Q: Yes. I’m just happy to say it out loud. So my name is Konstantin. I’m half Russian, half Ukrainian. I’m connecting here from the Schwarzman Scholarship at Tsinghua University. And my question is more about the industry as a whole—how it has to react to what’s happening to the technology that the industry is developing. Particularly, I am curious whether it’s the responsibility and interest of industry and policymakers to protect the technology from such misuse and whether they actually do have the control and responsibility to make these technology frameworks unusable for certain applications. Do you think this effort could be possible, given the resources we have, the amount of knowledge we have? And, more importantly, I would be curious on your perspective whether you think countries have to collaborate on that in order for such an effort to be efficient, or whether it should be incentive models inside countries that will make an effort for the whole community. KAHN: Awesome. I think all of the above. I think right now, because there’s relatively little understanding of how these work, a lot of it is the private companies self-regulating, which I think is a necessary component. But there are also now indications of efforts to kind of work with governments on things like confidence-building measures or other kinds of mechanisms to kind of best understand and best develop transparency measures, testing and evaluation, and other kinds of guardrails against misuse.
I think there are, like, different layers to this, of course, and all of them are correct and all of them are necessary. For the specific applications themselves there needs to be an element of regulation. I think at some point there needs to be, like, a user agreement as well about when they’re selling technologies and selling capabilities, how buyers agree to kind of abide by the terms—you sign the terms of use, right. And I think also there are, of course, export controls that can be put on certain capabilities—you’re allowed to sell the commercial side, but you make the system itself incompatible with other kinds of systems that would make it dangerous. But I think there’s also definitely room and necessary space for interstate collaboration on some of these, especially when you get—say, for example, when you introduce artificial intelligence into military systems, right, they make them faster. They make the decision-making process a lot more speedy, basically, and so the individual has to make quicker decisions. And when you introduce things like artificial intelligence into increasingly complex systems you have the ability for accidents to kind of snowball, right, where one little decision can make a huge kind of impact and end up with a mistake, unfortunately. And so you have that kind of situation when, heaven forbid, it’s in a battlefield context, right. And let’s say the adversary says, oh, well, you intentionally shot down XYZ plane; and the individual said, no, it was an auto malfunction and we had an AI in charge of it; who, in fact, is responsible now? If it was not an individual, the blame kind of shifts up the pipeline. And so you’ve got problems like these. Like, that’s just one example.
But, like, where you have increasingly automated systems and artificial intelligence that kind of shift how dynamics play out, especially in accidents, which traditionally require a lot of visibility, you have these technologies that are not so visible, not so transparent. You don’t really get to see how they work or understand how they think in the same way that you can when, say, you press a button and see the causality of that chain reaction. And so I think there is very much a need because of that for even adversaries—not necessarily just allies—to agree on how certain weapons will be used, and I think that’s why there’s this space for confidence-building measures. For example, a really simple one that everyone already agrees on is to have a human in the loop, right—human control. As we eventually use artificial intelligence and automated systems increasingly in a nuclear context, right, with nuclear weapons, I think everyone’s kind of on board with that. And so I think those are the kinds of building-block agreements and kind of establishment of norms that can happen and that need to take place now, before these technologies really start to be used. That will be essential to avoiding those worst-case scenarios in the future. FASKIANOS: Great. Thank you. I’m going to take the next question—written question—from Alexander Beck, undergraduate at UC Berkeley. In the context of military innovation literature, what organizational characteristics or variables have the greatest effect on adoption and implementation, respectively? KAHN: Absolutely. I’m not an organizational expert. However, I’ll say, like before, I think that’s shifting, at least from the United States perspective.
I think, for example, when the Joint Artificial Intelligence Center was created, the best advice was to create separate organizations that had the capability to kind of enact their own agenda and to create separate programs for all of these to kind of best foster growth. And so that worked for a while, right. The JAIC was really great at promoting artificial intelligence and raising it to a level of preeminence in the United States—a lot of early success in raising awareness, et cetera. But now we’re seeing—there was a little bit of confusion, a little bit of concern, over the summer when they did establish the Chief Digital and Artificial Intelligence Office—excuse me, a lot of acronyms—because it took over the JAIC. It subsumed the JAIC. There was a lot of worry about that, right. Like, they just established this great organization in 2019 and now they’re redoing it. And so I think they realized that as the technology develops, organizational structures need to develop and change as well. Like, in the beginning, artificial intelligence was kind of seen as its own kind of microcosm. But because it’s a general-purpose enabling technology it touches a lot more, and so it needs to be thought of more broadly rather than just, OK, here’s our AI project, right. You need to better integrate it and situate it next to necessary preconditions like the food for AI, which is data, right. So they reorganized to kind of ideally do that, right. They integrated it with research and engineering, which is the arm in the Defense Department that kind of funds the basic research, to kind of have people understand policy as well. So they have all of these different arms now within this broader organization. And so there are shifts in the literature, I think, and there are different best cases for different kinds of technologies. But I’m not as familiar with where the literature is going now.
But that kind of idea has shifted, I think, even from 2018 to 2022. FASKIANOS: Thanks. We’re going to go next to Harold Schmitz. Q: Hey, guys. I think a great, great talk. I wanted to get your thoughts on AlphaFold, RoseTTAFold—DeepMind—and biological warfare and synthetic biology, that sort of area. Thank you. KAHN: Of course. I— Q: And, by the way—sorry—I should say I’m with the University of California Davis School of Management and also with the March Group—a general partner. Thank you. KAHN: So I’m really not familiar much with the bio elements. I know it’s an increasing area of interest. But I think, at least in my research, kind of taking a step back, it was hard enough to get people within the defense sector to acknowledge artificial intelligence. So I haven’t seen much in the debate, unfortunately, recently, just because I think a lot of the defense innovation strategy, at least in the Biden administration, is focused directly on addressing the pacing challenge of China. And so they’ve mentioned biowarfare and biotechnology as well as nanotechnology, et cetera, but not in a comprehensive enough way, compared with artificial intelligence and quantum, for me to be able to answer your question. I’m sorry. FASKIANOS: Thank you. I’ll go next to Alex, who has raised—and you’ll have to give us your last name and identify yourself. Q: Hi. Yes. Thank you. I’m Alex Grigor. I just completed my PhD at the University of Cambridge. My research is specifically looking at U.S. cyber warfare and cybersecurity capabilities, and in my interviews with a lot of people in the defense industry, their number-one complaint, I suppose, was just not getting the graduates applying to them the way that they had sort of hoped to in the past. And if we think back at ARPANET and all the amazing innovations that have come out of the internet and can come out of defense, do you see a return to that?
Or do you see us now looking very much to procure from private industry, and how might that sort of recruitment process change? They cited security clearances as one big impediment. But what else might you think could be done differently there? KAHN: Yeah. Absolutely. I think security clearances, all the bureaucratic things, are a challenge, but even assuming that an individual wants to work there, I think right now if you’re working in STEM and you want to do research, having two years, for example, in government as a civilian, working in the Pentagon, for example, doesn’t necessarily allow you to jump back into the private sector and academia, whereas other jobs do. So I think that’s actually a big challenge—making it possible, through various mechanisms, to kind of make it a reasonable goal, not necessarily for a career in government but allowing people to kind of come and go. I think that’ll be a significant challenge, and I think that’s in part about the ability to kind of contribute to the research that we spoke about earlier. I mean, the National Security Commission has a whole strategy that they’ve outlined on it. I’ve seen, again, like, piecemeal kinds of efforts to overcome that, but no broad and sweeping reform as suggested by the report. I recommend reading it. It’s, like, five hundred pages long. But there’s a great section on the talent deficit. But, yeah, I think that will definitely be a challenge. I think cyber is facing that challenge—really, anything that touches STEM in general. And especially because the AI and particularly machine learning talent pool is global, states actually are, interestingly, kind of fighting over this talent pool.
I’ve done research previously at the University of Oxford that looked at, like, the immigration preferences of researchers and where they move and things like that, and a lot of them are Chinese and studying in the United States. And they stay here. They move, et cetera. But a lot of it is actually also immigration and visas. And so other countries—China specifically—have made special visas for STEM graduates. Europe has done it as well. And so I think that will also be another element at play. There are a lot of these kinds of efforts to attract more talent. I mean, again, one of the steps that was tried was the Quad Fellowship that was established through the Indo-Pacific strategy. But, again, that’s only going to be for a hundred students. And so there needs to be a broader kind of effort to facilitate the flow of experts into government. To your other point about whether this is what it’s going to look like now, with the private sector driving the bus, I think it will be for the time being unless DARPA and the defense agencies’ research arms and DOD change this acquisition process and, again, are able to get that talent—if something changes, then I think it will, again, be able to contribute in the way that it has in the past. I think it’s important, too, right. There were breakthroughs in cryptography. And, again, the internet all came from defense initially. And so I think it would be really sad if that was not the case anymore, especially as right now we’re talking about being able to kind of cross that bridge and work with the private sector, and I think that will be necessary. I hope it doesn’t go so far that it becomes entirely reliant, because I think DOD will need to be self-sufficient. It’s another kind of ecosystem to generate research and applications, and not all problems can be addressed by commercial applications as well. It’s a very unique problem set that defense and militaries face.
And so I think there will need to be—right now, it’s a little bit heavy on needing to—there’s a little bit of a push right now, OK, we need to better work with the private sector. But I think, hopefully, overall, if it moves forward it will balance out again. FASKIANOS: Lauren, do you know how much money DOD is allocating towards this in the overall budget? KAHN: Off the top of my head, I don’t know. It’s a few billion. It’s, like, a billion. I think—I have to look. I can look it up. In the research 2023 budget request there was the highest amount requested ever for STEM research and engineering and testing and evaluation. I think it was—oh, gosh, it was a couple hundred million (dollars) but they had—it was a huge increase from the last year. So it’s an increasing priority. But I don’t have the specific numbers on how much. People talk about China funding more. I think it’s about the same. But it’s increasing steadily across the board. FASKIANOS: Great. So I’m going to give the final question to Darrin Frye, who’s an associate professor at Joint Special Operations University in the Department of Strategic Intelligence and Emergent Technologies, and his is a practical question. Managing this type of career how do you structure your time researching and learning about the intricacies of complex technologies such as quantum entanglement or nano-neuro technologies versus informing leadership and interested parties on the anticipated impact of emergent technologies on the future military operational environment? And maybe you can throw in there why you went into this field and why you settled upon this, too. KAHN: Yeah. I love this question. I have always been interested in the militarization of science and how wars are fought because I think it allows you to study a lot of different elements. I think it’s very interesting working at the intersection. 
I think, broadly speaking, a lot of the problems that the world is going to face, moving forward, are these transnational, large problems that will require academia, industry, and government to kind of work on together—from climate change to all of these emerging technologies to, for example, global health, as we’ve seen over the past few years. And so I think it’s a little bit of striking a balance, right. So I came from a political science background, an international relations background, and I did want to talk about the big picture. And I think there are individuals kind of working on these problems and recognizing them. But in that I noticed that I’m speaking a lot about artificial intelligence and emerging technologies and I’m not from an engineering background. And so me, personally, I’m, for example, doing a master’s in computer science right now at Penn in order to shore up those kinds of deficiencies and gaps in knowledge in my sphere. I can’t learn everything. I can’t be a quantum expert and an AI expert. But I think having the baseline understanding and taking a few of those courses more regularly has meant that when a new technology, for example, shows up, I know how to learn about that technology, which, I think, has been very helpful—speaking both languages, so to speak. I don’t think anyone’s going to be a master—you can’t be a master of one, let alone a master of both. But I think it will be increasingly important to spend time learning about how these things work, and I think just getting a background in coding can’t hurt. And so it’s definitely something you need to balance. I would say I’m probably balanced more toward what are the implications of this, more broadly, since if you’re talking at such a high level it doesn’t help people without that technical background to get into the nitty gritty. It can get jargony very quickly, as I’m sure you understood listening to me even.
And so I think there’s a benefit to learning about it, but also make sure you don’t get too in the weeds. I think there’s a lot of space for people who kind of understand both, who can then bring in those people who are experts, for example, on quantum entanglement and nanotechnology, so that when they’re needed they can come in and speak to people in a policy kind of setting. So there definitely is room, I think, for intermediaries—policy experts who kind of sit in between—and then, of course, the highly specialized expertise, which I think is definitely, definitely important. But it’s hard to balance. But I think it’s very fun as well, because then you get to learn a lot of new things. FASKIANOS: Wonderful. Well, with that we are out of time. I’m sorry that we couldn’t get to all the written questions and the raised hands. But, Lauren Kahn, thank you very much for this hour, and to all of you for your great questions and comments. You can follow Lauren on Twitter at @Lauren_A_Kahn, and, of course, go to CFR.org for op-eds, blogs, and insight and analysis. The last academic webinar of this semester will be on Wednesday, November 16, at 1:00 p.m. (EST). We are going to be talking with Susan Hayward, who is at Harvard University, about religious literacy in international affairs. So, again, I hope you will all join us then. Lauren, thank you very much. And I just want to encourage those of you, the students on this call and professors, about our paid internships and our fellowships. You can go to CFR.org/careers for information on both tracks. Follow us at @CFR_Academic and visit, again, CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues. So thank you all, again. Thank you, Lauren. Have a great day. KAHN: Thank you so much. Take care. FASKIANOS: Take care.
  • Foreign Policy

    The era of the global internet is over, and the early advantages the United States and its allies held in cyberspace have largely disappeared. China and Russia in particular are working to export their authoritarian models of the internet around the world. The CFR-sponsored Independent Task Force proposes a new foreign policy for cyberspace founded on three pillars: building an internet coalition, employing pressure on adversaries and establishing pragmatic cyber norms, and getting the U.S. cyber house in order.
  • United States

    Our panelists discuss the future of defense innovation, the efficacy of public-private partnerships in informing U.S. national security and technology policy, and the future of AI, quantum computing, and cyber in warfare. The CFR Young Professionals Briefing Series provides an opportunity for those early in their careers to engage with CFR. The briefings feature remarks by experts on critical global issues and lessons learned in their careers. These events are intended for individuals who have completed their undergraduate studies and have not yet reached the age of thirty to be eligible for CFR term membership.
  • Russia

    This symposium convenes senior government officials and experts from think tanks, academia, and the private sector to address the interaction of cyber conflict and foreign policy goals, examining the current state of Russian, Chinese, Iranian, and North Korean cyber operations, as well as how the United States is responding and its own vulnerability to cyberattacks as a symptom of a broken geopolitical order. Click here to download the full agenda for the symposium.
  • China

    Adam Segal, Ira A. Lipman chair in emerging technologies and national security and director of the Digital and Cyberspace Policy Program at CFR, leads a conversation on cyberspace and U.S.-China relations. FASKIANOS: Welcome to the first session of the Winter/Spring 2022 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR. Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/academic. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Adam Segal with us to discuss cyberspace and U.S.-China relations. Adam Segal is CFR’s Ira A. Lipman chair in emerging technologies and national security and director of the Council’s Digital and Cyberspace Policy program. Previously, he served as an arms control analyst for the China Project at the Union of Concerned Scientists. He has been a visiting scholar at Stanford University’s Hoover Institution, MIT’s Center for International Studies, the Shanghai Academy of Social Sciences, and Tsinghua University in Beijing. And he’s taught courses at Vassar College and Columbia University. Dr. Segal currently writes for the CFR blog, Net Politics—you should all sign up for those alerts, if you haven’t already. And he is the author several books, including his latest, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age. So, Adam, thanks very much for being with us. We can begin with a very broad brush at cyberspace, the role cyberspace plays in U.S.-China relations, and have you make a few comments on the salient points. And then we’ll open it up to the group for questions. SEGAL: Great. Irina, thanks very much. And thanks, everyone, for joining us this afternoon. I’m looking forward to the questions and the discussion. So broadly, I’m going to argue that the U.S. 
and China have the most far-reaching competition in cyberspace of any countries. And that competition goes all the way from the chip level to the rules of the road. So global governance all the way down to the chips that we have in all of our phones. Coincidentally, and nicely timed, last week the Washington Post did a survey of their network of cyber experts about who was the greater threat to the United States, China or Russia. And it was actually almost exactly evenly split—forty to thirty-nine. But I, not surprisingly, fell into the China school. And my thinking is captured very nicely by a quote from Rob Joyce, who’s a director at the National Security Agency, that Russia is like a hurricane while China is like climate change. So Russia causes sudden, kind of unpredictable damage. But China represents a long-term strategic threat. When we think about cyberspace, I think it’s good to think about why it matters to both sides. And on the Chinese side, I think there are four primary concerns. The first is domestic stability, right? So China is worried that the outside internet will influence domestic stability and regime legitimacy. And so that’s why it’s built an incredibly sophisticated system for controlling information inside of China that relies both on technology, and intermediary liability, and other types of regulation. China is worried about technological dependence on other players, in particular the U.S., for semiconductors, network equipment, and other technologies. And they see cybersecurity as a way of reducing that dependence. China has legitimate cybersecurity concerns like every other country. They’re worried about attacks on their networks. And the Snowden revelations from the—Edward Snowden, the former NSA contractor—show that the U.S. has significant cyber capabilities, and it has attacked and exploited vulnerabilities inside of China. 
And while the Chinese may once have thought that they were less vulnerable to cyberattacks given the shape of the Chinese network in the past, I think that probably changed around 2014-2015, especially as the Chinese economy has become increasingly dependent on e-commerce and digital technology. It’s now—GDP is about a third dependent on digital technology. So they’re worried about the same types of attacks the United States is worried about. And then, fourth and finally, China does not want the United States to be able to kind of define the rules of the road globally on cyber, create containing alliances around digital or cyber issues, and wants to constrain the ability of the U.S. to freely maneuver in cyberspace. Those are China’s views. The U.S. has stated that it’s working for a free, open, global, and interoperable internet, or an interoperable cyberspace. But when it looks at China, it has a number of specific concerns. The first is Chinese cyber operations, in particular Chinese espionage, and in particular from that Chinese industrial espionage, right? So the Chinese are known for being the most prolific operators, stealing intellectual property. But they’re also hacking into political networks, going after think tanks, hacking activists—Uighur activists, Tibetan activists, Taiwanese independence activists. We know they’re entering into networks to prepare the battlefield, right, so to map critical infrastructure in case there is a kinetic conflict with the United States—perhaps in the South China Sea or over the Taiwan Strait—and they want to be able to deter the U.S., or perhaps cause destructive attacks on the U.S. homeland, or U.S. bases in South Korea, or Japan. The U.S. is also extremely concerned about the global expansion of Chinese tech firms and Chinese platforms, for the collection of data, right? The U.S. exploited the globalization of U.S. tech firms. Again, that was something that we learned from the Snowden documents, that the U.S. 
both had legal and extralegal measures to be able to get data from users all around the world because of their knowledge of and relationship to U.S. tech firms. And there’s no reason to believe that the Chinese will not do the same. Now, we hear a lot about, you know, Huawei and the national intelligence law in China that seems to require Chinese companies to turn over data. But it would be very hard to believe that the Chinese would not want to do the same thing that the U.S. has done, which is exploit these tech platforms. And then finally, there is increasingly a framing of this debate as one over values or ideology, right? That democracies use cybertechnologies or digital technologies in a different way than China does. China’s promoting digital authoritarianism, which has to do with control of information as well as surveillance. And the U.S. has really pushed back and said, you know, democracies have to describe how we’re going to use these technologies. Now, the competition has played itself out both domestically and internationally. The Chinese have been incredibly active domestically. Xi Jinping declared that cybersecurity was national security. He took control of a leading small group that became a separate commission. The Cyberspace Administration of China was established and given lots of powers on regulating cybersecurity. We saw the creation of three important laws—the cybersecurity law, the data security law, and the private—personal information protection law. We see China pushing very hard on specific technologies they think are going to be important for this competition, especially AI and quantum. And we see China pushing diplomatically, partly through the idea of what’s called cyber-sovereignty. 
So not the idea that the internet is free and open and should be somewhat free from government regulation, but instead that cyberspace, like every other space, is going to be regulated, and that states should be free to do it as they see fit, as fits their own political and social characteristics, and they should not be criticized by other states. They promoted this view through U.N. organizations in particular. And they’ve been working with the Russians to have a kind of treaty on information and communication technologies that would include not only cybersecurity, but their concerns about content and the free flow of information. The U.S. right now is essentially continuing a policy that was started under the Trump administration. So part of that is to try and stop the flow of technology to Chinese firms, and in particular to handicap and damage Huawei, the Chinese telecom supplier, to put pressure on friends to not use Huawei. But the most important thing it did was put Huawei on an entity list, which cut it off from semiconductors, most importantly from Taiwan Semiconductor, which has really hurt Huawei’s line of products. The U.S. tried to come to an agreement about—with China about what types of espionage are considered legitimate. And not surprisingly, the U.S. said there was good hacking and bad hacking. And the good hacking is the type of hacking that the U.S. tends to do, and the bad hacking is the type of hacking that the Chinese tend to do. So, basically the argument was, well, all states were going to conduct political and military espionage, but industrial espionage should be beyond the pale. Or if you put it—you can think of it as the way President Obama put it, you can hack into my iPhone to get secrets about what I’m discussing with my Cabinet, but you can’t hack into Apple to get the secrets about how iPhones are made to give to Huawei. 
There was an agreement formed in 2015, where both sides said they weren’t going to engage in industrial espionage—cyber industrial espionage. For about a year and a half, that agreement seemed to hold. And then it—and then it fell apart. The Chinese are engaged in that activity again. And as a result, the U.S. has once again started indicting Chinese hackers, trying to create—enforce that norm through indictments and naming and shaming. The U.S. probably also—although I have no evidence of it—has engaged in disrupting Chinese hackers. So we know under the Trump administration, Cyber Command moved to a more forward-leaning posture, called defending forward or persistent engagement. We’ve heard about some of those operations against Russian or Iranian actors. John Bolton, before he left the NSC, suggested they were getting used against Chinese cyberhackers as well. So what comes next? And it’s often hard, if not impossible, to end cyber talks on a positive note, but I will try. So I think from a U.S. perspective, clearly the kind of tech pressure, not only of Huawei but on a broader range of companies, is going to continue. The Biden administration has shown no signal that it is going to roll any of that back. And it’s actually expanded it, to more companies working on quantum and other technologies. The Biden administration has worked much more actively than the Trump administration on building alliances around cybersecurity. So in particular, the tech and trade competition group with the Europeans and the Quad, with Australia, India, and Japan, all have discussions on cybersecurity norms. So how do you actually start imposing them? Now, where you would hope that the U.S. and China would start talking to each other, again, is where I hope the Biden administration can eventually get to. So there were some very brief discussions in the Obama administration. The Trump administration had one round of talks, but those were not particularly useful. 
The Chinese were very unwilling to bring people from the People’s Liberation Army to actually kind of talk about operations, and generally were in denial that they had any cyber forces. But you want both sides really to start talking more about where the threshold for the use of force might be in a cyberattack, right? So if you think about—most of what we’ve seen, as I said, is spying. And so that is kind of the—is below the threshold for use of force or an armed attack, the thing that generally triggers kinetic escalation. But there’s no general understanding of where that threshold might be. And in particular, during a crisis, let’s say, in the Taiwan Strait or in the South China Sea, you want to have some kind of clarity about where that line might be. Now, I don’t think we’re ever going to get a very clear picture, because both sides are going to want to be able to kind of skate as close to it as possible, but we would certainly want to have a conversation with the Chinese about how we might signal that. Can we have hotlines to discuss those kinds of thresholds? Also, we want to make sure that both sides aren’t targeting each other’s nuclear command and control systems, right, with cyberattacks, because that would make any crisis even worse. There’s some debate about whether the Chinese command and control systems are integrated with civilian systems. So things that the U.S. might go after could then perhaps spill over into the Chinese nuclear system, which would be very risky. So you want to have some talks about that. And then finally, you probably want to talk—because the Chinese open-source writing seems to suggest that they are not as concerned about escalation in cyber as we are. There’s been a lot of debate in the U.S. about whether escalation is a risk in cyber. But the Chinese don’t actually seem to think it’s much of a risk. And so it would be very useful to have some discussions on that point as well. 
I’ll stop there, Irina, and looking forward to the questions. FASKIANOS: Thank you, Adam. That was great analysis and overview and specifics. So we’re going to go first to Babak Salimitari, an undergrad student at the University of California, Irvine. So please be sure to unmute yourself. Q: I did. Can you guys hear me? SEGAL: Yeah. Q: Thank you for doing this. I had a question on the Beijing Olympics that are coming up. Recently they told the athletes to use, like, burner phones because the health apps are for spying, or they’ve got, like, security concerns. What specific concerns do they have regarding those apps, and what do they do? SEGAL: So I think the concerns are both specific and broad. I think there was a concern that one of the apps that all of the athletes had to download had significant security vulnerabilities. So I think that was a study done by Citizen Lab at the University of Toronto. And it basically said, look, this is a very unsafe app and, as you said, allowed access to health data and other private information, and anyone could probably fairly easily hack that. So, you know, if you’re an athlete or anyone else, you don’t want that private information being exposed to or handled by others. Then there’s, I think, the broader concern that for probably anybody who connects to a network in China, that’s going to be unsafe. And so, you know, because everyone is using wi-fi in the Chinese Olympics, and those systems are going to be monitored, those—your data is not going to be safe. You know, I’m not all that concerned for most athletes. You know, there’s probably not a lot of reason why Chinese intelligence or police are interested in them. But there are probably athletes who are concerned, for example, about Xinjiang and the treatment of the Uighurs, or, you know, maybe Tibetan activists or other things, and maybe have somewhere in the back of their minds some idea about making statements there or when they get back to the U.S. 
or safer places. And for those people, definitely I would be worried about the risk of surveillance and perhaps using that data for other types of harassment. FASKIANOS: I’m going to take the written question from Denis Simon, who received two upvotes. And Denis is senior advisor to the president for China affairs and professor of China business and technology. When you say “they” with respect to Chinese cyber activity, who is “they”? To what extent are there rogue groups and ultranationalists as well as criminals involved? SEGAL: Yes, Denis will send me a nasty email if I don’t mention that Denis was my professor. We’re not going to say how many years ago, but when I was at Fletcher. And Denis was the first person I took a class on Chinese technology with. So, you know, and then I ended up here. So I think, “they.” So it depends what type of attacks we’re talking about. On the espionage side, cyber espionage side, what we’ve generally seen is that a lot of that was moved from the PLA to the Ministry of State Security. The most recent indictments include some actors that seem to be criminal or at least front organizations. So some technology organizations. We do know that there are, you know, individual hackers in China who will contract their services out. There were in the ’90s a lot of nationalist hacktivist groups, but those have pretty much dissipated except inside of China. So we do see a lot of nationalist trolls and others going after people inside of China, journalists and others, for offending China or other types of violations. So “they” is kind of a whole range of actors depending upon the types of attack we’re talking about. FASKIANOS: Thank you. So our next question we’re going to take from Terron Adlam, who is an undergraduate student at the University of Delaware. And if you can unmute yourself. Q: Can you hear me now? FASKIANOS: Yes. Q: Hi. Good evening. Yes. 
So I was wondering, do you think there will be a time where we have net neutrality? Like, we have a peace agreement amongst every nation? Because I feel like, honestly, if Russia, U.S., Mexico, any other country out there that has a problem with each other, this would be, like, there’s rules of war. You don’t biohazard attack another country. Do you think—(audio break)—or otherwise? SEGAL: So I think it’s very hard to imagine a world where there’s no cyber activity. So there are discussions about can you limit the types of conflict in cyberspace, through the U.N. primarily. And they have started to define some of the rules of the road that are very similar to other international law applying to armed conflict. So the U.S.’ position is essentially that international law applies in cyberspace, and things like international humanitarian law apply in cyberspace. And you can have things like, you know, neutrality, and proportionality, and distinction. But they’re hard to think about in cyber, but we can—that’s what we should be doing. The Chinese and Russians have often argued we need a different type of treaty, that cyber is different. But given how valuable it seems, at least on the espionage side so far, I don’t think it’s very likely we’ll ever get an agreement where we have no activity in cyberspace. We might get something that says, you know, certain types of targets should be off limits. You shouldn’t go after a hospital, or you shouldn’t go after, you know, health data, things like that. But not a, you know, world peace kind of treaty. FASKIANOS: Thank you. So I’m going to take the next question from David Woodside at Fordham University. Three upvotes. What role does North Korea play in U.S.-China cyber discussions? Can China act outside of cybersecurity agreements through its North Korean ally? SEGAL: Yeah. I think, you know, like many things with North Korea, the Chinese probably have a great deal of visibility. 
They have a few levers that they really don’t like using, but not a huge number. So, in particular, if you remember when North Korea hacked Sony and because of the—you know, the movie from Seth Rogen and James Franco about the North Korean leader—those hackers seemed to be located in northern China, in Shenyang. So there was some sense that the Chinese probably could have, you know, controlled that. Since then, we have seen a migration of North Korean operators out of kind of north China. They now operate out of India, and Malaysia, and some other places. Also, Russia helped build another cable to North Korea, so the North Koreans are not as dependent on China. I think it’s very unlikely that the Chinese would kind of use North Korean proxies. I think the trust is very low of North Korean operators that they would, you know, have China’s interest in mind or that they might not overstep, that they would bring a great deal of kind of blowback to China there. So there’s been very little kind of—I would say kind of looking the other way earlier in much of North Korea’s actions. These days, I think probably less. FASKIANOS: Thank you. I’m going to take the next question from Joan Kaufman at Harvard University. And if you can unmute yourself. Q: Yes. Thank you very much. I’m also with the Schwarzman Scholars program, the academic director. And I wanted to ask a follow up on your point about internet sovereignty. And, you know, the larger global governance bodies and mechanisms for, you know, internet governance and, you know, China’s role therein. I know China’s taken a much more muscular stance on, you know, the sovereignty issue, and justification for firewalls. So there’s a lot—there are a lot of countries that are sort of in the me too, you know, movement behind that, who do want to restrict the internet. So I just—could you give us a little update on what’s the status of that, versus, like, the NETmundial people, who call for the total openness of the internet. 
And where is China in that space? How much influence does it have? And is it really—do you think the rules of the road are going to change in any significant way as a result of that? SEGAL: Yeah. So, you know, I think in some ways actually China has been less vocal about the phrase “cyber sovereignty.” The Wuzhen Internet Conference, which is kind of—China developed as a separate platform for promoting its ideas—you don’t see the phrase used as much, although the Chinese are still interjecting it, as we mentioned, in lots of kind of U.N. documents and other ideas. I think partly they don’t—they don’t promote as much because they don’t have to, because the idea of cyber sovereignty is now pretty widely accepted. And I don’t think it’s because of Chinese actions. I think it’s because there is widespread distrust and dissatisfaction with the internet that, you know, spans all types of regime types, right? Just look at any country, including the United States. We’re having a debate about how free and open the internet should be, what role firms should play in content moderation, should the government be allowed to take things down? You know, we’ve seen lots of countries passing fake news or online content moderation laws. There’s a lot of concern about data localization that countries are doing because of purported economic or law enforcement reasons. So I don’t think the Chinese really have to push cyber sovereignty that much because it is very attractive to lots of countries for specific reasons. Now, there is still, I think, a lot of engagement China has with other countries around what we would call cyber sovereignty, because China—countries know that, you know, China both has the experience with it, and will help pay for it. So certainly around the Belt and Road Initiative and other developing economies we do see, you know, the Chinese doing training of people on media management, or online management. 
There was this story just last week about, you know, Cambodia’s internet looking more like the Chinese internet. We know Vietnam copied part of their cybersecurity law from the Chinese law. A story maybe two years ago about Huawei helping in Zambia and Zimbabwe, if I remember correctly, in surveilling opposition members. So I think China, you know, still remains a big force around it. I think the idea still is cyber sovereignty. I just don’t think we see the phrase anymore. And I think there’s lots of demand pull. Not China pushing it on other countries, I think lots of countries have decided, yeah, of course we’re going to regulate the internet. FASKIANOS: Thank you. Next question, from Ken Mayers, senior adjunct professor of history and political science at St. Francis College. Following up on Denis Simon’s question, to what extent do Chinese state actors and U.S. state actors share concerns about asymmetric threats to cybersecurity? Is there common ground for discussion? And I’m going to—actually, I’ll stop there, because— SEGAL: All right. So I’m going to interpret asymmetric threats as meaning kind of cyber threats from other actors, meaning kind of nonstate or terrorist actors, or criminal actors. So I think there could be a shared interest. It’s very hard to operationalize. Probably about six or seven years ago I wrote a piece with a Chinese scholar that said, yes, of course we have a shared interest in preventing the proliferation of these weapons to terrorist actors and nonstate actors. But then it was very hard to figure out how you would share that information without exposing yourself to other types of attacks, or perhaps empowering your potential adversary. On cyber—for example, on ransomware, you would actually expect there could be some shared interest, since the Chinese have been victims of a fair number of Russian ransomware attacks. But given the close relationship between Putin and Xi these days, it’s hard to imagine that the U.S. 
and China are going to gang up on Russia on ransomware. So, again, I think there could be, it’s just very hard to operationalize. FASKIANOS: Great. Thank you. So just to follow on from Skyler Duggan, who is an undergraduate at the University of Waterloo. Likewise, to these questions, how do we differentiate individual criminal groups from the state? And how can we be sure this isn’t China just trying to abdicate—or, one party, he doesn’t specify, trying to abdicate the responsibility? SEGAL: Yeah, I think—because there’s—one of the challenges faced by the U.S. and other liberal democracies is that we tend to primarily keep a fairly tight legal control over the cyber operations. They tend to be, you know, intelligence operations or military operations. So Title 10 or Title 50. There’s kind of a whole set of legal norms around it. The U.S. does not rely on proxy actors. And other, you know, liberal democracies tend not to. And U.S. adversaries in this space tend to do so. We know Iran does. We know Russia does. We know China does, although less than the others. Now according to this discussion group that I mentioned before at the U.N., the group of—what’s called the group of government experts, one of the norms that all the actors agreed upon was the norm of state responsibility, which is a common one in international law, that you are responsible for whatever happens in your territory. So using proxies should not, you know, be able to give you an out. You shouldn’t be able to say, well, it’s happening from our territory, we just—you know, we don’t know who they are and we can’t control them. But, you know, in operation that norm is being fairly widely ignored. Now, the other problem, of course, is the—is how do you actually decide who the actor is, the attribution problem, right? So here, you know, a lot of people are basically saying, well, we have to rely on the U.S. or the U.K. 
or others to say, well, you know, we say it’s these actors, and how do we know—how do we know for sure? Now, attribution is not as hard as we once thought it was going to be. When I first, you know, started doing the research for the book that Irina mentioned, attribution was considered, you know, a pretty big challenge. But now, you know, there’s a fairly high expectation that the U.S. will be able to eventually identify who’s behind an attack. Now, it may take some time. And we may not be able to completely identify who ordered the attack, which is, you know, as you mentioned, the problem with the proxies. But it’s not—it’s also not completely reliant on digital forensics. It’s not just the code or the language of the keyboard. All those things can be manipulated, don’t necessarily give you proof. Lots of times the U.S. is pulling in other intelligence—like, human intelligence, signals intelligence, other types of gathering. So, you know, part of it is how much do we believe the attribution, and then how much of it is—you know, what can you do with it afterwards? And, you know, I don’t think the proxy problem is going to go away. FASKIANOS: Great. So I’m going next to Tim Hofmockel’s question. It’s gotten seven upvotes. He’s a graduate student at Georgetown University. To flip Denis Simon’s question: Who should the “we” be? To what extent should the U.S. intelligence community and the Department of Defense cooperate on offensive cyber operations? And how would we signal our intentions in a crisis given the overlap in authorities between the intelligence community and DOD? SEGAL: Yeah. I mean, so right now NSA and Cyber Command are dual-hatted, meaning that one person is in charge of both of them, General Nakasone. So to some extent that could theoretically help deconflict between kind of intelligence gathering, offensive operations, and kind of signaling to the Chinese. But it’s unclear. 
It’s very—signaling in cyber so far seems to be kind of developing and unknown. One of the big theories behind the U.S. taking on more of these kinds of operations and, in fact, kind of bringing the fight to the Chinese is a very kind of sociological understanding of deterrence: that over time both sides will kind of understand where those red lines are by engaging and seeing how each side acts. You know, others have talked about could you create some kind of watermark on the actual attack or vulnerability, so that the—you know, you might discover some type of malware in your system and there’d be like a little, you know, NFT, maybe, of sorts, that says, you know, the U.S. government was here. We’re warning you not to do this thing. You know, a lot of these have, you know, kind of technical problems. But the question of signaling I think is really hard, and that’s part of the reason why, you know, I think these discussions are so important, that at least we have a sense that we’re talking about the same types of things, and the same general set of tools. But I think signaling through cyber is probably going to be really hard. It’s going to be mostly other types of signaling. FASKIANOS: Next question from Maryalice Mazzara. She’s the director of educational programs at the State University of New York’s Office of Global Affairs. How can people who are working with China and have a very positive relationship with China balance the issues of cybersecurity with the work we are doing? Are there some positive approaches we can take with our Chinese colleagues in addressing these concerns? SEGAL: Good question, Ali. How are you? So I guess it’s very—so I do think there are forward-looking things that we can talk about. You know, several of the questions have asked, are there shared interests here? And I do think there are shared interests. You know, we mentioned the proliferation one. We mentioned the nonstate actors. 
You know, there is a lot of language in the most recent statement from the Chinese government about—you know, that the internet should be democratic and open. I don’t think they mean it in the same way that we do, but we can, I think, certainly use that language to have discussions about it and hopefully push in that direction. But I think it is hard because it is—you know, partly because of government choices, right? The U.S. government chooses to attribute lots of attacks to China and be very public about it. The Chinese for the most part don’t attribute attacks, and don’t—they talk about the U.S. as being the biggest threat in cyberspace, and call the U.S. The Matrix and the most, you know, damaging force in cyberspace. But for the most part, don’t call out specific actors. So they kind of view it—the Chinese side is often in a kind of defensive crouch, basically saying, you know, who are you to judge us, and you guys are hypocrites, and everything else. So I think there are lots of reasons that make it hard. I think probably the way to do it is to try to look forward to these shared interests and this idea that we all benefited immensely from a global internet. We now have different views of how open that internet should be. But I think we still want to maintain—the most remarkable thing about it is that we can, you know, still communicate with people around the world, we can still learn from people around the world, we can still draw information, most information, from around the world. And we want to, you know, keep that, which is a—which is—you know, not to use a Chinese phrase—but is a win-win for everybody. FASKIANOS: Great. I see a raised hand from Austin Oaks. And I can’t get my roster up fast enough, so, Austin, if you can unmute and identify yourself. Q: So I’m Austin Oaks. And I come from the University of Wisconsin at Whitewater. And I used to live in Guangdong province in China. And I used to go visit Hong Kong and Macau, more Hong Kong, very often. 
And Hong Kong has this very free internet, which China doesn’t particularly like. Macau tends to be more submissive to Beijing than Hong Kong is. But the Chinese government has kind of started to put people in the Hong Kong government to kind of sway it into Beijing’s orbit more. So then how—so what is China doing in the cyberspace world for both of its special administrative regions? Because one is a lot easier to control than the other. SEGAL: Yeah. So I think the idea of Hong Kong’s internet being independent and free is—it’s pretty much ending, right? So the national security law covers Hong Kong and allows the government to increasingly censor and filter and arrest people for what they are posting. We saw pressure on U.S. companies to hand over data of some users. A lot of the U.S. companies say they’re going to move their headquarters or personnel out of Hong Kong because of those concerns. So, you know, it certainly is more open than the mainland is, but I think long-term trends are clearly pretty negative for Hong Kong. I expect Macau is headed in the same direction, but as you mentioned, you know, the politics of Macau is just so much different from Hong Kong that it’s less of a concern for the Chinese. FASKIANOS: Thank you. I’m going to take the next written question from Robert Harrison, a law student at Washburn University School of Law. My understanding is that there have been significant thefts of American small and medium-size business intellectual property by Chinese-based actors. This theft/transfer of knowledge may reduce the competitive edge of the original property holder. Are there any current efforts to curb IP thefts? Any ongoing analysis of the Belt and Road Initiative to evaluate the use of IP acquired by theft? SEGAL: Yeah. So, you know, as I mentioned, the U.S. tried to reach this agreement with China on the IP theft challenge. China held to it for about a year, and then essentially kind of went back on it. 
It’s been very hard to quantify the actual impact of what the theft has been. You know, there are numbers thrown around, a certain percent of GDP, or 250 billion (dollars) a year. There is what’s called the IP Commission, which is run out of the National Bureau of Asian Research and has been updating its report. But it’s very hard because, you know, a lot of the knowledge and data that’s stolen is tacit knowledge. Or, you know, it’s actual blueprints or IP, but they don’t have the tacit knowledge. So you can have the blueprints, but it’s then hard to turn from that to an actual product. And it’s hard in the civilian space to kind of track lots of products that seem stolen from U.S. products, as opposed to—on the military side you can look at, oh, here’s the Chinese stealth jet. It looks a lot like the U.S. stealth jet. Now, this could be physics. It could be intellectual property theft. But it’s harder on the commercial side to kind of put a number on it and see what the impact is. Although clearly, it’s had an impact. We do know that Chinese operators, you know, go after targets other than the U.S., right? So they certainly go—are active in Europe. We’ve seen them in Southeast Asia. Most of that is probably political espionage, not as much industrial espionage. Although, there has been—has been some. I don’t know of any specific cases where we can point to anything along the Belt and Road Initiative that, you know, seems in and of itself the outcome of IP theft.

FASKIANOS: I’m going to take a written question from Caroline Wagner, who is the Milton and Roslyn Wolf chair in international affairs at Ohio State University. Chinese actors seem to have incredibly pervasive links to track online discussions critical of China. Are these mostly bots, or are there human actors behind them?

SEGAL: So I’m going to interpret that to mean the net outside of China. So, yes. I think what we’re learning is there’s several things going on. Part of it is bots.
So they have, you know, a number of bots that are triggered by certain phrases. Some of it is human, but increasingly probably a lot of it is machine learning. So there was a story maybe last month in the Post, if I remember it correctly, about, you know, Chinese data analytics companies offering their services to local Ministry of State Security offices to basically kind of scrape and monitor U.S. platforms. And that is primarily going to be done through, you know, machine learning, and maybe a little human operations as well.

FASKIANOS: Thank you. And this is a bit of a follow-on, and then I’ll go to more. William Weeks, who is an undergraduate at Arizona State University, asks: What role does unsupervised machine learning play in China’s cyberspace strategy?

SEGAL: Yeah, it’s a good question. I don’t have a lot of details. You know, like everybody else, they are going to start using it on defense. There is a big push on what’s called military-civil fusion. You know, we know that they are trying to pull in from the private sector on AI, both for the defense and the offense side. But right now, all I can give you is kind of general speculation about how actors think about offense and defense with ML and AI. Not a lot of specifics from the Chinese here.

FASKIANOS: Thank you. OK, Morton Holbrook, who’s at Kentucky Wesleyan College.

Q: Yes. Following up on your comment about Hong Kong, about U.S. companies reconsidering their presence due to internet controls, what about U.S. companies in China, in Beijing and Shanghai? Do you see a similar trend there regarding internet controls, or regarding IPR theft?

SEGAL: I think, you know, for almost all firms that have been in China, this has been a constant issue. So it’s not particularly new. I think almost all of them have, you know, made decisions both about how to protect their intellectual property from theft, and how to maintain connections to the outside, to make them harder.
You know, VPNs were fairly widely used. Now they’re more tightly regulated. We know that the Chinese actually can attack VPNs. So I think, you know, those issues have been constant irritants. I think, you know, COVID and the lack of travel, the worry about getting kind of caught up in nationalist backlashes online to, you know, Xinjiang issues or if you refer to Taiwan incorrectly—those are probably higher concerns right now than these kind of more constant concerns about cyber and IP.

FASKIANOS: Thank you. Anson Wang, who’s an undergraduate at the University of Waterloo. We have three upvotes. Is China considered the major threat to U.S. hegemony because China is actively trying to replace the U.S. as the new global hegemon? Or simply because China is on a trajectory to get there, with or without an active intention of involving itself in other countries’ internal politics, the same way that the U.S. does?

SEGAL: Yeah. So I think this is a—you know, a larger question about what China wants in the world. And do we—you know, do we think it has a plan or ideology of replacing the U.S.? And does it want—or, would it be happy even with regional dominance? Does it just want to block U.S. interests and others’? It’s a big debate. You know, lots of people have contrasting views on where they think China is going. I’ll just use the cyber example. And I think here, you know, the Chinese started with wanting to block the U.S., and prevent the U.S. from criticizing China, and protect itself. I don’t think they had any desire to reshape the global internet. But I think that’s changed. I think under Xi Jinping they really want to change the definitions of what people think the state should do in this space. I think they want to change the shape of the internet. I don’t think they want to spread their model to every country, but if you want to build their model they’re certainly happy to help you.
And they don’t mind pushing, perhaps highlighting, in some cases exploiting the weaknesses they see in the U.S. as well.

FASKIANOS: OK. Thank you. I’m going to go to Helen You, who’s a student at NYU. It appears that governments are reluctant to restrict their cyber capabilities because they fundamentally do not want to limit their own freedom to launch cyberattacks. As a result, countries fail to follow voluntary norms on what is permissible in cyberspace. To what extent are industry standards influencing international cybersecurity norms? And what incentives would need to be in place to move these conversations forward?

SEGAL: Yeah, that’s a great point. I mean, I think that’s one of the reasons why we haven’t seen a lot of progress, is because states don’t have a lot of reason to stop doing it. The costs are low, and the benefits seem to be high. Now, I understand your question in two separate ways. One, there is a kind of private attempt to push these norms, basically arguing that states are going too slow. Part of that was promoted by Microsoft, the company, right? So it promoted the idea of what it was calling the Digital Geneva Convention, and then it has been involved in what’s now known as the Paris Call, which defines some of these rules, and which the U.S. just signed onto, along with some other states. But again, the norms are pretty vague, and haven’t seemed to have that much effect. There’s a thing called the Global Commission on the Stability of Cyberspace, which the Dutch government helped fund but was run mainly through think tanks and academics. It also has a list of norms. So there is a kind of norm entrepreneurship going on. And those ideas are slowly kind of bubbling out there. But you need to see changes in state behavior to get there. That’s when we know that norms matter. And that we really haven’t seen.
On the—there is a lot of work, of course, going on on the standards of cybersecurity, and what companies should do, how those should be defined. And that happens both domestically and internationally. And of course, the companies are very involved in that. And, you know, that is much further along, right? Because that has to do with regulation inside of markets, although there’s still, you know, a fair amount of difference between the U.S. and the EU and other close economies about how those standards should be defined, who should do the defining, how they should be implemented.

FASKIANOS: Thank you. I’m going to group two questions, from Dr. Mursel Dogrul of the Turkish National Defense University: In a recent article we focused on the blockchain literature expansion of superpowers. In terms of publications and citations, China clearly outperformed the United States and Russia. Do you believe this technological advancement will have an impact on the cybersecurity race? And then Michael Trevett—I don’t have an affiliation—wanted you to speak a little bit more about the cyber triangle with Russia. How are China and Russia coordinating and cooperating?

SEGAL: Yeah. So on the first question, you know, clearly, as I briefly mentioned in my opening comments, the Chinese are pushing very hard on the technologies they think are going to be critical to the—to the future competition in this space—blockchain, quantum, AI. The Chinese have made a lot of advances on quantum communication and quantum key distribution. They’re probably behind the U.S. on quantum computing, but it’s hard to say for sure. And blockchain is a space where the Chinese have developed some usages and are rolling some test cases out on the security side and the internet platform side. On the China-Russia question, so, closer cooperation. Most of it has been around cyber sovereignty, and the ideas of kind of global governance of cyberspace.
The Chinese were, you know, pretty helpful at the beginning stages, when Russia started using more technological means of censoring and controlling the Russian internet. So helping kind of build some of the—or, export some of the technologies used in China’s Great Firewall, which the Russians could help develop. Russia is pretty much all-in with Huawei on 5G. And so there is a lot of cooperation there. Although, the Russians are also worried about, you know, Chinese espionage against Russian technology and other secrets. They did sign a nonaggression cyber pact between the two, but both sides continue to hack each other and steal each other’s secrets. And we have not seen any evidence of cooperation on the operations side, on intelligence. With them doing more and more military exercises together, I would suspect we would perhaps start seeing some suggestion that they were coordinating on the military side in cyber. But the last time I looked, I didn’t really see any—I did not see any analysis of that.

FASKIANOS: Thank you. Next question from Jeffrey Rosensweig, who is the director of the program for business and public policy at Emory University.

Q: Adam, I wonder if you could fit India in here anywhere you would like to? Because I think it’ll be the other great economy of the future.

SEGAL: Yeah. So India’s a—you know, a really interesting actor in this space, right? So, you know, India basically thinks that it has two major cyber threats—Pakistan being one, and China being the other. China, you know, was reportedly behind some of the blackouts in Mumbai after the border clash. I am somewhat skeptical about that reporting, but it’s certainly a possibility, and there’s no reason to doubt the Chinese have been mapping critical infrastructure there. India pushed back on TikTok and ByteDance. You know, also concerns about data control and other things. There is a long history of kind of going back and forth on Huawei.
The intelligence agencies have not really wanted to use it, but others have wanted its help to, you know, bridge the digital divide and build out pretty quickly. India right now is talking about its own type of 5G. But from a U.S. perspective, you know, I think the most important thing—and this is often how India comes up—is that, you know, we want India to be an amplifier, a promoter, of a lot of these norms on cyber governance, because it is a, you know, developing, multiethnic, multiparty democracy. And so we want it not to be just the U.S.’ voice. Now, India’s a pretty complicated, difficult messenger for those things these days, right? India leads the world in internet shutdowns, and we’ve seen a lot of harassment of opposition leaders and other people who are opposed to Modi. So it’s not going to be easy. But I think the U.S. for a long time has hoped that we could forge a greater understanding on the cyber side with India.

FASKIANOS: Great. I’m going to take the next question from Michael O’Hara, who is a professor at the U.S. Naval War College. And I’m going to shorten it. He asks about China’s fourteenth five-year plan, for 2021 to 2025. It includes a section titled “Accelerate digitalization-based development and construct a digital China.” Do you see their five-year plan as a useful way of thinking about China’s future in cyberspace?

SEGAL: Yes. So we’re on the same page; the digital plan came out two or three weeks ago. It was just translated. Yeah, I mean, the plan is useful. All Chinese plans are useful in the sense that they certainly give us clear thinking about the direction that China wants to go, and the importance it puts on a topic. You know, the implementation and bureaucratic obstacles and all those other things are going to play a role. But as I mentioned, I think, you know, the Chinese economy is becoming increasingly digitalized.
And in particular, they want to digitize, you know, more and more of the manufacturing sector and transportation, mining, other sectors that are traditionally not, you know, thought of as being digital, but the Chinese really want to move into that space. Now, from a cybersecurity perspective, that, you know, raises a whole range of new vulnerabilities and security issues. And so I think that’s going to be very high in their thinking. And just today I tweeted a story that they held a meeting on thinking about cybersecurity in the metaverse. So, you know, they’re looking forward, and cybersecurity is going to be a very high concern of people.

FASKIANOS: Well, we couldn’t have the Naval Academy without the U.S. Air Force Academy. So, Chris Miller, you wrote your question, but you’ve also raised your hand. So I’m going to ask to have you articulate it yourself.

Q: Well, actually, I changed questions, Irina. Adam, thank you.

FASKIANOS: Oh, OK. (Laughs.) But still, the Air Force Academy.

Q: So two quick questions. I’ll combine them. One is: I’m curious how you see the new cyber director—national cyber director’s role changing this dynamic, if at all, or changing the parts of it on our side of the Pacific that we care about. And second of all, I’m curious how you see China viewing the Taiwanese infrastructure that they probably desire, whether or not they eventually take it by force or by persuasion.

SEGAL: Yeah. So I don’t think the NCD changes the dynamic very much. You know, I think there’s lots of—you know, everyone is watching to see how the NCD and the National Security Council and CISA, the Cybersecurity and Infrastructure Security Agency, work out the responsibilities among the three of them, which will have an impact, you know, on making us more secure. And, you know, Chris Inglis, the head of the NCD, has given lots of talks about how they’re going to manage and work together. And I think we’re beginning to see some signs of that.
But I think that’s probably the most direct impact it’ll have on the dynamic. Your second question, you know, I think is primarily about, you know, Taiwan Semiconductor. And, you know, do the Chinese eventually decide, well, chips are so important, and the U.S. is working so hard to cut us off, that, you know, among all the other reasons that we might want to seize Taiwan, you know, that one is going to get moved up? You know, I think it’s a possibility. I think it’s a very low possibility. I do think we don’t know what the red lines are in the tech war, right? You know, there’s been talk about cutting off SMIC, the Shanghai-based manufacturer of integrated circuits, also a very important company to the Chinese. Would that push the Chinese to do more aggressive or assertive things in this space? You know, what is it that we do in that space that eventually pulls them out? But I think it’s very hard—(audio break)—that they could capture TSMC in a shape that would be useful. Am I breaking up?

FASKIANOS: Just a little bit, but it was fine. We have you now.

SEGAL: Yeah. That you could capture TSMC in a shape that would be useful, right? I mean, there was that piece, I think, that was written by an Army person, maybe in Parameters, arguing that, you know, the U.S. and Taiwan’s plan should basically be, you know, to sabotage TSMC in case there’s any invasion, and to make clear that that’s what it’s going to do. But even without that risk, you’re still dealing with, you know, any damage, and then the flight of people out of Taiwan, because the Taiwanese engineers are really important. So it would be very high risk, I think, that they could capture it and then use it.

FASKIANOS: Thank you. Well, I am sorry that we couldn’t get to all the questions, but this has been a great conversation. Adam Segal, thank you very much for being with us. You know, you’re such a great resource.
I’m going to task you after this—there was a question from Andrew Moore at the University of Kansas about other resources and books that you would suggest to learn more about China and cybersecurity. So I’m going to come to you after this for a few suggestions, which we will send out to the group along with the link to this video and the transcript. So, Andrew, we will get back to you and share with everybody else. And so, again, you can follow Dr. Segal on Twitter at @adschina. Is that correct, Adam?

SEGAL: That’s right.

FASKIANOS: OK. And also, to receive blog alerts for Net Politics you can go to CFR.org. Our next webinar will be on Wednesday, February 9, at 1:00 p.m. Eastern Time. And we’re excited to have Patrick Dennis Duddy, director of the Center for Latin American and Caribbean Studies at Duke, to talk about democracy in Latin America. So thank you for being with us. You can follow us on Twitter at @CFR_Academic. Visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for new research and analysis on other global issues. And again, Adam, thank you very much for being with us. We appreciate it.

SEGAL: My pleasure.

FASKIANOS: Take care.

FASKIANOS: Welcome to the first session of the Winter/Spring 2022 CFR Academic Webinar Series. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR. Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/academic. As always, CFR takes no institutional positions on matters of policy. We are delighted to have Adam Segal with us to discuss cyberspace and U.S.-China relations. Adam Segal is CFR’s Ira A. Lipman chair in emerging technologies and national security and director of the Council’s Digital and Cyberspace Policy program. Previously, he served as an arms control analyst for the China Project at the Union of Concerned Scientists.
He has been a visiting scholar at Stanford University’s Hoover Institution, MIT’s Center for International Studies, the Shanghai Academy of Social Sciences, and Tsinghua University in Beijing. And he’s taught courses at Vassar College and Columbia University. Dr. Segal currently writes for the CFR blog Net Politics—you should all sign up for those alerts, if you haven’t already. And he is the author of several books, including his latest, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age. So, Adam, thanks very much for being with us. We can begin with a very broad brush at cyberspace, the role cyberspace plays in U.S.-China relations, and have you make a few comments on the salient points. And then we’ll open it up to the group for questions.

SEGAL: Great. Irina, thanks very much. And thanks, everyone, for joining us this afternoon. I’m looking forward to the questions and the discussion. So broadly, I’m going to argue that the U.S. and China have the most far-reaching competition in cyberspace of any two countries. And that competition goes all the way from the chip level to the rules of the road—so global governance all the way down to the chips that we have in all of our phones. Coincidentally, and nicely timed, last week the Washington Post did a survey of their network of cyber experts about who was the greater threat to the United States, China or Russia. And it was actually almost exactly evenly split—forty to thirty-nine. But I, not surprisingly, fell into the China school. And my thinking is captured very nicely by a quote from Rob Joyce, who’s a director at the National Security Agency: that Russia is like a hurricane while China is like climate change. So Russia causes sudden, kind of unpredictable damage. But China represents a long-term strategic threat. When we think about cyberspace, I think it’s good to think about why it matters to both sides. And on the Chinese side, I think there are four primary concerns.
The first is domestic stability, right? So China is worried that the outside internet will influence domestic stability and regime legitimacy. And so that’s why it’s built an incredibly sophisticated system for controlling information inside of China that relies on technology, and intermediary liability, and other types of regulation. Second, China is worried about technological dependence on other players, in particular the U.S., for semiconductors, network equipment, and other technologies. And it sees cybersecurity as a way of reducing that technological dependence. Third, China has legitimate cybersecurity concerns like every other country. They’re worried about attacks on their networks. And the Snowden revelations—from Edward Snowden, the former NSA contractor—showed that the U.S. has significant cyber capabilities, and it has attacked and exploited vulnerabilities inside of China. And while the Chinese might once have thought that they were less vulnerable to cyberattacks given the shape of the Chinese network in the past, I think that probably changed around 2014-2015, especially as the Chinese economy has become increasingly dependent on ecommerce and digital technology. GDP is now about a third dependent on digital technology. So they’re worried about the same types of attacks the United States is worried about. And then, fourth and finally, China does not want the United States to be able to kind of define the rules of the road globally on cyber, or create containing alliances around digital or cyber issues, and wants to constrain the ability of the U.S. to freely maneuver in cyberspace. Those are China’s views. The U.S. has stated that it’s working for a free, open, global, and interoperable internet, or an interoperable cyberspace. But when it looks at China, it has a number of specific concerns. The first is Chinese cyber operations, in particular Chinese espionage, and in particular within that, Chinese industrial espionage, right?
So the Chinese are known for being the most prolific operators stealing intellectual property. But they’re also hacking into political networks, going after think tanks, hacking activists—Uighur activists, Tibetan activists, Taiwanese independence activists. We know they’re entering into networks to prepare the battlefield, right—so, to map critical infrastructure in case there is a kinetic conflict with the United States—perhaps in the South China Sea or over the Taiwan Strait—and they want to be able to deter the U.S., or perhaps cause destructive attacks on the U.S. homeland, or U.S. bases in South Korea or Japan. The U.S. is also extremely concerned about the global expansion of Chinese tech firms and Chinese platforms, for the collection of data, right? The U.S. exploited the globalization of U.S. tech firms. Again, that was something that we learned from the Snowden documents—that the U.S. had both legal and extralegal measures to be able to get data from users all around the world because of its knowledge of and relationship to U.S. tech firms. And there’s no reason to believe that the Chinese will not do the same. Now, we hear a lot about, you know, Huawei and the national intelligence law in China that seems to require Chinese companies to turn over data. But it would be very hard to believe that the Chinese would not want to do the same thing that the U.S. has done, which is exploit these tech platforms. And then finally, there is increasingly a framing of this debate as one over values or ideology, right? That democracies use cybertechnologies or digital technologies in a different way than China does. China’s promoting digital authoritarianism, which has to do with control of information as well as surveillance. And the U.S. has really pushed back and said, you know, democracies have to describe how we’re going to use these technologies. Now, the competition has played itself out both domestically and internationally.
The Chinese have been incredibly active domestically. Xi Jinping declared that cybersecurity was national security. He took control of a small leadership group that became a separate commission. The Cyberspace Administration of China was established and given lots of powers on regulating cybersecurity. We had the creation of three important laws—the cybersecurity law, the data security law, and the private—personal information protection law. We see China pushing very hard on specific technologies they think are going to be important for this competition, especially AI and quantum. And we see China pushing diplomatically, partly through the idea of what’s called cyber sovereignty. So not the idea that the internet is free and open and should be somewhat free from government regulation, but instead that cyberspace, like every other space, is going to be regulated, and that states should be free to do it as they see fit, as fits their own political and social characteristics, and that they should not be criticized by other states. They’ve promoted this view through U.N. organizations in particular. And they’ve been working with the Russians on a kind of treaty on information and communication technologies that would include not only cybersecurity, but their concerns about content and the free flow of information. The U.S. right now is essentially continuing a policy that was started under the Trump administration. So part of that is to try and stop the flow of technology to Chinese firms, and in particular to handicap and damage Huawei, the Chinese telecom supplier, and to put pressure on friends to not use Huawei. But the most important thing it did was put Huawei on an entity list, which cut it off from semiconductors, most importantly from Taiwan Semiconductor, which has really hurt Huawei’s products. The U.S. tried to come to an agreement with China about what types of espionage are considered legitimate. And not surprisingly, the U.S.
said there was good hacking and bad hacking. And the good hacking is the type of hacking that the U.S. tends to do, and the bad hacking is the type of hacking that the Chinese tend to do. So, basically, the argument was, well, all states are going to conduct political and military espionage, but industrial espionage should be beyond the pale. Or you can think of it the way President Obama put it: you can hack into my iPhone to get secrets about what I’m discussing with my Cabinet, but you can’t hack into Apple to get the secrets about how iPhones are made to give to Huawei. There was an agreement formed in 2015, where both sides said they weren’t going to engage in industrial espionage—cyber industrial espionage. For about a year and a half, that agreement seemed to hold. And then—and then it fell apart. The Chinese are engaged in that activity again. And as a result, the U.S. has once again started indicting Chinese hackers, trying to enforce that norm through indictments and naming and shaming. The U.S. probably also—although I have no evidence of it—has engaged in disrupting Chinese hackers. So we know under the Trump administration, Cyber Command moved to a more forward-leaning posture, called defending forward or persistent engagement. We’ve heard about some of those operations against Russian or Iranian actors. John Bolton, before he left the NSC, suggested they were being used against Chinese cyberhackers as well. So what comes next? It’s often hard, if not impossible, to end cyber talks on a positive note, but I will try. So I think from a U.S. perspective, clearly the kind of tech pressure, not only on Huawei but on a broader range of companies, is going to continue. The Biden administration has shown no signal that it is going to roll any of that back. And it’s actually expanded it to more companies working on quantum and other technologies.
The Biden administration has worked much more actively than the Trump administration on building alliances around cybersecurity. So in particular, the Trade and Technology Council with the Europeans and the Quad, with Australia, India, and Japan, both have discussions on cybersecurity norms. So how do you actually start imposing them? Now, where you would hope that the U.S. and China would start talking to each other again is where I hope the Biden administration can eventually get to. So there were some very brief discussions in the Obama administration. The Trump administration had one round of talks, but they were not particularly useful. The Chinese were very unwilling to bring people from the People’s Liberation Army to actually kind of talk about operations, and generally were in denial that they had any cyber forces. But you want both sides really to start talking more about where the threshold for the use of force might be in a cyberattack, right? So if you think about it—most of what we’ve seen, as I said, is spying. And so that is kind of—is below the threshold for use of force or an armed attack, the thing that generally triggers kinetic escalation. But there’s no general understanding of where that threshold might be. And in particular, during a crisis—let’s say, in the Strait or in the South China Sea—you want to have some kind of clarity about where that line might be. Now, I don’t think we’re ever going to get a very clear picture, because both sides are going to want to be able to kind of skate as close to it as possible, but we would certainly want to have a conversation with the Chinese about how we might signal that. Can we have hotlines to discuss those kinds of thresholds? Also, we want to make sure that both sides aren’t targeting each other’s nuclear command and control systems, right, with cyberattacks, because that would make any crisis even worse.
There’s some debate about whether the Chinese command and control systems are integrated with civilian systems. So things that the U.S. might go after could then perhaps spill over into the Chinese nuclear system, which would be very risky. So you want to have some talks about that. And then finally, you probably want to talk—because Chinese open-source writing seems to suggest that they are not as concerned about escalation in cyber as we are. There’s been a lot of debate in the U.S. about whether escalation is a risk in cyber. But the Chinese don’t actually seem to think it’s much of a risk. And so it would be very useful to have some discussions on that point as well. I’ll stop there, Irina. Looking forward to the questions.

FASKIANOS: Thank you, Adam. That was great analysis and overview and specifics. So we’re going to go first to Babak Salimitari, an undergrad student at the University of California, Irvine. So please be sure to unmute yourself.

Q: I did. Can you guys hear me?

SEGAL: Yeah.

Q: Thank you for doing this. I had a question on the Beijing Olympics that are coming up. Recently they told the athletes to use, like, burner phones, because the health apps are for spying, or they’ve got, like, security concerns. What specific concerns do they have regarding those apps, and what do they do?

SEGAL: So I think the concerns are both specific and broad. I think there was a concern that one of the apps that all of the athletes had to download had significant security vulnerabilities. I think that was a study done by Citizen Lab at the University of Toronto. And it basically said, look, this is a very unsafe app, and, as you said, it allowed access to health data and other private information, and anyone could probably fairly easily hack that. So, you know, if you’re an athlete or anyone else, you don’t want that private information being exposed to or handled by others.
Then there's, I think, the broader concern that probably anybody who connects to a network in China, that's going to be unsafe. And so, you know, because everyone is using wi-fi at the Chinese Olympics, and those systems are going to be monitored, those—your data is not going to be safe. You know, I'm not all that concerned for most athletes. You know, there's probably not a lot of reason why Chinese intelligence or police are interested in them. But there are probably athletes who are concerned, for example, about Xinjiang and the treatment of the Uighurs, or, you know, maybe Tibetan activists or other things, and maybe have somewhere in the back of their minds some idea about making statements there, or making statements when they get back to the U.S. or safer places. And for those people, definitely I would be worried about the risk of surveillance and perhaps using that data for other types of harassment. FASKIANOS: I'm going to take the written question from Denis Simon, who received two upvotes. And Denis is senior advisor to the president for China affairs and professor of China business and technology. When you say "they" with respect to Chinese cyber activity, who is "they"? To what extent are there rogue groups and ultranationalists as well as criminals involved? SEGAL: Yes. Denis will send me a nasty email if I don't mention that Denis was my professor. We're not going to go into how many years ago, but when I was at Fletcher. So, and Denis was one of the first people I took—was the first person I took a class on Chinese technology with. So, you know, and then I ended up here. So I think, "they." So it depends what type of attacks we're talking about. On the espionage side, the cyber espionage side, what we've generally seen is that a lot of that was moved from the PLA to the Ministry of State Security. The most recent indictments include some actors that seem to be criminal or at least front organizations. So some technology organizations. 
We do know that there are, you know, individual hackers in China who will contract their services out. There were in the '90s a lot of nationalist hacktivist groups, but those have pretty much dissipated except inside of China. So we do see a lot of nationalist trolls and others going after people inside of China, journalists and others, for offending China or other types of violations. So "they" is kind of a whole range of actors depending upon the types of attack we're talking about. FASKIANOS: Thank you. So our next question we're going to take from Terron Adlam, who is an undergraduate student at the University of Delaware. And if you can unmute yourself. Q: Can you hear me now? FASKIANOS: Yes. Q: Hi. Good evening. Yes. So I was wondering, do you think there will be a time where we have net neutrality? Like, we have a peace agreement amongst every nation? Because I feel like, honestly, if Russia, U.S., Mexico, any other country out there that have a problem with each other, this would be, like, there's rules of war. You don't biohazard attack another country. Do you think—(audio break)—or otherwise? SEGAL: So I think it's very hard to imagine a world where there's no cyber activity. So there are discussions about can you limit the types of conflict in cyberspace, through the U.N. primarily. And they have started to define some of the rules of the road that are very similar to other international law applying to armed conflict. So the U.S. position is essentially that international law applies in cyberspace, and things like international humanitarian law apply in cyberspace. And you can have things like, you know, neutrality, and proportionality, and distinction. But they're hard to think about in cyber, but we can—that's what we should be doing. The Chinese and Russians have often argued we need a different type of treaty, that cyber is different. 
But given how valuable it seems, at least on the espionage side so far, I don't think it's very likely we'll ever get an agreement where we have no activity in cyberspace. We might get something that says, you know, certain types of targets should be off limits. You shouldn't go after a hospital, or you shouldn't go after, you know, health data, things like that. But not a, you know, world peace kind of treaty. FASKIANOS: Thank you. So I'm going to take the next question from David Woodside at Fordham University. Three upvotes. What role does North Korea play in U.S.-China cyber discussions? Can China act outside of cybersecurity agreements through its North Korean ally? SEGAL: Yeah. I think, you know, like many things with North Korea, the Chinese probably have a great deal of visibility. They have a few levers that they really don't like using, but not a huge number. So, in particular, if you remember when North Korea hacked Sony because of the—you know, the movie from Seth Rogen and James Franco about the North Korean leader—those hackers seemed to be located in northern China, in Shenyang. So there was some sense that the Chinese probably could have, you know, controlled that. Since then, we have seen a migration of North Korean operators out of kind of northern China. They now operate out of India, and Malaysia, and some other places. Also, Russia helped build another cable to North Korea, so the North Koreans are not as dependent on China. I think it's very unlikely that the Chinese would kind of use North Korean proxies. I think the trust in North Korean operators is very low, that they would, you know, have China's interest in mind or that they might not overstep, that they would bring a great deal of kind of blowback to China there. So there's been very little kind of—I would say kind of looking the other way earlier in much of North Korea's actions. These days, I think probably less. FASKIANOS: Thank you. 
I'm going to take the next question from Joan Kaufman at Harvard University. And if you can unmute yourself. Q: Yes. Thank you very much. I'm also with the Schwarzman Scholars program, the academic director. And I wanted to ask a follow-up on your point about internet sovereignty. And, you know, the larger global governance bodies and mechanisms for, you know, internet governance and, you know, China's role therein. I know China's taken a much more muscular stance on, you know, the sovereignty issue, and justification for firewalls. So there's a lot—there are a lot of countries that are sort of in the me-too, you know, movement behind that, who do want to restrict the internet. So I just—could you give us a little update on what's the status of that, versus, like, the NETmundial people, who call for the total openness of the internet. And where is China in that space? How much influence does it have? And is it really—do you think the rules of the road are going to change in any significant way as a result of that? SEGAL: Yeah. So, you know, I think in some ways actually China has been less vocal about the phrase "cyber sovereignty." At the Wuzhen Internet Conference, which is kind of—which China developed as a separate platform for promoting its ideas—you don't see the phrase used as much, although the Chinese are still interjecting it, as we mentioned, in lots of kind of U.N. documents and other ideas. I think partly they don't—they don't promote it as much because they don't have to, because the idea of cyber sovereignty is now pretty widely accepted. And I don't think it's because of Chinese actions. I think it's because there is widespread distrust and dissatisfaction with the internet that, you know, spans all types of regime types, right? Just look at any country, including the United States. We're having a debate about how free and open the internet should be, what role firms should play in content moderation, should the government be allowed to take things down? 
You know, we've seen lots of countries passing fake news or online content moderation laws. There's a lot of concern about data localization that countries are doing because of purported economic or law enforcement reasons. So I don't think the Chinese really have to push cyber sovereignty that much, because it is very attractive to lots of countries for specific reasons. Now, there is still, I think, a lot of engagement China has with other countries around what we would call cyber sovereignty, because China—countries know that, you know, China both has the experience with it, and will help pay for it. So certainly around the Belt and Road Initiative and other developing economies we do see, you know, the Chinese doing training of people on media management, or online management. There was this story just last week about, you know, Cambodia's internet looking more like the Chinese internet. We know Vietnam copied part of its cybersecurity law from the Chinese law. There was a story maybe two years ago about Huawei helping in Zambia and Zimbabwe, if I remember correctly, in surveilling opposition members. So I think China, you know, still remains a big force around it. I think the idea still is cyber sovereignty. I just don't think we see the phrase anymore. And I think there's lots of demand pull. It's not China pushing it on other countries; I think lots of countries have decided, yeah, of course we're going to regulate the internet. FASKIANOS: Thank you. Next question, from Ken Mayers, senior adjunct professor of history and political science at St. Francis College. Following up on Denis Simon's question, to what extent do Chinese state actors and U.S. state actors share concerns about asymmetric threats to cybersecurity? Is there common ground for discussion? And I'm going to—actually, I'll stop there, because— SEGAL: All right. 
So I'm going to interpret asymmetric threats as meaning kind of cyber threats from other actors, meaning kind of nonstate or terrorist actors, or criminal actors. So I think there could be a shared interest. It's very hard to operationalize. Probably about six or seven years ago I wrote a piece with a Chinese scholar that said, yes, of course we have a shared interest in preventing the proliferation of these weapons to terrorist actors and nonstate actors. But then it was very hard to figure out how you would share that information without exposing yourself to other types of attacks, or perhaps empowering your potential adversary. On cyber—for example, on ransomware, you would actually expect there could be some shared interest, since the Chinese have been victims of a fair number of Russian ransomware attacks. But given the close relationship between Putin and Xi these days, it's hard to imagine that the U.S. and China are going to gang up on Russia on ransomware. So, again, I think there could be, it's just very hard to operationalize. FASKIANOS: Great. Thank you. So just to follow on from Skyler Duggan, who is an undergraduate at the University of Waterloo. Likewise, to these questions, how do we differentiate individual criminal groups from the state? And how can we be sure this isn't China just trying to abdicate—or, one party, he doesn't specify, trying to abdicate the responsibility? SEGAL: Yeah, I think—because there's—one of the challenges faced by the U.S. and other liberal democracies is that we tend to primarily keep fairly tight legal control over cyber operations. They tend to be, you know, intelligence operations or military operations. So Title 10 or Title 50. There's kind of a whole set of legal norms around it. The U.S. does not rely on proxy actors. And other, you know, liberal democracies tend not to either. And U.S. adversaries in this space tend to do so. We know Iran does. We know Russia does. We know China does, although less than the others. 
Now, according to this discussion group that I mentioned before at the U.N., the group of—what's called the group of governmental experts, one of the norms that all the actors agreed upon was the norm of state responsibility, which is a common one in international law, that you are responsible for whatever happens in your territory. So using proxies should not, you know, be able to give you an out. You shouldn't be able to say, well, it's happening from our territory, we just—you know, we don't know who they are and we can't control them. But, you know, in operation that norm is being fairly widely ignored. Now, the other problem, of course, is how do you actually decide who the actor is, the attribution problem, right? So here, you know, a lot of people are basically saying, well, we have to rely on the U.S. or the U.K. or others to say, well, you know, we say it's these actors, and how do we know—how do we know for sure? Now, attribution is not as hard as we once thought it was going to be. When I first, you know, started doing the research for the book that Irina mentioned, attribution was considered, you know, a pretty big challenge. But now, you know, there's a fairly high expectation that the U.S. will be able to eventually identify who's behind an attack. Now, it may take some time. And we may not be able to completely identify who ordered the attack, which is, you know, as you mentioned, the problem with the proxies. But it's also not completely reliant on digital forensics. It's not just the code or the language of the keyboard. All those things can be manipulated, don't necessarily give you proof. Lots of times the U.S. is pulling in other intelligence—like, human intelligence, signals intelligence, other types of gathering. So, you know, part of it is how much do we believe the attribution, and then how much of it is—you know, what can you do with it afterwards? And, you know, I don't think the proxy problem is going to go away. 
FASKIANOS: Great. So I'm going next to Tim Hofmockel's question. It's gotten seven upvotes. He's a graduate student at Georgetown University. To flip Denis Simon's question: Who should the "we" be? To what extent should the U.S. intelligence community and the Department of Defense cooperate on offensive cyber operations? And how would we signal our intentions in a crisis given the overlap in authorities between the intelligence community and DOD? SEGAL: Yeah. I mean, so right now NSA and Cyber Command are dual-hatted, meaning that one person is in charge of both of them, General Nakasone. So to some extent that could theoretically help deconflict between kind of intelligence gathering, offensive operations, and kind of signaling to the Chinese. But it's unclear. Signaling in cyber so far seems to be kind of developing and unknown. That seems to be one of the big theories behind the U.S. taking on more of these kinds of operations and, in fact, kind of bringing the fight to the Chinese: a very kind of sociological understanding of deterrence, that over time both sides will kind of understand where those red lines are by engaging and seeing how each acts. You know, others have talked about could you create some kind of watermark on the actual attack or vulnerability, so that the—you know, you might discover some type of malware in your system and there'd be, like, a little, you know, NFT, maybe, of sorts, that says, you know, the U.S. government was here. We're warning you not to do this thing. You know, a lot of these have, you know, kind of technical problems. But the question of signaling I think is really hard, and that's part of the reason why, you know, I think these discussions are so important, that at least we have a sense that we're talking about the same types of things, and the same general set of tools. But I think signaling through cyber is going to be really hard. It's going to be mostly other types of signaling. 
FASKIANOS: Next question from Maryalice Mazzara. She's the director of educational programs at the State University of New York's Office of Global Affairs. How can people who are working with China and have a very positive relationship with China balance the issues of cybersecurity with the work we are doing? Are there some positive approaches we can take with our Chinese colleagues in addressing these concerns? SEGAL: Good question, Ali. How are you? So I do think there are forward-looking things that we can talk about. You know, several of the questions have asked, are there shared interests here? And I do think there are shared interests. You know, we mentioned the proliferation one. We mentioned the nonstate actors. You know, there is a lot of language in the most recent statement from the Chinese government about—you know, that the internet should be democratic and open. I don't think they mean it in the same way that we do, but we can, I think, certainly use that language to have discussions about it and hopefully push toward those ends. But I think it is hard, partly because of government choices, right? The U.S. government chooses to attribute lots of attacks to China and be very public about it. The Chinese for the most part don't attribute attacks. They talk about the U.S. as being the biggest threat in cyberspace, and call the U.S. the Matrix and the most, you know, damaging force in cyberspace. But for the most part, they don't call out specific actors. So the Chinese side is often in a kind of defensive crouch, basically saying, you know, who are you to judge us, and you guys are hypocrites, and everything else. So I think there are lots of reasons that make it hard. I think probably the way to do it is to try to look forward to these shared interests and this idea that we all benefitted immensely from a global internet. We now have different views of how open that internet should be. 
But I think we still want to maintain—the most remarkable thing about it is that we can, you know, still communicate with people around the world, we can still learn from people around the world, we can still draw information, most information, from around the world. And we want to, you know, keep that, which is a—which is—you know, not to use a Chinese phrase—but is a win-win for everybody. FASKIANOS: Great. I see a raised hand from Austin Oaks. And I can't get my roster up fast enough, so, Austin, if you can unmute and identify yourself. Q: So I'm Austin Oaks. And I come from the University of Wisconsin at Whitewater. And I used to live in Guangdong province in China. And I used to go visit Hong Kong and Macau, more Hong Kong, very often. And Hong Kong has this very free internet, which China doesn't particularly like. Macau tends to be more submissive to Beijing than Hong Kong does. But the Chinese government has kind of started to put people in the Hong Kong government to kind of sway the government more into Beijing's orbit. So then how—so what is China doing in the cyberspace world for both of its special administrative regions? Because one is a lot easier to control than the other. SEGAL: Yeah. So I think the idea of Hong Kong's internet being independent and free is pretty much ending, right? So the national security law covers Hong Kong and allows the government to increasingly censor and filter and arrest people for what they are posting. We saw pressure on U.S. companies to hand over data of some users. A lot of the U.S. companies say they're going to move their headquarters or personnel out of Hong Kong because of those concerns. So, you know, it certainly is more open than the mainland is, but I think long-term trends are clearly pretty negative for Hong Kong. I expect Macau is headed the same direction, but as you mentioned, you know, the politics of Macau are just so much different from Hong Kong that it's less of a concern for the Chinese. 
FASKIANOS: Thank you. I'm going to take the next written question from Robert Harrison, a law student at Washburn University School of Law. My understanding is that there have been significant thefts of American small and medium-size business intellectual property by China-based actors. This theft/transfer of knowledge may reduce the competitive edge of the original property holder. Are there any current efforts to curb IP theft? Any ongoing analysis of the Belt and Road Initiative to evaluate the use of IP acquired by theft? SEGAL: Yeah. So, you know, as I mentioned, the U.S. tried to reach this agreement with China on the IP theft challenge. China held to it for about a year, and then essentially kind of went back on it. It's been very hard to quantify the actual impact of the theft. You know, there are numbers thrown around, a certain percent of GDP, or 250 billion (dollars) a year. There is what's called the IP Commission, which is run out of the National Bureau of Asian Research, that has been updating its report. But it's very hard because, you know, a lot of the knowledge and data is tacit knowledge. Or, you know, what's stolen is actual blueprints or IP, but they don't have the tacit knowledge. So you can have the blueprints, but it's then hard to turn from that to an actual product. And it's hard in the civilian space to kind of track lots of products that seem stolen from U.S. products, as opposed to the military side, where you can look at, oh, here's the Chinese stealth jet. It looks a lot like the U.S. stealth jet. Now, this could be physics. It could be intellectual property theft. But it's harder on the commercial side to kind of put a number on it and see what the impact is. Although clearly, it's had an impact. We do know that Chinese operators, you know, go after other targets other than the U.S., right? So they certainly are active in Europe. We've seen them in Southeast Asia. 
Most of that is probably political espionage, not as much industrial espionage. Although, there has been some. I don't know of any specific cases where we can point to anything along the Belt and Road Initiative that, you know, seems in and of itself the outcome of IP theft. FASKIANOS: I'm going to take a written question from Caroline Wagner, who is the Milton and Roslyn Wolf chair in international affairs at Ohio State University. Chinese actors seem to have incredibly pervasive links to track online discussions critical of China. Are these mostly bots, or are there human actors behind them? SEGAL: So I'm going to interpret that to mean for the net outside of China. So, yes. I think what we're learning is there's several things going on. Part of it is bots. So they have, you know, a number of bots that are triggered by certain phrases. Some of it is human, but increasingly probably a lot of it is machine learning. So there was a story maybe last month in the Post, if I remember it correctly, about, you know, Chinese analytical software and data companies offering their services to the local Ministry of State Security to basically kind of scrape and monitor U.S. platforms. And that is primarily going to be done through, you know, machine learning, and maybe a little human operations as well. FASKIANOS: Thank you. And this is a bit of a follow-on, and then I'll go to more. William Weeks, who is an undergraduate at Arizona State University, asks: What role does unsupervised machine learning play in China's cyberspace strategy? SEGAL: Yeah, it's a good question. I don't have a lot of details. You know, like everybody else there, they are going to start using it on defense. There is a big push on what's called military-civil fusion. You know, we know that they are trying to pull in from the private sector on AI, both for the defense and the offense side. 
But right now, all I can give you is kind of general speculation about how actors think about offense and defense with ML and AI. Not a lot of specifics from the Chinese here. FASKIANOS: Thank you. OK, Morton Holbrook, who's at Kentucky Wesleyan College. Q: Yes. Following up on your comment about Hong Kong, about U.S. companies reconsidering their presence due to internet controls, what about U.S. companies in China, in Beijing and Shanghai? Do you see a similar trend there regarding internet controls, or regarding IPR theft? SEGAL: I think, you know, for almost all firms that have been in China, this has been a constant issue for them. So it's not particularly new. I think almost all of them have, you know, made decisions both about how to protect their intellectual property from theft, and how to maintain connections to the outside, to make them harder. You know, VPNs were fairly widely used. Now they're more tightly regulated. We know that the Chinese actually can attack VPNs. So I think, you know, those issues have been constant irritants. I think, you know, COVID and the lack of travel, the worry about getting kind of caught up in nationalist backlashes online to, you know, Xinjiang issues or if you refer to Taiwan incorrectly, those are probably higher concerns right now than these kind of more constant concerns about cyber and IP. FASKIANOS: Thank you. Anson Wang, who's an undergraduate at the University of Waterloo. We have three upvotes. Is China considered the major threat to U.S. hegemony because China is actively trying to replace the U.S. as the new global hegemon? Or simply because China is on a trajectory to get there, with or without an active intention of involving itself in other countries' internal politics, the same way that the U.S. does? SEGAL: Yeah. So I think this is a, you know, larger question about what China wants in the world. And do we—you know, do we think it has a plan or ideology of replacing the U.S.? 
And does it want—or, would it be happy even with regional dominance? Does it just want to block U.S. interests and others? It's a big debate. You know, lots of people have contrasting views on where they think China is headed. I'll just use the cyber example. And I think here, you know, the Chinese started with wanting to block the U.S., and prevent the U.S. from criticizing China, and protect itself. I don't think it had any desire to reshape the global internet. But I think that's changed. I think under Xi Jinping they really want to change the definitions of what people think the state should do in this space. I think they want to change the shape of the internet. I don't think they want to spread their model to every country, but if you want to build their model they're certainly willing to help you. And they don't mind pushing, perhaps highlighting, in some cases exploiting, the weaknesses they see in the U.S. as well. FASKIANOS: OK. Thank you. I'm going to go to Helen You, who's a student at NYU. It appears that governments are reluctant to restrict their cyber capabilities because they fundamentally do not want to limit their own freedom to launch cyberattacks. As a result, countries fail to follow voluntary norms on what is permissible in cyberspace. To what extent are industry standards influencing international cybersecurity norms? And what incentives would need to be in place to move these conversations forward? SEGAL: Yeah, that's a great point. I mean, I think that's one of the reasons why we haven't seen a lot of progress: states don't have a lot of reason to stop doing it. The costs are low, and the benefits seem to be high. Now, I understand your question in two separate ways. One, there is a kind of private attempt to push these norms, basically arguing that states are going too slow. Part of that was promoted by Microsoft, the company, right? 
So it promoted the idea of what it was calling the Digital Geneva Convention, and then it has been involved in what's now known as the Paris Call, which defines some of these rules, and which the U.S. just signed onto, and some other states have signed onto. But again, the norms are pretty vague, and haven't seemed to have that much effect. There's a thing called the Global Commission on the Stability of Cyberspace, which the Dutch government helped fund but was mainly run through think tanks and academics. It also has a list of norms. So there is a kind of norm entrepreneurship going on. And those ideas are slowly kind of bubbling out there. But you need to see changes in state behavior to get there. That's when we know that norms matter. And that we really haven't seen. On the—there is a lot of work, of course, going on, on the standards of cybersecurity, and what companies should do, how they should be defined. And that happens both domestically and internationally. And of course, the companies are very involved in that. And, you know, that is much further along, right? Because that has to do with regulation inside of markets, although there's still, you know, a fair amount of difference between the U.S. and EU and other close economies about how those standards should be defined, who should do the defining, how they should be implemented. FASKIANOS: Thank you. I'm going to group two questions together. Dr. Mursel Dogrul of the Turkish National Defense University writes: In a most recent article we focused on the blockchain literature expansion of superpowers. In terms of publications and citations, China clearly outperformed the United States and Russia. Do you believe this technological advancement will have an impact on the cybersecurity race? And Michael Trevett—I don't have an affiliation—wanted you to speak a little bit more about the cyber triangle with Russia. How are China and Russia coordinating and cooperating? SEGAL: Yeah. 
So on the first question: you know, clearly, as I briefly mentioned in my opening comments, the Chinese are pushing very hard on the technologies they think are going to be critical to the future competition in this space—blockchain, quantum, AI. The Chinese have made a lot of advances on quantum communication and quantum key distribution. They are probably behind the U.S. on quantum computing, but it's hard to say for sure. And blockchain is a space where the Chinese have developed some uses and are rolling some test cases out on the security side and the internet platform side. On the China-Russia question, so closer cooperation. Most of it has been around cyber sovereignty, and the ideas of kind of global governance of cyberspace. The Chinese were, you know, pretty helpful at the beginning stages, when Russia started using more technological means of censoring and controlling the Russian internet. So helping kind of build some of the—or, export some of the technologies used in China's Great Firewall that the Russians could then develop. Russia is pretty much all-in with Huawei on 5G. And so a lot of cooperation there. Although, the Russians are also worried about, you know, Chinese espionage against Russian technology and other secrets. They did sign a nonaggression cyber pact between the two, but both sides continue to hack each other and steal each other's secrets. And I have not seen any evidence of cooperation on the operations side, on intelligence. With them doing more and more military exercises together, I would suspect we would perhaps start seeing some suggestion that they were coordinating on the military side in cyber. But the last time I looked, I did not see any analysis of that. FASKIANOS: Thank you. Next question from Jeffrey Rosensweig, who is the director of the program for business and public policy at Emory University. Q: Adam, I wonder if you could fit India in here anywhere you would like to? 
Because I think it'll be the other great economy of the future. SEGAL: Yeah. So India's a, you know, really interesting actor in this space, right? So, you know, India basically thinks that it has two major cyber threats—Pakistan being one, and China being the other. China, you know, was reportedly behind some of the blackouts in Mumbai after the border clash. I am somewhat skeptical about that reporting, but it's certainly a possibility, and there's no reason to doubt the Chinese have been mapping critical infrastructure there. India pushed back on TikTok and ByteDance. You know, also concerns about data control and other things. There is a long history of kind of going back and forth on Huawei. The intelligence agencies have not really wanted to use it, but others have wanted it to help, you know, bridge the digital divide and build out pretty quickly. India right now is talking about its own type of 5G. But from a U.S. perspective, you know, I think the most important thing—and this is often how India comes up—is that, you know, we want India to be an amplifier, a promoter, of a lot of these norms on cyber governance, because it is a, you know, developing, multiethnic, multiparty democracy. And so we want it to not just be the U.S. voice. Now, India's a pretty complicated, difficult messenger for those things these days, right? India leads the world in internet shutdowns, and we've seen a lot of harassment of opposition leaders and other people who are opposed to Modi. So it's not going to be easy. But I think the U.S. for a long time has hoped that we could forge a greater understanding on the cyber side with India. FASKIANOS: Great. I'm going to take the next question from Michael O'Hara, who is a professor at the U.S. Naval War College. And I'm going to shorten it. He asks about China's fourteenth five-year plan, from 2021 to 2025. 
It includes a section titled “Accelerate digitalization-based development and construct a digital China.” Do you see their five-year plan as a useful way of thinking about China’s future in cyberspace? SEGAL: Yes. So we’re on the same page—the digital plan came out two or three weeks ago. It was just translated. Yeah, I mean, the plan is useful. Like all Chinese plans, it is useful in the sense that it certainly gives us clear thinking about the direction China wants to go, and the importance it puts on a topic. You know, implementation and bureaucratic obstacles and all those other things are going to play a role. But as I mentioned, I think, you know, the Chinese economy is becoming increasingly digitalized. And in particular, they want to digitize more and more of manufacturing, transportation, mining—sectors that are traditionally not, you know, thought of as being digital, but that the Chinese really want to move into. Now, from a cybersecurity perspective, that, you know, raises a whole range of new vulnerabilities and security issues. And so I think that’s going to be very high in their thinking. And just today I tweeted a story that they held a meeting on thinking about cybersecurity in the metaverse. So, you know, they’re looking forward, and cybersecurity is going to be a very high concern. FASKIANOS: Well, we couldn’t have the Naval War College without the U.S. Air Force Academy. So, Chris Miller, you wrote your question, but you’ve also raised your hand. So I’m going to ask you to articulate it yourself. Q: Well, actually, I changed questions, Irina. Adam, thank you. FASKIANOS: Oh, OK. (Laughs.) But still, the Air Force Academy. Q: So two quick questions. I’ll combine them. One is: I’m curious how you see the new cyber director’s—national cyber director’s role changing this dynamic, if at all, or changing the parts of it on our side of the Pacific that we care about. 
And second of all, I’m curious how you see China viewing the Taiwanese infrastructure that they probably desire, whether or not they eventually take it by force or by persuasion. SEGAL: Yeah. So I don’t think the NCD changes the dynamic very much. You know, everyone is watching to see how the NCD, the National Security Council, and CISA, the Cybersecurity and Infrastructure Security Agency, work out the responsibilities among the three of them, which will have an impact, you know, on making us more secure. And, you know, Chris Inglis, the head of the NCD, has given lots of talks about how they’re going to manage and work together, and I think we’re beginning to see some signs of that. But I think that’s probably the most direct impact it’ll have on the dynamic. Your second question, you know, I think is primarily about Taiwan Semiconductor. And, you know, do the Chinese eventually decide, well, chips are so important, and the U.S. is working so hard to cut us off, that, you know, for all the other reasons that we might want to seize Taiwan, that one is going to get moved up? You know, I think it’s a possibility. I think it’s a very low possibility. I do think we don’t know what the red lines are in the tech war, right? You know, there’s been talk about cutting off SMIC—the Semiconductor Manufacturing International Corporation, based in Shanghai—also a very important company to the Chinese. Would that push the Chinese to do more aggressive or assertive things in this space? You know, what is it that we do in that space that eventually pulls them out? But I think it’s very hard—(audio break)—that they could capture TSMC in a shape that would be useful. Am I breaking up? FASKIANOS: Just a little bit, but it was fine. We have you now. SEGAL: Yeah. That you could capture TSMC in a shape that would be useful, right? I mean, there was that piece, I think, that was written by an Army person, maybe in Parameters, that, you know, the U.S. 
and Taiwan’s plan should basically be, you know, to sabotage TSMC in case there’s any invasion, and to make clear that that’s what it’s going to do. But even without that risk, you’re still dealing with, you know, any damage, and then the flight of people out of Taiwan, because the Taiwanese engineers are really important. So it would be very high risk, I think, that they could capture it in a shape they could then use. FASKIANOS: Thank you. Well, I am sorry that we couldn’t get to all the questions, but this has been a great conversation. Adam Segal, thank you very much for being with us. You know, you’re such a great resource. I’m going to task you after this—there was a question from Andrew Moore at the University of Kansas about other resources and books that you would suggest to learn more about China and cybersecurity. So I’m going to come to you after this for a few suggestions, which we will send out to the group along with the link to this video and the transcript. So, Andrew, we will get back to you and share with everybody else. And so, again, you can follow Dr. Segal on Twitter at @adschina. Is that correct, Adam? SEGAL: That’s right. FASKIANOS: OK. And to sign up to receive blog alerts for Net Politics, you can go to CFR.org. Our next webinar will be on Wednesday, February 9, at 1:00 p.m. Eastern time. And we’re excited to have Patrick Dennis Duddy, director of the Center for Latin American and Caribbean Studies at Duke, to talk about democracy in Latin America. So thank you for being with us. You can follow us on Twitter at @CFR_Academic. Visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for new research and analysis on other global issues. And again, Adam, thank you very much for being with us. We appreciate it. SEGAL: My pleasure. FASKIANOS: Take care.
  • Women and Women's Rights

    Working for low wages and few benefits, a large but invisible workforce keeps the internet running. Through her research, Mary L. Gray sheds light on the workers—many of them women caring for young children and elders—who support the technology industry and the lack of regulations governing their labor. Mary L. Gray, senior principal researcher at Microsoft Research, 2020 MacArthur fellow, and coauthor of the book Ghost Work, discusses what governments and the technology industry can do to address this emerging fault line of inequality.
  • Public Health Threats and Pandemics

    The rapid rate of infection in the ongoing pandemic is catalyzing technological innovation aimed at preventing the next potential pandemic from ever reaching a global scale. Technology is an essential tool for early testing, detection, and tracking. Through modern data science techniques used to develop epidemiological forecast models, and personal technology such as smart thermometers and wearables that track trends across regions, preventative measures are more powerful than ever before. Our speaker, Inder Singh, founder and CEO of Kinsa Inc., discusses the current and changing landscape of technology and infectious disease prevention.