The Geopolitics of Cybersecurity
This symposium convenes senior government officials and experts from think tanks, academia, and the private sector to address the interaction of cyber conflict and foreign policy goals, examining the current state of Russian, Chinese, Iranian, and North Korean cyber operations, how the United States is responding, and U.S. vulnerability to cyberattacks as a symptom of a broken geopolitical order.
SEGAL: Good morning. I am Adam Segal. I am the Ira A. Lipman Chair in Emerging Technologies and National Security and director of the Council’s Digital and Cyberspace Policy Program.
I’m just going to take a brief moment to welcome you all to today’s event. I think it is extremely exciting that this is the first D.C. symposium in person, although I am, obviously, not there to enjoy it with you. But this is the eighth Cyber Symposium that we’ve had. The last three, not surprisingly, have focused on great powers and geopolitical competition in cyber, but in the past we’ve also covered online content moderation, deepfakes, internet governance, and privacy and international trade.
I hope all of you will take part and consider subscribing to some of the other products from the Digital and Cyberspace Policy Program. Do check out the Cyber Operations Tracker, which is a running record of known state-based attacks. And unfortunately, of course, we are adding to those at quite a pitch given what’s happening in the world these days. But also, subscribe to Net Politics, the blog of the Digital and Cyberspace Policy Program, as well as our newsletter.
I do want to thank everyone who’s been involved in the program and in working on the symposium, and in particular Connor Sutherland from the D.C. program, who has given so much support and assistance putting this all together. So thank you to Connor.
And let me just say that this is, you know, remarkably good timing to have Director Inglis with us here today. As I’m sure many of you know, just last week we had another joint federal warning about new types of industrial control system malware that have been discovered in U.S. networks, so there is no doubt that the risk is going up. So thank you very much to Director Inglis and to David for being with us today. And I will now turn it over to them.
SANGER: Well, thank you very much. Thanks to all of you who are here. And thanks to Adam and the CFR for putting us on.
But mostly, thanks to Director Inglis for this wonderful opportunity to come hear from him. He’s got a job that would be challenging under almost any circumstances, but the fact that he is the first incumbent in the job means that he’s got to navigate all the politics of creating a new position while also trying to do this in particularly trying times. So we really appreciate your coming out.
As you heard from Adam, I’m David Sanger. I’m White House and national security correspondent for the New York Times and write frequently on cyber issues. And this is the keynote session for the Council’s Cybersecurity Symposium. There will be another session after.
So what we’re going to do is I’m going to talk with the director for twenty minutes or so, or I guess thirty minutes or so, and then we will go to questions both from those who are joining us virtually and those here in the room. A reminder to everyone that this is gloriously on the record—words that, you know, ring like music to a reporter’s ears, but not necessarily to every government official’s—(laughs)—you know? So Chris deserves all the more thanks for agreeing to do it that way.
Director Inglis, let me start with what’s most on our minds right now, which is the conflict with Russia over Ukraine. And I’m interested in both your analysis of what we’ve seen happen over the past eight weeks and then your concerns about what we may see going forward. We entered this figuring that any modern conflict would start with a major cyberattack—that, you know, undersea cables would be cut, or the power grids would be fried, or the Russians would attack enough of the internet structure within Ukraine that they could shut down all the communications. And so along comes the first major kinetic war between nation-states fought in the cyber age, and that’s not what we saw. We did see some early action in January and February, but we didn’t see the bigger attacks. I was at a symposium recently where a number of your government colleagues were debating why that was, and, you know, there are a couple of interesting theories around, but I’d be interested first in just hearing yours.
INGLIS: Well, first, David, thank you very much for the warm welcome. Thanks to Connor and the Council for setting this up. You know, I think we’ve so longed to have these face-to-face discussions for the last two years. It’s a delight to learn again how to speak in the presence of real people. (Laughter.) And I look forward to this discussion.
You know, the difficulty of the job, you know, for the next hour it might be one thing and the rest of the job might then pale in comparison.
INGLIS: You’ve asked a great question. It’s the question of the moment: Why, given that we had expectations that the Russian playbook, having relied so heavily on disinformation and cyber married with all other instruments of power, why haven’t we seen a very significant play of cyber at least against NATO and the United States in this instance? I can’t say that I know with precision any more than anyone outside the Russians themselves might know, but I think we can deduce perhaps several things.
First and foremost, it’s not playing out the way the Russians had imagined, right? They had imagined that their kinetic forces, married with disinformation, could overwhelm and take the country by storm within a matter of days, right, if not sooner than that, and therefore they probably did not have an incentive to use a computer network attack in the way we might have imagined to achieve degrees of disruption or destruction of an infrastructure that they thought they would soon inherit. Neither did they want to inflame the NATO or United States alliance with what might be an unnecessary provocation. So some degree of self-imposed deterrence on their part.
Two, as soon as it was realized by them and others that it wasn’t playing according to plan, they were distracted. They were busy, right? It’s hard when you’re on your back foot trying to recover your initiative to then mount a campaign of any sort. And we saw that play out, certainly, on the ground, and I think to some degree in cyberspace.
Finally, it’s harder to do than it is to say. Mounting a campaign of any sort, whether it’s on the ground or in cyberspace, requires a certain understanding of what the lay of the land is; a certain ability to then understand what the lay of the land of your victims, your targets might be; and an ability to not simply fire salvos but to do that with some continuous understanding of what that lay of the land is such that you can mount and effect a campaign. And if you’re up against a heterogeneous architecture, which is clearly what exists in cyberspace, again, it might not be easy to do.
The last thing I would observe is that all of that imagines that cyber is not an independent domain. It’s not something that sits off to the side where cyber on cyber plays out all day every day. It’s connected, necessarily, to the real world. And so the aspirations of the Russians and the defense mounted by the Ukrainians are strongly connected between cyber and the physical world, and therefore the stout hearts that we see on one side, the confusion we’ve seen on the other, the use of this as an instrument of power as opposed to something that plays out in an independent domain is something where there’s a—pun intended—strong analog between the physical world and the cyber world.
SANGER: So think out to the weeks ahead. So Putin has now narrowed his objectives in the physical world, right? He’s down in the southeast. He’s withdrawn his forces, at least for now, from going after Kyiv. As you look ahead at this integrated sort of battle, are you expecting that inside Ukraine we are going to see more use of cyber? And at what point do you think, if at all, he then turns to attacking targets in the West to retaliate for the sanctions, for the isolation, and so forth?
INGLIS: I don’t know the—let me take those in reverse order. I don’t know the answer to that second question, which is at what point would we expect or do I predict that we’ll see the Russians attack the West, NATO, or the United States, right? That’s a fraught decision. It has dire consequences on both sides. And I don’t know that that decision has either been made or that it’s imminently predictable with the precision we might prefer. But I would say that we have strategic warning of the possibility, and we therefore need to prepare for that possibility. Having that strategic warning, we then need to figure out how we actually understand, at the earliest possible moment, what’s happening, when, and why, so that we can deal with it. That’s what essentially guides our present efforts, which is: Having strategic warning in hand, how do we effect a collaboration not just between nations, but between the private sector and the public sector, to combine our insights, our assets, our capabilities, so that if one of us sees something that might not be plainly visible to another or, better, if one of us sees a bit of it but not all of it and can’t perhaps put it in the proper context, we compare and contrast so that we can discover things together that no one can discover alone? That’s, I think, the game before us: how do we put ourselves on the balls of our feet to anticipate and react to that.
Having said that, I think that we’re in a reasonably good place. The architectures which we know as the internet or cyberspace-plus have not been built for forty years to be as resilient and robust as we might prefer, but we’ve done a lot in the last few years to essentially install the mechanisms in there. We now have a collective understanding that these instruments of power are useful not just for their own sake—cyber doesn’t exist for its own sake—but for broader societal purposes. And I think you have most people—individuals, organizations, governments—leaning forward in the straps trying to figure out what they can do to make it defensible and to actually defend it.
That being said, we’re still open for a sucker punch. NotPetya, as many of you will remember from 2017, affected a broad population that was not in the intended victim set. That was the Russians going after the Ukrainians, and it escaped its moorings and essentially ripped across Europe and broad other swaths of the world, inflicting great physical harm and also an attack on confidence in the underlying nature of that architecture.
So if I worry about one thing going forward—and we need to put action to this worry—it’s that we will be comfortable that the last few weeks predict the next few weeks, which they don’t necessarily, and that we’ll be comfortable that we can defend ourselves fairly and fully because we have a resilient architecture that’s well defended. We should not lose sight of the fact that our present circumstance is still one where an adversary might seize the initiative and come at us, and we need to make it such that they have to beat all of us to beat one of us. I think that’s where we need to be.
SANGER: One last on this. Last week, we heard Bill Burns, the CIA director, give a speech down in Georgia in which he said—he’s probably the one in the administration who’s spent the most time with Putin over the years, I would say—and he said that what’s really struck him is that Putin’s risk appetite has clearly increased over time as he has thought about his legacy and his goals of reestablishing spheres of influence. How does that increase in risk appetite apply, to your mind, in your area of direct concern, cyber?
INGLIS: I know of no one better than Bill Burns to make a remark of that sort, and therefore I would fully support that observation. I would say marry that with the degree of isolation that he has—Vladimir Putin has—and that’s a dangerous combination. If you’re willing to take risks but not fully informed about the nature of the lay of the land where you’re taking that risk, then that can occasion grand surprises both on the part of the aggressor and on the part of, you know, the targeted interest.
SANGER: And we’ve increased that isolation, I mean, by—our sanctions are isolating him more. He is self-isolating more by cutting off so many sources of information to the Russian people, for all the obvious reasons. So, in an odd way, does the isolation we’re imposing here fuel that risk?
INGLIS: I don’t know that it exacerbates the risk. It certainly doesn’t—it doesn’t give him confidence that he has control of the situation, and in that regard might make him perhaps then more unpredictable. So I think that we do have to concern ourselves with that. But there are consequences to actions. He’s taken the actions that then, properly, led to the consequences that have been imposed on him and those who have enabled him, and I don’t think that we would change that at the moment.
SANGER: So one of the oddities that we saw in public of the way your colleagues in the intelligence community dealt with the runup to the war was a declassifying of information at a pace we have rarely seen before, a decision to make that public in an effort to preempt some Russian action. I can imagine you were probably sitting there thinking, boy, how would this have gone over when I was still at NSA. What do you—how do you evaluate that? And how does it apply as you think about deterring future cyber action?
INGLIS: Yeah. Last question first. I hope I would have been wise enough to do it.
INGLIS: But that being said, I think that what you’ve seen in terms of the government’s response—and not alone, but I’ll just talk about this government—is born of two realities.
The first is that in the—in the realm of cyberspace, the private sector increasingly is the supported organization. Most of the critical functions that we care about as a society are generated, deployed, sustained, defended by the private sector. They’re the actors that are on the front lines of all of this.
And the second reality is that if the government’s in possession of what I would describe as actionable intelligence, it’s only actionable if it’s put in the hands of those who can action it, who can essentially do something about it. And so the choice was a natural one: how do we put this in the hands of the people who can do something about it? And speaking clearly about the cyber lanes, there’s an equally important lane in terms of putting it in the hands of nation-states, right, who can do something about it. But in the cyber world, you have to imagine, if this is actionable intelligence—it’s fine-grained, it’s timely, it’s actionable—how do you actually make it possible for somebody to action it?
And we’ve seen a good response from that, which is we achieved a degree of collective understanding that affected our sense of a shared strategic warning. It put, then, various individuals, organizations, governments in a place where they then had this shared prospect going forward. They needed to collaborate to understand further what the tactical warning would look like. And we’re now in that place where the collaboration is going to help us discover things together that no one could discover alone.
SANGER: So you’ll recall that during the Obama and to some degree the Trump eras, but really during the Obama era, the U.S. government was reluctant even to name the Russians when they got into the Joint Chiefs of Staff, the White House, the State Department. You and I talked about this when you were still off and working at the Naval Academy and so forth. Our response was relatively muted. When SolarWinds first happened, the Trump administration was still in office. President Trump didn’t say a word about it. In fact, he asked his briefers whether it could have been China. Do you think, looking back, that this reluctance to name and shame and all that emboldened the Russians more generally?
INGLIS: That’s a really good question. Thinking back, I would say that those conditions were to some degree slightly—maybe substantially, but at least slightly—different from the present.
One, back in the day attribution was harder, right? Attribution in those days was for the most part done from a cold start. We would see something happen and then we would chase that ghost right into the murk and try to figure out from a cold start who it was and what they did. And our satisfaction in terms of having confidence about attribution was low.
We also didn’t want to give away our own insights such that having deployed what we knew about who was doing what in that space we then just gave them an opportunity to disappear into some other dark corner.
And finally, I don’t think that we at that time had as mature an understanding as we do today, which is maturing still further, about the need for putting actionable intelligence in the hands of those who can action it, right? We had essentially not come to that conclusion, that this information is only useful if it then causes something to happen so that we can change the future, we can change the situation.
So I think all of those realities, we have a much stronger hold on that today. We’re beginning to practice that. Maybe we could have done that sooner, but I think in the fullness of time it takes understanding, some muscle memory, and some kind of practicing to get that closer to right. You don’t want to careen into that space with unfounded doctrine. And so I think that we’ve matured properly over the years.
SANGER: In the context of the 2020 election, Cyber Command, and to a lesser degree NSA, came out and announced things that they did to disrupt some of the Russian activity, disconnecting the GRU and others from some of their activities and so forth. They haven’t said very much about what they’ve done in Ukraine, and I know you well enough that I don’t really expect you to provide us a comprehensive list here today. But as you look across it, even if you can’t discuss the details, do you think the United States contributed in any way to the Russians’ troubles in mounting cyber activity in Ukraine and around it, or was effective at doing so, I guess?
INGLIS: So you’re right, I won’t—I can’t and I won’t speak to the specifics. I think American citizens, likeminded nations’ citizens should simply understand that if they have an expectation their governments are taking appropriate actions using all instruments of power—diplomacy, legal remedies, kind of, you know, thought leadership—cyber is one of those instruments of power—then they should be confident that those tools are being appropriately used, not least of which are the defensive tools which would say how do we create resilience, how do we create a proactive defense on defensible architectures and give a stout accounting of our own ability to chart our course and to achieve our aspirations built on that.
I think the messaging that you speak to is in part a message to the Russians declaring, you know, what it is we’re prepared to do, what we’re doing, and therefore hopefully has some ability to change their decision calculus. I’m not using the word “deterrence” because that’s a fraught word, but we need to make sure that they understand what we’re prepared to defend, and under what circumstances and how. But it’s also a message to the American people or those who we would defend to say that we’re in this, we’re essentially taking the appropriate actions.
I think looking back across my forty years in this space I would say that I’ve observed three waves of attacks that are of some consequence. And I don’t pin these to either actors or events, but rather to eras.
The first wave forty years ago was clearly attacks that we would see on the secrecy or the availability or the integrity of the data that was in these systems. We called these kind of confidentiality, integrity, availability in the day, and we spent a lot of time trying to figure out how do we defend the data or the systems that process the data. And you might have in those days considered that if you had defended the data, it was job done.
We later realized that there was really a second wave of attacks, because it’s not the data that’s important; it’s the abstraction of that data that allows us to do the critical functions or the personal functions that we built the internet to do in the first place. And so it’s the abstraction—the generation of electricity or the flow of electricity, the abstraction of that into the coordination of your personal functions, the abstraction of that into the conduct of business dependent upon digital infrastructure. And attacks on that, which were growing by leaps and bounds twenty years ago, require a slightly different strategy. You still do all the things you do at the first level, and at the second level you have to now effect a collaboration, because those processes, those functions, are essentially the result of collaboration broadly across society, and often, in the case of critical functions, between the government and the private sector. So you have to do something more.
But the third wave of attacks, and the one that brings me back to your main point, is an attack on confidence. Think about what was happening in the elections of 2016 and 2020. The Russians might have been, perhaps, doing things at that foundational level—stealing emails, maybe hacking into various servers, running troll farms—but it really wasn’t about the data. It was about an abstraction of that into an election system. And if you want to defend the election system, that for the United States is a collaboration between states and counties and locales. The federal government leaned in to assist in the defense of that in both 2016 and 2020. But fundamentally, it was an attack on confidence, right? At the end of the day, it was an attack on: Do the American people believe that the election system served their purpose, that democracy is still intact, and that they can actually have confidence in those digital underpinnings? And if you’re going to address that, you need to make sure that you’re not simply addressing the actor that might hold that at risk but you’re addressing those whose confidence might be shaken, to say: no, no, we’re actually doing what we should to defend this space; we’re defending, with a stout defense, all of those things, to include our principles on top of that. And so the messaging I think you’ve heard is as much about speaking to the aggressors in this space as it is about speaking to the victims in this space, because somewhere between the two of those we need to make sure that we have thought leadership that gets us to the right place about why we care about this space and what we’re doing to defend it.
SANGER: So if I was one of those aggressors, the Chinese government, I think I’d be delighted with the events of the past couple of months because it has soaked up so much of the thought processes of the U.S. and other Western leaders that you haven’t heard very much about them. But you have been working away in the administration on a new approach to China and have got some big, big announcements coming up on that. I wonder if you could just give us a little bit of a sense of how you look at dealing with the Chinese cyber threat in particular differently from what we have just been discussing with Russia, because it’s a different actor with very different goals, very different approaches.
INGLIS: Well, first and foremost, imagine that—for the Chinese, like the Russians, like the Americans, cyber is an instrument of power, and we have to imagine to what purposes they would apply that. Which immediately points back to the geopolitics of the situation, less so the technology. And so you have to focus on what the end purposes are, and what they will do with this tool and a range of other tools to essentially achieve those purposes.
This is, at the end of the day, a competition not just of economics, but a contest of—a contest of geopolitical systems. And I think that our concern with the Chinese as they deploy technology is sometimes about the quality of that technology or perhaps the features of that technology, but increasingly it’s about the legal system or the economic system that actually delivers, deploys, and extracts value from that technology. And so we need to make sure that we continue to remember that.
And kind of winning that competition is less about saying no to that technology or that geopolitical system and more about actually filling that vacuum with something of our own creation, right, so technologies that we can have confidence in, geopolitical systems that we can have confidence in. We need to win that in the affirmative as opposed to simply by denying the negative.
SANGER: So you wrote in Foreign Affairs a few months ago a really fascinating piece that I would commend to everybody in the room even if it didn’t appear in Foreign Affairs, the publication of this institution. And you said here that it should come as no surprise that many cyber policies proceed from a fundamentally negative framing that cedes the initiative to transgressors and places excessive faith in market incentives. That line really jumped out at me when you—when you wrote it. So talk to us a little bit about what it means to take that initiative back, particularly in the context that we’ve just been discussing.
INGLIS: I’ll borrow a quote from a friend and colleague in this space, Jeff Moss, who’s of DEF CON and Black Hat fame. He often starts his talks with a question which seems to be a little bit wide of the discussion, but he brings it back to the main point. The question he asks is: Why do racecars have bigger brakes? You pause and think, this is a cyber talk, isn’t it? And he immediately responds: So that they can go faster. Right? At which point he then launches into the talk, which is: So why, then, do we have cyber? Not for its own sake. We have cyber for some larger purpose. We have it so we can achieve our individual aspirations, our business aspirations, our societal aspirations.
That article in Foreign Affairs was cut from that cloth, which is: we need to stop obsessing about the threats in this space. We should be mindful of them. We should address them. But we should remember, recall, and seize back the initiative of why we built this space in the first place. What are those positive, compelling aspirations we had, whether they’re individual or societal and everything in between? We did a really good job in developing a vaccine and deploying it over the last two and a half years, things that would have been unimaginable ten, twenty years ago. Why? Because this system of systems that we have connects scientists and data and insights in ways that are otherwise impossible. And there are so many similarly positive aspirations we have about this space. We need to think about getting back on track, to say it’s our positive aspirations that should drive our thinking in this space, and then imagine what’s the cost of that, right?
The cost of that, if it’s not about cyber but rather about those aspirations, is to build the resilience in so that we can have confidence that this system will deliver what we expect. And that’s resilience not just in the technology, but in the people skills—are they up to the game?—and the doctrine. Do we have the roles and responsibilities right? So get this left of the event—that’s the vernacular in the cyber realm. Make this a capital investment that we make as opposed to an operational exercise. We shouldn’t wait for the three-alarm fire to respond; we should actually build these systems of systems so that they’re inherently resilient and robust. Having done that, they will then be defensible, though not secure, with the attendant need to actually defend them. That’s a human proposition.
And in that, the further argument made in the article is we need to approach that with a degree of collaboration because no one of us can defend this space on our own. Even the government can’t defend this space, especially the large regions of the private sector, on its own. We have to imagine that it’s only collaboration that will allow that. We overestimate what the government does, underestimate what the private sector does, and ignore at our peril what we can only know together.
So the two big deals in that article are: one, we have to actually positively reimagine what we built this space for and double down on the investments necessary to achieve that resilience; and, two, defend it in a collaboration.
Finally, the question we ask at the end of that is: If we’re going to get to a place where you got to beat all of us to beat one of us in this space—or if you prefer the positive articulation, how can each of us contribute to the defense of all of us—we need to ask the question, what do we owe each other in this space? For too long, there have been individuals, organizations, sectors who have looked at this and said, the place is trouble. Cyberspace has all manner of trouble in it, but it’s somebody else’s to solve. That willful ambivalence needs to be removed. We need to—each and every one of us—participate in our own defense. So that’s what that article was all about. And if it’s unduly positive, it’s because—unless we have that positive aspiration of what our strategy should deliver, we’ll continue to be confounded by transgressors who own the initiative in this space, and obsess about those things that go wrong in this space in ways that deter us—self-deter us—from getting to where we need to go.
SANGER: Let me just drill down on one part. It’s the last part of that sentence I read you. It has to do with your phrase that market incentives aren’t enough. And I think we’ve seen the administration play that out in the China case. If you look at the China bill that has been struggling its way through Congress since it was first passed by the Senate back in June, it’s full of industrial policy, industrial strategy. No one wants to use the word industrial policy but, you know, $55 billion for the semiconductor industry, money into artificial intelligence, quantum computing—things that I’m sure you believe, as many do, we need to be investing in more heavily.
So what needs to change on the government end of that, and why has it been—if it’s pretty clear that this is what the Chinese are investing in in their Made in China 2025, why has it been now eight, ten months just to get from the Senate vote through to something that you can actually begin to implement here?
INGLIS: Well, let me talk about that in two tranches, you know—first, in terms of the role of market forces. They’re very, very important, right, especially in this society, but in like-minded societies as well. And so we need to give them their due. I mean, I would say the continuum that we have followed in other domains of interest—automobile safety and aviation safety come quickly to mind—is that there is a certain degree of self-enlightenment that comes into play, where innovators and manufacturers say this is a better idea, and then, somewhere between public service and the ability to build a market, they say, we should do these things based upon my own insight, my own innovation.
Market forces then get you a further distance down the road in that competition comes into play, and users will prefer things that are more intuitive, perhaps safer—I mean, kind of—or perhaps those things that are more resilient and robust by design. But it doesn’t get you far enough. At some point, we determine that there are non-discretionary features of cars, of airplanes, of therapeutics, drugs, and that we have to step in and say we can’t leave this either to chance or to the vagaries of market forces, which may from time to time wind up in a bubble or in some dark corner.
And so with the lightest possible touch, we need to specify what are those non-discretionary features. We’ve done this before, and we will do it in this space as well; again, allowing market forces to have their play where innovation comes from, where scope and scale comes from. But at some point when we determine that health, life, and safety depends upon the qualities of digital infrastructure in the same way that it depends upon the safety of automobiles and airplanes, we’ll step in and essentially do that as well.
The second tranche I would offer is that we’ve recently passed the Infrastructure Investment and Jobs Act—$1.3 trillion that we’re about to invest in what most people probably think is a largely physical world. And to be sure, there are a lot of physical manifestations of that infrastructure, but just about every bit of that—I would suspect every bit of that—is going to be dependent upon digital infrastructure. If you’re a bridge, right—imagine you’re a bridge, right, kind of, you know, the “All About” books, and you’re going to be built with the maximum efficiency, probably lighten the structural components by sensoring that bridge to understand what’s the current load, and what the weather conditions might be so that you can adjust in real time, right, the performance of that bridge. That reliance upon digital infrastructure is something that has to be considered in terms of its cyber resilience; not just kind of to protect itself against the vagaries of nature, but an adversary who might say, I want to hold that at risk.
So that $1.3 trillion is an amazing opportunity for us to figure out how do we actually then pay, at the moment, for the next ten, twenty, thirty years of the cyber resilience, not just the physical resilience that we want. And so that has to be side by side a goal that we put on the table for ourselves.
SANGER: So two other things I would like to get to quickly before we open this up to our audience here and virtually—one of the interesting characteristics of this time period, particularly since the Ukraine war has started, is that we have seen the technology community begin to take sides more with the United States, with Western values, than I think before. I mean, a few years ago—it wasn’t all that long ago—Google employees rejected participation in Project Maven, the artificial-intelligence-related Pentagon project related to drones.
Now we’re seeing not only tremendous collaboration during a time of war, but the banning of Russian-based websites, a lot of Russian news sites, and so forth. And where you sit—I mean, you’ve seen this play out in your jobs at NSA and the interim before you came back to this job—where are we in this great pendulum swing between identifying tech companies as international players with no real national identity and what you’ve seen now?
INGLIS: I think the common underpinnings of both of those eras that you described is the values underneath, right—a choice of values as opposed to a choice of nations or geopolitics, right? And I think the United States and like-minded nations actually find that quite agreeable, right, that we’d actually stand for the values and the principles that, in our case, kind of are enumerated in the Constitution, particularly in the preamble. And if you choose, right, that those are the values that you would then sign up for and support, right, then you are signing up for, right, alliance collaboration, right, for something that endures across time. Administrations come and go. Political parties come and go. But those values endure across time.
And so I think what you’ve seen is a very stark contrast between the values that are playing out on the side of the Russians and the values that are playing out in terms of the staunch defense mounted by the Ukrainians and all those who would enable them. And I think that that, at the end of the day, is driving what you then see as perceptively geopolitical choices, but it’s really about the underpinning values.
SANGER: Well, actually, I wanted to ask you about an interesting debate that’s taking place within the White House and within the cyber infrastructure there. You may recall that during the Trump era, there was a National Security Presidential Memorandum—I think it was 13—which, while classified, was described by John Bolton at the time who was national security advisor as trying to go take the decision making, particularly on offensive but not entirely on offensive issues—out of the White House and push it down more to the warfighters, to Cyber Command, and so forth. This was in response to a concern—a legitimate concern—during the Obama era that there was so much debate with so many players about each action that the system got gummed up and decisions weren’t made. As we hear it now, there is an effort within the White House to think about how you could go rewrite that and maybe bring some more of that decision making back into the White House, presumably without the bureaucratic troubles one experienced the last time.
Tell us a little bit about that, where it stands, what the parameters of the discussion are.
INGLIS: So as you would know, it was classified then and it remains classified now, so I’m not at liberty to say much about what’s inside that box, but to say the following: that when that was published—kind of a classified document inside the government—in the year 2018, it was part of a trifecta, right, which I think occurred simultaneously. They weren’t formally coordinated or synchronized in this way, but they happened to essentially reset what we thought about this domain.
First there was the passage of law—I think it was the National Defense Authorization Act of 2019—which described cyber as a traditional military instrument, a traditional military activity; not so much kind of opening the space to say we therefore need to conduct military operations in cyberspace as saying that it’s another instrument that can be used by the military—kind of aligning it with all the other modalities that the military has, to say that this can be and should be used as an instrument the military can bring to bear, meaning constrained by the same purposes, constrained by the same systems, constrained and influenced by the same considerations.
On the other side of that, there was the declaration in the Defense Department strategy that year of this concept of persistent engagement and forward defense, which essentially brought cyber into play in the same way that we use legal remedies, in the same way we use diplomatic remedies: early discernment, forward discernment, and early engagement of problems that genuinely hold us at risk, such that we’re going to have the highest possible leverage, under the rule of law, over those things that are threats to us.
And so it really wasn’t about unleashing cyber offense; it was about essentially using cyber in the fullest sense—using it side by side with other instruments of power. That then leaves NSPM-13 in the middle which, by John Bolton’s description, was a mechanism by which we could understand how do we properly delegate to the Defense Department the authority to conduct cyber operations under the rule of law, consistent with our values and principles, and considering that in the context of all the other instruments of power.
That remains in place in terms of being an NSPM—a National Security Presidential Memorandum—that essentially governs that space. It would only be reasonable, right, to reconsider that to make sure it continues to occupy its proper place, and that’s what’s ongoing at the moment: to reconsider that and say, is there anything we’ve learned over the last four years that would cause us to adjust, right, the placement of that instrument or the consideration of it in the context of all the other instruments?
I’m not at liberty to say what that review has led to, or what the internals of that document are, but it’s more about context than it is about a single-threaded instrument or about setting up a silo known as cyber.
SANGER: And would the result be that authorities that may have been delegated previously are now sort of brought back into the White House for approval ahead of time?
INGLIS: I mean, I’ll say that the result should be, will be that we have even greater confidence that that instrument of power is properly applied in the context of all other instruments.
SANGER: Well, we’re going to turn now to our members here to join in with some questions. I think our first question is going to come virtually—but to ask a question, please click on the appropriate spot on your screen, and we’ll get to some here in a moment. We’re just going to go to the virtual ones first.
Do we have a virtual question ready to go?
OPERATOR: We’ll take a virtual question from Bobby Inman.
Q: It took me a minute to unmute.
Chris, is it feasible that the absence of the cyberattacks that many of us expected in Ukraine and all is the value the Russians put on the intelligence they are collecting on the Ukrainian forces that they are fighting, and secondly, that the access to social media for their aggressive propaganda, disinformation campaigns, both in Ukraine and in the West, gets higher priority at this point?
INGLIS: Admiral, it’s nice to see you again, at least virtually speaking. I hope you are well down in Texas.
I think it is possible that the classic intelligence gain/loss consideration is informing the Russian decisions. The only thing I think that would counter that is what we’ve seen in terms of an almost reckless application of kinetic power, right—the bombs and kind of explosions that routinely occur in a seemingly indiscriminate way. They’re holding at risk the very infrastructure that would carry the communications they might be interested in, that they might use for intelligence purposes. I can’t kind of offer an insight based upon classified intelligence, but I think you’ve properly kind of deduced that there might be an intelligence gain/loss proposition in their hands.
SANGER: OK, we’ll go to a question right here. We’ll start with you, yeah.
Q: OK. Alan Raul from Sidley Austin.
Two weeks ago today the attorney general and the FBI director announced a government takedown of the malicious botnet, so-called the Cyclops Blink. It was attributed to Sandworm from the Russian GRU.
Do you anticipate that there could be any retaliation from that active defensive measure to disrupt the botnet? Do you think there will be other opportunities to repeat that, and do you view that as a possible model of successful public-private partnership in helping bring that down? Thank you.
INGLIS: I think the short answer would be perhaps a modified yes to all of those. So for those that weren’t paying close attention to that, essentially what that was is, under a court order, right, the Department of Justice essentially separated the command and control elements that have been installed by what we believe was the GRU—Russian military intelligence—that would then command these botnets to kind of achieve some no-good purpose, right? There is nothing but an illicit purpose associated with those botnets.
I thought that it was instructive for several reasons about what might we do in the future. First, it was done under the rule of law, right, so it was affecting kind of command and control elements that were on a very diverse, highly distributed kind of set of platforms such that you could not have gone around and knocked on each and every one of those doors to say, I’d like you to kind of, on your own initiative, remove that.
Second, it was done with the lightest possible touch, meaning it didn’t go in and kind of destroy data or processes; it just simply removed the link between the command and control and the botnets, both of which had been purloined and set up for no valuable purpose.
And third, it had a high degree of leverage benefit to a broad population, which I do think is a good model for public-private collaboration in the future: rule of law, kind of lightest possible touch, and the broadest possible set of beneficiaries. I hope we don’t have to do that again. I hope that the message to those who would then try to repeat the exercise from the aggressor’s perspective is it’s going to be harder and harder for you to succeed. The cost is perhaps too high. But at the end of the day, it remains to be seen whether their decision calculus is going to be affected by that.
SANGER: Good one. Another virtual question if we have one. If not, we’ve got several here in the room.
OPERATOR: We’ll take a virtual question from Catherine Eng.
Q: Hi, Director Inglis, Mr. Sanger. Thanks for taking the time.
One of the challenges of cyberspace is that things can change very quickly. Bad actors are constantly evolving by trying new techniques and moving in spaces where it can be difficult to enforce actions against them. And so as they innovate, presumably our own offensive and defensive capabilities would do the same, but oftentimes, bureaucracy and other barriers can get in the way.
I know that your role is also meant to cut through all of this, and perhaps you’ve touched on this a bit with regard to public-private collaboration. But in light of all that, how would you view the nature of this challenge? And how can we work towards ensuring the rate of technique innovation of transgressors doesn’t outpace our own capabilities? Thanks.
INGLIS: Yeah, what a great question. I would answer it at two levels.
You know, one is that broadly—and this is easier to say than do—but broadly, you would observe that we’ve been crowdsourced by those aggressors, and they’re very agile and very enterprising. There are even syndicates set up to do ransomware, which broadly connect various and sundry parties who probably don’t have a physical relationship—probably they don’t know who each other really are—but yet they collaborate against us.
We need to turn the tables. We need to crowdsource them, which is why I say we need to make it such that they need to beat all of us to beat one of us. Now you point out the difficulty of doing that, right, kind of in the light of day with bureaucratic and all manner of other processes that try to govern the speed at which we effect relationships and execute those relationships.
I would say that we need to reconsider what the nature of collaboration is, and for this I would point to our British friends, who run a place called the National Cyber Security Centre. We have tried to replicate this—and I think they’re doing a really good job under Jen Easterly’s leadership—in the form of something called the Joint Cyber Defense Collaborative, which turns collaboration on its head.
Don’t wait until you think you have something in hand that you know is valuable to some other party. That probably is too slow, and you probably don’t have enough insight into what’s really valuable on the other side of that permeable membrane in somebody else’s stovepipe. Let’s collaborate at the lowest possible level.
Let’s actually compare the insights we have in the bowels of these stovepipes, so that some kind of half-insight—a shard or a shred or a hunch—is compared and contrasted in real time across these very diverse, right, organizations—private and public sector combined—so that we discover something on the fly that no one of us would have discovered alone, or if we had, we would have discovered it too late for it to make a difference.
It’s working, right. It doesn’t happen just at the Department of Homeland Security’s CISA organization or up at the National Security Agency, which has a cyber collaboration center. It’s happening broadly across what we call sector risk management agencies and at the FBI, where each of them has engaged, hopefully in an increasingly coherent fashion, the private sector to say: How do we compare and contrast our various insights so that we can discover on the fly the threats to this space?
Now having settled on that, what I’ve just described is a kind of a model for collaboration in the response phase of this. We need to have an equally adept collaboration in the preparation phase such that we consider: how do we build resilience and robustness into these systems? How do we get the roles and responsibilities right? How do we get the people skills right? How do we get the technology right?
And that work lies before us in terms of: how do we make it such that it’s less likely that these events occur? We don’t have to wait for the two-alarm fire to essentially effect the collaboration. We do that by design. That’s harder, but it affects supply chains and all manner of commodities that then populate those supply chains. That, I think, is perhaps the next chapter of this collaboration.
SANGER: Well, just to follow on Catherine’s really good question, SolarWinds was probably the most interesting example of sort of an organic element of this. It was a case where Mandiant and, to some degree, Microsoft detected this. It came at a moment of transition in government for the U.S., where it sort of fell into one of these seams.
And yet we’ve been waiting around for probably more than a year for sort of the big lessons learned report from SolarWinds. I haven’t seen it. What do you think we should extract from that? When are we going to sort of learn, sort of get a full accounting of that?
INGLIS: I think the lessons were pretty clear very soon after, if not in the midst of, that, and those lessons were taken aboard very soon after. So if you look at Executive Order 14028—those of you who follow those things closely, published in May of last year—it essentially was a broad set of activities that were mandated within the federal government and the supply chains that essentially feed the federal government, to say: these are the lessons learned of our previous experience, not least of which was SolarWinds.
You have to have as mandatory features a variety of things—multi-factor authentication, segmentation, encryption, and so on and so forth—and practices on top of that that allow us to then better defend, right, these systems of systems. The Technology Modernization Fund that the Congress brought into being—a billion dollars, then applied broadly across technology within the federal government—is similarly a response to that.
When you then see people begin to reconsider—both in the private and the public sector—what’s the nature of our collaboration defending supply chains—which really was the biggest lesson of SolarWinds, that nobody was defending the entirety of that supply chain—what’s the nature of that collaboration? When you see companies begin to show up and say, I want to have a collaboration with the federal government or some allied government so that we can actually discover some things together that no one of us would have seen alone—those are all responses to that.
So I wouldn’t wait for the lightning strike that has a cover on it that says Lessons from SolarWinds. I would look at the diffused, kind of, set of lessons that apparently have been learned and are beginning to be effective. There are some further lessons that, perhaps, in the fullness of time, using kind of the scholarly research that can be done, we would tease out—something more—but I think we’ve begun to act.
SANGER: There’s a question right back here again, right in the middle there, yeah. There’s a microphone coming to you.
Q: Thank you, Director Inglis. Wonderful to see you, David, as well.
My name is Monica Ruiz. I am from Microsoft’s Digital Diplomacy Team.
So my question is around the creation of a new cyber bureau that was announced by the State Department earlier this month. I’m wondering how you envision your office at ONCD collaborating with the new cyber bureau, especially in the context of what’s happening in the U.N. with the Open-Ended Working Group and the ongoing cybercrime treaty negotiations?
INGLIS: In a word, richly. I think it’s a fantastic construct that the State Department’s just stood up, and Jen Easterly and Anne Neuberger and I all attended, right, the ribbon cutting for that, what, two weeks ago Monday—there might have been one more—where Tony Blinken and Wendy Sherman and Jen Bachus essentially described what the role of that new cyber, kind of, organization would be—a bureau for cyber.
I think that diplomacy is a very important instrument of power, especially where kind of an instrument of power like cyber cuts across national boundaries. And therefore we need to bring diplomacy to bear on this to understand: what we have in common with other nations—increasingly a lot in cyberspace; how we then effect collaboration across those national boundaries; and how we then define, right, the procedures, protocols, and behaviors that we would prefer, and bring those about in the largest possible community, the international community.
So my sense is that my office will partner richly with the diplomatic arm, right, of the State Department as they execute their cyber strategy, and that on occasion you might find me abroad but always speaking about and with—by, with and through, right—that department.
SANGER: Great. We’ll go to another virtually.
OPERATOR: We’ll take the next question from Joseph Nye.
Q: So my question follows the prior question. I’d like to know a little bit more about how you see your role in geopolitical crises. When the Solarium Commission set up your office, the idea—to use a metaphor—was that you would be the coach, that Anne Neuberger would be the quarterback on the field, and that the danger was that the coach would get too much drawn into operations.
And since we are in our first major crisis with the Ukraine events, have you and Anne worked this out and how do you see the role which the Solarium Commission sketched out for you? Obviously, you’ve got to be involved in everything, but on the other hand, you’re also supposed to be working out the structure for the government as a whole, including a lot of domestic things, for cyber.
So just tell us a little bit how does it work in practice? We all know what the Solarium Commission said, but now we’ve seen it in practice.
INGLIS: Yeah, Joe, that’s a great question. Thank you for it and for the foundations that we all stand on in terms of your kind of work on deterrence there.
I would say that that question is at once, kind of, only half as complicated as it needs to be, but the answer is more settled. First, there are principally three roles—and perhaps a couple more beyond that in a longer conversation I would mention—but three roles that come immediately to mind. The analogy that we’ve used in this space over the last nine, ten months is that I serve as the coach and that Jen Easterly at CISA serves as the on-the-field quarterback. Anne Neuberger retains a very special, kind of, third role, and I’ll describe those in reverse order.
Think about what the National Security Council does as employing all instruments of power that are necessary to be applied to bring about conditions that you would desire in some domain of interest. Cyber is no exception. And so the National Security Council—and for purposes of cyber, driven, led by Anne Neuberger—applies diplomacy, applies military instruments, applies the legal instruments, the financial instruments, to bring about the necessary desired conditions inside cyberspace. That’s why Anne has been leading that engagement of the Russians vis-à-vis trying to get them to help us, to assist us in ransomware, and so on and so forth.
And so my job is inside cyberspace, not outside cyberspace. So my job is to be the coach to ensure that all of the assets, the capabilities, the authorities we can bring to bear, that are inside cyberspace are properly aligned, that they’re inherently complementary, and that they add up to something greater than the arithmetic sum of its parts. By way of example, to make sure that what CISA does—Jen Easterly, the on-the-field quarterback—is complemented by what the sector risk management agencies do.
You may know, you do know, you well know, right, the Department of Homeland Security, kind of, engages broadly a lot of the critical infrastructure sectors, but so do the Department of Energy, the Department of the Treasury, the Department of Defense for their respective sectors, their respective lanes of expertise. And we need to make sure, increasingly, that those are complementary—that if you’re looking back at the federal government from the private sector, you don’t need to make this unholy choice of: do I, kind of, seek out one of those and hope that they have the whole story, or do I run the gauntlet trying to figure out how to piece that story together. But the government knows what the government knows because, by design, we’re coherent.
It is, of course, slightly more complicated than that. You have to add in the threat response that’s done by the FBI and various other players. But again, my job is to essentially make sense of that whole inside cyberspace. Anne’s is to apply the instruments of power outside of cyberspace. And Jen Easterly has most of the resources on the field to essentially deliver—operationally deliver—the objectives that we have.
And I am mindful that in most sports the coach is not allowed on the field in the middle of the game, and so I’m happy to bask in the reflective glow of the extraordinary performance of all those people.
SANGER: All right. Let’s see a question right here.
Q: Good to see you, Chris.
INGLIS: Nice to see you, Sam.
Q: Yes, thank you.
Casting your eye forward, what do you think of the role of offensive and defensive cyber—essentially the role of these tools as instruments of state power in international relations? Are we going to get to the point where we don’t think about, well, there’s foreign—oh, there’s this cyber thing off to the side? At what point, if ever, do we think about this as an integrated component of the international system and the exercise of power as part of the integrated tools? I think you’ve hinted at it, but I’d be grateful for your thoughts. Thank you.
INGLIS: Yeah, Sam, that’s a great question. I think we’re there. Now we might not be there in terms of the execution fully matching our thinking on this, but I think we’re there. And I thought that 2018 was the inflection point, right, where, again—as I indicated—the Congress said, look, this can be a traditional military activity, right, and the Department of Defense described it as an instrument of power—not the instrument of power such that a cyber action requires a cyber response.
When you say cyber is an instrument of power, that says a lot. That says that it has to be thought about in the constellation of all instruments of power. It might be the right response for something that another instrument of power does, but it’s not necessarily the right response for a cyber provocation. And so you have to actually put it on the field and say it needs to earn its place in any particular implementation of the moment, any particular strategy. You consider its implications, its applications, in all the right context.
When earlier there was a question about, you know, should the State Department have a bureau? What do we think about that? I think that’s a fantastic idea, so that we can ensure that cyber is properly considered in the statecraft that we have with various nations. Because if this instrument of power doesn’t work in the international kind of arena, then I’m hard-pressed to imagine how it could possibly work in a national silo. It just won’t work on its own in that isolated environment.
SANGER: You know, the analogy I often use is airpower, which took about twenty years before people began to think about integrating it. It certainly wasn’t in World War I. It was by World War II.
INGLIS: Right. And I was born and raised in the Air Force, and so I was an Air Force Academy graduate—hand on heart. I can still remember the description by my elders there saying the reason that they took the Air Force away from the Army was that the Army considered airpower to be long-range artillery.
I think that we’ve left that era in terms of having a similar kind of myopia about cyber. Cyber doesn’t exist for its own sake. It’s not an instrument that lives in a silo of its own sort. It has broader applications, and therefore it needs to be considered in a broader context and held accountable to make sure that it plays its role appropriately in that space.
SANGER: We are running down to the end here. There was a question right here. Yeah, the young lady here.
Q: Hi, Director Inglis. Thank you so much for your time. My name is Jessica, and I’m from Shift5. I do appreciate your callout specifically of the infrastructure bill and the potential opportunities that we have with the infusion of resources to tackle not just the physical infrastructure, but the cyber resilience that underpins all of our physical infrastructure.
I would be curious, as you are an Air Force grad, about your particular thoughts on the cyber resilience of our commercial aircraft and the lacking cyber intrusion detection systems, and therefore the inability to monitor aircraft onboard systems for a cyberattack. We talk about the cyber resiliency of aircraft versus our iPhones. I think you can publicly agree that there’s a disparity in cybersecurity methods between the two. How do your role and the potential infrastructure bill address that gap in cybersecurity with regard to commercial aircraft? Thank you.
INGLIS: Well, I think I’ll broaden that question a bit to say that I think your question probably cuts to the chase in terms of, is it just information technology—general purpose information systems that run apps or that have general purposes based on what code you want to run—or does this also apply to operational technology?
I would describe an airplane, like a car, right—or, kind of, a valve that’s causing fuel to flow up and down a pipeline somewhere in America—as operational technology. And for too long we’ve assumed that because it’s physically distinct and it’s not, kind of, observably well connected to the underlying digital infrastructure, that somehow it’s safe based upon its physical isolation.
That’s not true anymore. They have attack surfaces that are reachable, sometimes by virtue of kind of coming in through the IT—the information technology—side. That was the dilemma in the Colonial Pipeline: the IT attached to the OT, the operational technology. Or sometimes using exotic methods, whether that’s satellites or kind of all manner of other things.
SANGER: In that case, by the way, it wasn’t really attached; it was the company that shut down the operational technology.
INGLIS: I think it was—again, I don’t have the details richly at hand. My sense in the Colonial Pipeline was that there was an abiding concern, right, that because this thing was known to be on the IT side, it would transgress into the OT side, so let’s just shut it down, all right. Maybe a thoughtful decision, but all sorts of consequences in terms of the psychology of the people that thought that they were going to get fuel that day—but that’s a whole different story.
Having said that, we need to give equal time and attention to operational technology, right, because, you know, critical—health, life, and safety—critical functions depend upon that to an even greater degree than they do upon general-purpose IT. And so we need to step in and try to figure out, you know, what is it that these connections afford transgressors the ability to do? How do we then defend that? How do we then, kind of, right-size this to say: let’s build the resilience and robustness in by design, make a defensible architecture, and then actually defend it?
It doesn’t matter whether it’s OT or IT. They’re increasingly, kind of, intermixed in ways that we have to think about.
SANGER: Well, the OT side of the Council on Foreign Relations has a strict rule about ending on time. They have a particularly strict rule when there is lunch outside, and so I just want to remind everybody who’s here that there is lunch out there. If you’ve joined us virtually, you’re on your own for lunch.
But in thirty minutes, we’ll be beginning another symposium, Cybersecurity by Other Means—Diplomacy and Deterrence.
And I just wanted to thank you, Director Inglis, for joining us today. Your comments will be on the CFR website with a transcript for anybody who needs it at some point in the near future. I hope you’ll come back as all of this is done. I wish you good luck as you enter into what I suspect is a pretty fraught set of moments as we get into the next phase of the war here.
INGLIS: Well, thank you, David. And I would say good luck to us, right. This is a shared, collective proposition. And I would say that, if anything, at the end of the day we need to imagine what thought leadership is required to compel, impel, inspire each and every one of us to participate in our own defense. Nothing short of that is going to work. No one of us can defend all of us. And so we need to figure out how we are going to do this together going forward.
SANGER: Well, thank you very much, and thank all of you for your great questions. (Applause.) Thank you.
STEWART: All right. Good afternoon, everyone. Thanks for joining us. I have the pleasure of presiding over this conversation about diplomacy and deterrence and we’re going to get started today by having our panelists introduce themselves.
My name is Camille Stewart. I’m the global head of product security strategy at Google, where I sit at the intersection of our product security teams and our central security team, and I have worked across government and private sector on cybersecurity issues for a number of years.
HARDING: I’m Emily Harding. I am the deputy director of the International Security Program at the Center for Strategic and International Studies, which is a very long title that just means that I get to oversee the work of about fifty scholars doing tremendous work in intelligence, defense, and tech policy. Before that I spent almost two decades working in the federal government, both in the Senate, on the Senate Intelligence Committee, and then in the intelligence community, and a couple years at the White House.
HULTQUIST: I’m John Hultquist. I’m from Mandiant’s intelligence analysis shop. We look at threats from all over the world using our incident response and, you know, a dozen different ways to collect this data; we bring it all back to one centralized intelligence hub where we develop intelligence on threats around the world. I’ve been with various versions of Mandiant for about twelve years; before that I was with DIA and Diplomatic Security at State, mostly looking at the Russian threat.
SMEETS: And my name is Max Smeets. I’m a senior researcher at ETH Zurich, the Center for Security Studies, and also direct the European Cyber Conflict Research Initiative.
STEWART: Wonderful. As you can see, we’ve got a great panel ahead. And what I’m actually going to do is have each of them give about a two-minute overview on their thoughts on diplomacy and deterrence in this space, and we’ll use that as a foundation for our conversation.
Emily, do you want to get us started?
HARDING: Sure, I’ll get started. So it’s an interesting and very broad topic. I think that you need to take it back to basics when you’re speaking about operations in the cyber domain. Right now there’s no common lexicon, no real norms and understandings. My colleague Jim Lewis at CSIS has done some really tremendous work on international agreements around cybersecurity and cyber issues, but those have yet to really gel into a broad set of norms that govern work in the cyber domain. There’s no agreement on what is cybercrime, what is cyber espionage, what is a cyberattack, what is cyber war. You have politicians who sometimes understand the cyber domain and sometimes really don’t, calling things willy-nilly, oh, it’s an act of war. Well, is it? What does that really mean if it is?
So given that, why is this so hard? It really is a combination of things. When you’re thinking about a game-changing technology—Stingers in Afghanistan, hypersonic weapons, nuclear weapons—those all came with a debate around what norms govern them and how they should be used. What is a proportional response? We haven’t really gotten there yet in the cyber domain, and it’s partially a combination of two things. First, attribution is very difficult to do quickly in this domain, and John can talk extensively about this; he’s done tremendous work in this field. Second, as sort of a partner to that, there’s the ease of deniability. Actors have proven themselves really adept at staying arm’s-length removed from any kind of cyber activity that they don’t want to claim and then claiming it when they do.

And that combination makes it very challenging for policymakers—people who, you know, sat at the NSC like I did—to make decisions about how to respond to a cyberattack, to a cyber operation; what does this mean and how do we react to it? It also prevents the threat that is at the core of deterrence, and that is a quick and decisive response to an activity. If you can’t attribute it quickly and if you don’t have a set of policy options ready to go, it’s very difficult to pull something off the shelf and respond immediately and thus send a message or deter future action. I can talk about this a lot more later, but in the 2016 election interference that we saw the Russians do, when we were studying this on the Senate Intelligence Committee, we saw this play out in excruciating detail within the Obama administration.
And really, I have all the sympathy in the world for them; they were in what they saw as a totally unprecedented situation and they were under attack, but they could not say with 100 percent certainty from who and what that meant, and that delay in attribution, that inability to pull something off the shelf and immediately deploy it had nearly disastrous consequences. That’s something that we can’t afford to do four years, five years, six years later. It’s time to actually get that settled and move forward.
I think that we will get better. We will get faster. You know, folks like John who are doing this work are already making tremendous strides in that attribution piece, in trying to get to a place where we can act quickly. I think there’s a really solid story to be told right now about Ukraine that is really just sort of emerging. So I have hope for the future; it’s just that right now I think that we still need to really wrap our heads around this as an issue.
STEWART: Great. Thank you.
HULTQUIST: I’ve been asked, I think, for the last four months now—(laughter)—since, you know, Christmas or the beginning of the year: what’s the likelihood of an incident against NATO allies, against the United States? These usually turn into good-natured arguments, but there’s a question of whether any cyberattack against the United States would be crossing a major red line, and I’ve argued that it doesn’t. One of the most important things we have to keep in mind is what we mean when we talk about cyberattacks. When I say cyberattacks, what I’m talking about is disruptive, destructive stuff—everything from hitting an industrial control system to a NotPetya-like widespread, destructive event.
But the word I keep throwing around is “limited,” right? Those incidents—we’ve seen many of them already—were largely limited, right? They didn’t take a society and bring it to its knees. They didn’t bring the economy to a major halt. They are survivable. For a society that’s already experienced COVID-19, you know, a lot of the effects may not necessarily even register. The reason these actors carry out these incidents is not to bring society to its knees. I don’t think there’s any real question that turning off the power for three hours at a time is going to have that effect. They do it for the psychological effects. They do it to undermine institutions, right? They do it to undermine your sense of security—particularly, you know, in places like Ukraine, the belief that the system is safe. In the United States, in 2016, they did it to undermine our elections, right? We had actors in systems where they could conceivably make some edits or changes or maybe alter some things, but really, they weren’t going to change the election, and I don’t think, at the command level there, they expected to do that. What they expected to do, though, was to undermine our reliance on those elections and our belief that those elections were secure, right? It’s always about undermining our institutions.
So there’s a real—I think the real important watchword here is limited, right, and that plays two roles, though, right? It’s good news somewhat, but it also means that this is a great tool because you could conceivably use it without starting World War III. You can carry out attacks that don’t bring society to its knees and conceivably get away with it. And historically, you know, the attacks that we have seen, these actors kind of got away with it, right? It took years, in most cases, for us to even accuse them of doing it. The Olympics—I talk about the Olympics all the time. Sandworm or the GRU, who we were talking about earlier, attacked the Olympics; they tried to take the opening ceremonies offline. This was an attack on the entire international community. It took us four years before we even bothered to blame them for it, right? There’s no hope for deterrence in a scenario where we don’t even blame the actors for four years, right? And that is an incident that affected literally everybody in the international community.
So I think that, you know, these actors recognize that they can get away with this type of activity and that’s what makes it such a good option for them. They were looking for the psychological—the sort of psychological effects. That’s what they really want to do; they want to undermine our resolve, particularly in Ukraine; they want to undermine our elections elsewhere; they want to undermine our sense of security.
STEWART: Great. Thank you.
Max, do you want to talk to us a little bit about NATO?
SMEETS: Yes, I mean, those are already great points, and I thought about the Olympics and realized that there’s an obvious connection here, because with Olympic Destroyer many weren’t immediately convinced that it was Russia, right, John? And you’ll know a lot more about that.
But yeah, I wanted to take the conversation a bit towards the NATO alliance, and here the main takeaway is that whilst we have seen a convergence among allies in terms of the need to develop a cyber posture, we have actually seen a divergence in what this posture should look like, in particular on offensive cyber and the role of the military. And let me talk for thirty seconds about the three key components of what we can see as a cyber posture: capability, strategy, and a legal understanding.
So on the capability side, what we have seen since 2018 is that the majority of NATO members have now established a military cyber command with some type of offensive mandate. But the difference in operational capacity today is enormous: whereas you have, particularly, the U.S. and several others who’ve really put the resources into operationalizing this command, the majority of NATO allies still have commands operating on a budget of a couple of million dollars. It’s enough to be at least officially part of the cyber club but certainly not enough to operate effectively in this domain.

Now, the second one, around strategy: yes, of course, all the countries have established a cyber strategy, and particularly a defense cyber strategy, and have updated it repeatedly. But there too, from 2018, we have seen some significant differences emerging, right, with the U.S. developing U.S. Cyber Command’s vision of persistent engagement and DOD’s strategy of defend forward, with a focus on operating globally, continuously, seamlessly—recognizing that activity below the threshold of armed attack can still be strategically meaningful and that the military’s Cyber Command has a role to play in potentially even conducting effects operations in peacetime. That is not something that most NATO allies would be willing to do, and it changes the perspective across the Atlantic.
And then the third one, which connects to this, is that what we’ve seen over the past few years is countries articulating not just that international law applies, to which all allies agree, but how it applies, and we’ve seen a significant difference between, on the one hand, the camp of sovereignty as a rule, with the Netherlands and France, and, on the other hand, the U.K., which says, you know, sovereignty doesn’t apply in cyberspace.
And the last point here is that it’s dangerous to argue that these differences in the alliance come simply from differences in maturity. I think the allies are actually on different policy paths, and that requires, as a result, some real coordination and cooperation to at least bring them closer together.
STEWART: Great points.
So let’s just start with diplomacy. Emily, you mentioned norms, you mentioned the lack of taxonomy. We’ve got work to do, right? Where are nations currently succeeding and where are they falling short, and what diplomatic efforts should we be focusing our attentions on to make progress in this space?
HARDING: Yeah, so I’ll pick one from each category. I think where we’re really succeeding is the cooperation at the tactical level, the kind of thing that Max mentioned with different levels of coordination, but it’s happening. At the working level, people are sharing indicators, people are exercising together. Right now, Locked Shields is going on as a big NATO exercise; they say it’s just a coincidence it’s happening at the same time as Ukraine, but excellent timing. And that is how we win. The NATO alliance, the sharing of knowledge, the hunt forward, the defend forward teams, this is how we’re going to win in this domain. So I think that’s where things are going well.
Now, that level of tactical information sharing and tactical cooperation really needs to be paired with a strategic discussion, and that is hard for lots of reasons. When I was on the Hill and we were doing oversight of the government, people used to come in all the time and brief us, and you could boil down every single briefing to two phrases: it’s hard and we’re working on it. So—(laughs)—I think that that’s true with this too; it’s hard and we’re working on it. But let me talk a little bit more about why it’s hard and why we still need to work on it. The hard piece: the people who need to have those strategic-level discussions are swamped. They are staring at China, they are staring at Russia-Ukraine, they are staring at, you know, a whole host of global issues from supply chains to food shortages. Sitting down and having a broad, strategic-level discussion about what the norms should be in cyberspace is like, yes, we should do that; that’s about fifteenth on my list of priorities. We need to create the urgency before the urgency is created for us and really have those discussions.
The other piece of that, I think, is that a lot of these concepts are very fuzzy and they’re wrapped up in domestic values and national values. I mean, just here in the U.S. we have debates all the time about free speech and what can and cannot be regulated in cyberspace, given our First Amendment rights. Our European friends have very strong views on privacy and have implemented that in a whole host of different ways and that bleeds into this debate as well.
So it’s difficult. But if you can take it up a few levels—my friend Sue Gordon always says that if you disagree down here, take it up a couple levels and get to a place where you agree, and that place where we agree is the norms and the values. This is a place where NATO allies, where like-minded democratic countries, can sit down at a table and say, we all agree that spies are going to spy. That’s a thing that’s going to happen.
But that when you’re engaged in operations that affect human life, that affect public safety, that’s a different level of threat and that’s where we need to be building the norms and the guidelines.
STEWART: I’m so, so glad you brought up the point about being strategic and the lack of bandwidth there. We have to prioritize that if we are going to make progress because, quite frankly, there will always be the next Russia-Ukraine, the next ransomware attack, the next whatever. But if we’re not making progress on these more strategic initiatives we’ll never come to that consensus.
So, Max, tell me, can we get some norms? (Laughter.) Can we find consensus in NATO? What work should we be doing in NATO to do that?
HARDING: Yeah, Max. Fix this.
SMEETS: Let me pick up the point—(laughter)—let me pick up on a point that John made and then Emily as well on the norms and then also on the sharing side of things.
So on the norms side, just to give a, potentially, annoying different angle: yes, we should think about red lines, and I don’t know how many people are currently sitting in the room, but I think everyone in the room can come up with a couple of different potential red lines that we should consider—no critical infrastructure attacks, financial systems shouldn’t be attacked, health care off limits, all of those things.
But there is a second question now as well, particularly in the U.S., considering its change in threat perception, where it has argued—rightly so, I think—that activity below the threshold of armed attack can cumulatively still be strategically meaningful. Maybe one gigabyte of data being stolen by the Chinese is not a big deal, but doing this repeatedly is significant.
So the second question is: what is not a red line? And that’s actually a really hard question to answer. I’ve asked it a couple of times in different rooms, and rarely do I get a very clear, coherent response of, OK, what is off limits and what is actually allowed to be done—because rarely does an adversary undertake activity that isn’t strategic, and we have just argued that all strategic activity shouldn’t be done. So there’s this strange kind of norms question that has emerged that I haven’t really resolved.
The second point is on sharing, and I think Emily made a great point on the importance of sharing, and in some ways we’re doing this already. But equally, I think, we’re not doing it enough, right. So in at least the allied context we’ve got a couple of different initiatives.
The first one, most obviously, is the notion around sovereign cyber effects. We can’t share tooling. We can’t share exploits, what exactly we’re doing, how we’re operating. But at least we can collaborate when we want to achieve certain effects. And second, we can conduct exercises together, such as Locked Shields in Tallinn.
I think that is not enough. Much more can be done that isn’t being done right now, and that comes particularly on the cyber ranges and infrastructure side of things. I think that’s a space which is, one, incredibly costly for many countries to establish well and, two, where you see real opportunities for collaboration, because use by one country or one training program doesn’t necessarily reduce the effectiveness of another country’s use of it as well.
And so if I would make a push and recommendation—what should allies do in the coming years—they should build, I think, even a billion-dollar cyber range for the training of their operators, developers, system administrators, and other people who are crucial to the workforce of military cyber commands and, potentially, intelligence agencies.
STEWART: Great recommendation.
John, you know, with attacks intentionally kept below the line, with the need for more collaboration and for a cyber range like Max is talking about, and with the dynamic of cyber criminals being leveraged as a shield to continue to block the attribution we were talking about earlier—how can we make some progress in this space? Where should we be focusing our attention in terms of deterrence?
HULTQUIST: There are a lot of good points there. I think we need to rank and stack our problems, right—and they’re going to change constantly; they always will be changing. But we’ve looked at a lot of different problems in this space and I don’t think we’ve really prioritized them.
Good example—there’s the ransomware problem. There’s the elections problem. There’s the espionage problem. I, personally, think that the espionage problem—you know, spies are going to spy—is probably the least addressable issue.
I think that the ransomware problem is probably the most addressable issue, and, you know, if you look at our vulnerability to that problem, it’s fairly large. These actors are now hitting a lot of critical infrastructure. We saw them hit health care during, you know, the raging days of COVID. They’re crossing a lot of lines, and at the very least we want to push them back to where they’re not constantly pushing those lines.
The election problem is another good example. It’s not solved. In fact, the unfortunate reality is the last election we had or the last, you know, major election we had we saw new players get into the mix. So when the Proud Boys thing happened, my first instinct is like, well, the Russians did it, right. Like, I mean, I couldn’t say that. I didn’t have any evidence whatsoever. I’m, like, well, here they are. We’ve been waiting and waiting. This is it. This is the play.
It was the Iranians, right. So now it’s not even just a Russia problem. The problem is growing on us. I think we need to have a conversation about what problems we want to stop and start ranking them and going after them. Also, I feel like we’re sort of running from one fire to the next and that’s not going to work. I do think the ransomware problem is, largely, addressable and it is absolutely out of control and, potentially, costing us the most money.
HARDING: Can I just jump in on the part of the Iranians and the Proud Boys? Because—(laughs)—I went through that same roller coaster of emotions. (Laughter.) There was a time around the 2018 elections and the 2020 elections where I just did not sleep. There was too much to worry about. The Proud Boys/Iranian problem was, I think, disheartening in that we saw this new player burst onto the market in grand fashion. (Laughter.)
But, in a large way, it was a success story, because the United States government and its allies—and that’s a really key point—had their eyes open for this kind of potential activity. The excellent folks at DHS CISA had done a lot of prep work—so much prep work—to say to people, this is what a normal election problem looks like and this is what’s more troubling, something to be suspicious of. Then when this activity was noticed it was located, attributed, downgraded, and released shockingly quickly.
HULTQUIST: Super fast. Historically fast.
HARDING: I mean, it was thirty-six hours.
HULTQUIST: Yeah. Absolutely.
HARDING: Yeah. So, I mean, this is actually—as upset as we all were to see it happen, this was actually a good news story in the way that it was handled.
Now, to Max’s point about red lines, I’m not sure that we were ready to do something to actually respond to the Iranians and try to create deterrence for the next time around and that’s where we need to do more work.
STEWART: I think that’s a great reminder of our point about being strategic and the prioritization that John talked about. The investment in attribution and getting things out there really quickly are signs of that kind of coalescing around being more strategic and focusing there.
How can we create actual consequences for the actual actors, particularly those hiding behind criminal groups and plausible deniability, and are current tools working, right? You said that this was a success story in the Iranian context, but a success to what end, right? Did we deter the behavior or were we just able to make attribution? How are sanctions, attribution, indictments—all of those things—actually moving us to a desired end? And I’ll open that to any of you.
HARDING: Yeah. So I can start off with that just because I brought up the point. (Laughter.) I opened this Pandora’s box.
The Iranian thing was a success story in that we were able to broadcast very quickly to the American people, who were in the midst of a very difficult election, that this is not a thing. This is not real. This is not something you need to worry about. There are not these bad actors falsifying ballots all over the place.
Now, we can leave aside the question of the domestic issues that happened in the 2020 election, but on this specific issue it was a success in that it was mostly defused. I would not call it a success in that there was a broader strategic policy response.
You brought up several things there—the sanctions question, the indictments question. Sanctions are great until they’re not. There’s, really, only so much that you can do with a sanctions package. A lot of the individuals that you might be targeting with sanctions don’t really care.
There are ways that you can make life painful for a Russian oligarch. For a hacker who is working ten levels down from the Russian oligarch, it’s much more difficult to actually create some deterrent pain there.
Indictments, same thing. If this person really wants to come visit their kids at college or take their kids to Disney World in the U.S., then, you know, great. But trying to find them and arrest them, it’s really much more of a messaging tool than anything else and I think, honestly, a tool of last resort.
If you look at the way that DOJ and FBI operate, they are law enforcement officers and what they want to do is build evidence and prosecute a crime and that’s just not the model that works effectively for these actors. It takes too long, it’s too slow, and then while they’re building a case for prosecution they can’t take the information and share it, and that, honestly, is the most important piece.
This is where I’m going to make a pitch for the private-public collaboration and the deep, deep importance of having the U.S. government and its entities and private sector operations that see this on the frontlines on a daily basis doing all of the collaboration possible to try to go after this problem set. That’s my soapbox.
HULTQUIST: I think you’ve hit all the points I would have made. As far as the election situation went, I think we’ve gotten to a place where we’re talking about capability and intent, right. I actually got into a conversation recently with somebody from another country who’s dealing with a different actor, a non-Russian actor, and whether it was a question of capability or intent. And right now in the United States we think, you know, Russia has got capability. The question is whether they have intent.
In this other country, they said this actor has got intent but the capability is just not really there. And the problem with not being able to deter, right, is that when these actors have intent you’re going to run into these black swan events, right. They will hit again and again and again, and these will be incidents that people don’t even read about in the news, right.
The problem is that, just because of the nature of technology, eventually there will be a major black swan event. They will get through. And on the defense side, I would argue you’re absolutely correct—our defense for the elections was fantastic. Our response was fantastic.
But if they keep trying, eventually they are going to get through. They are going to have something that makes it onto the news, that causes a division in the U.S. electorate. I mean, there are all kinds of potential outcomes here, and that’s what happens, I think, with an actor who may not quite have the capability but definitely has the intent.
We’ll have that black swan event. And I think another good example was the pipeline incident we saw, right. We had been warning—you know, myself and my colleagues had been warning—this is coming, it’s coming, it’s coming. They’re knocking over so many things. Someone’s going to get hurt. Something important is going to go down. It’s just a matter of time.
And so I think if we can’t figure out how to sort of approach the intent side, we’re just talking about a matter of time before there’s that black swan event.
STEWART: Definitely agree.
Max, anything on this one?
SMEETS: Yeah. For all that I said about this divergence in NATO allies’ cyber posture, the good thing here on imposing consequences is that we have seen a real development in the EU waking up to the fact that it has to think about this as well.
And so with the EU now having the cyber diplomacy toolbox—at least an agreed set of measures in place as to what can be undertaken in response—the U.S. is not there alone anymore in thinking of how to impose costs but can, potentially, do this even more effectively in a coordinated manner.
And a second point—it comes with the nature and the title of this panel—is that when we talk about imposing costs we quickly get into the deterrence mindset and less into the mindset of how we can take the initiative away from our adversary, how we can make sure that we disrupt their activity.
We’re already kind of in the second step—OK, after it has been done, what can we do? But, clearly, that other question is just as relevant, and we’ve seen great strides in this domain, of course, over the past two, three years.
STEWART: That’s great.
I’m going to open it up for questions in just a minute. But I’m going to ask one more while you guys pull your questions together.
We talked a lot about Russia-Ukraine. There was a lot of talk about cyberattacks on the margins. It’s a great illustration of the question: if cyber capabilities continue to be leveraged, what will be the impact? Who will bear the brunt of the back and forth, of the tit for tat? As folks—grassroots cyber actors—start to jump in and—(laughter)—figure out how they play a role, what are we looking at? Who’s going to bear the brunt, who will be playing in this space, and how will that impact us?
HULTQUIST: So when I talk about limitations I’ve got to be really clear. I think from a society—societal aspect, we’re going to be fine when it comes to almost all of this. You know, we made it through COVID-19, right. There are a lot of businesses in my neighborhood who are out of business now.
So, I mean, you know, altogether, we’re going to be fine. My customers may take a real hit, right, and I think that’s important to remember. The people who really are on the front lines are our private sector and it’s important to remember when we start employing these capabilities, too.
I saw—during one of these sort of little kerfuffles that we get into with Iran every now and then, I think there was some news of, like, a cyberattack against their capabilities, and it’s important to remember that Iran is not going to retaliate against Cyber Command, right. They’re going to retaliate against some random company in the United States. That’s who’s going to actually feel the burn from this stuff. So we have to keep that in mind no matter what we do.
HARDING: Yeah. I would agree. This question of who is a combatant is going to be the thorny question over the next few years. I am reading Nicole Perlroth’s book right now, which is really good. Very, very thorough. I’d love to talk about my thoughts on the book.
But one of the things that she outlines is the response inside Google when they first saw the cyberattack coming from China, and some of the quotes are priceless. You know, who would have thought that a nation-state actor would be interested in Google? How could we possibly have been expected to respond to a nation-state actor invading our territory? And that is a totally understandable perspective for somebody who was a startup and then grew this massive company and just never really had to think about it from a national security perspective.
Now, for somebody like me who’s spent twenty years, basically, in the intelligence community, I’m, like, of course, you’re a target. Come on. But that’s a product of my training and my upbringing that I tend to think this way and that they don’t.
So bringing these two sides together to collaborate, to cooperate, to try and share information, is going to be absolutely critical, and I think American companies, European companies, really thinking through whether they are going to be counted as a combatant, not by the U.S. government but by our adversaries, is a real challenge.
The folks in the executive branch right now, the—Jen and Chris and Anne triumvirate have done a really phenomenal job pulling together the JCDC and a lot of this collaboration between the private sector and the government. It’s initial steps that really need to be built on.
When you look at China and the way that they think about what is government versus what is private sector, that is not a distinction for them. They see government and they see those who help the government when we ask them to. In Russia, there’s also really not a distinction between the oligarchy and the government. There’s the government and then there are all these tools of government that I can draw on whenever I want to because they know where their bread is buttered.
When we look at our adversaries and we say, no, no, no, it’s private sector, they go, oh, yeah. Right. Of course. So I think that thinking through who counts as a combatant, how they’re going to be affected by this next round of potential warfare, is going to be really challenging. And this, I think—we can talk about this during the Q&A a little bit more, but this question of redlines and escalation, this is where, I think, it gets really thorny because if Google gets hit what does that mean for escalation?
SMEETS: Yeah. Camille, it might be a bit boring, but I agree with the previous panelists already. It would be much nicer if I disagreed. (Laughter.) But, of course, the private sector will bear the most significant costs in case of some type of retaliation.
But just to add, maybe, one point here: we often hear this question being raised—will Putin, potentially, conduct a cyber operation against the West, against the U.S.? Should we be concerned about that? We should have our shields up, of course. But it’s not just Putin, right, and I think we sometimes overestimate the amount of control that the Russian government has over such a wide set of criminal groups and other activist groups that are operating in Russia.
And, you know, as an academic, one of the most famous theories to understand these relationships is, of course, principal-agent theory, where normally you would argue that the principal has the least control over the agent when there are information asymmetries. And the asymmetries are enormous here in terms of the information these criminals have on targeting—what they are capable of and who they want to target—which seems to suggest that agency slacking is very high. The risk that we may have these groups operating, yes, in favor of Russia but not completely under control is significant, and it increases the risk of, I think, more of a WannaCry/NotPetya-type of scenario—less kind of critical infrastructure attacks directly on the U.S. but, certainly, more consequential collateral-damage-type attacks through ransomware or self-propagating malware.
STEWART: I think the thing that is clear is the place we have been a bit strategic is in that collaboration between public and private sectors and that’s really important. I’m interested to see how that continues to evolve between the JCDC and the Cyber Safety Review Board and some of the other mechanisms that have been deployed of late.
Do we have any questions? Do we have a (staff in ?)? Virtual questions first.
OPERATOR: We will take our first virtual question from Lyric Hughes Hale.
Ms. Hale, please go ahead.
We will take a virtual question from Adam Segal.
Q: Yeah. Hello, everybody. Thank you very much for doing the panel. I’m sorry I couldn’t be there with you in person.
I think this question is for Max, although the others can, I guess, question my assumption. So, Max, it seems as if in the U.S. the kind of debate about whether defend forward and persistent engagement as escalatory is over with people, basically, believing that it is not. So I wonder if there’s a different perception of that among, you know, the NATO allies that you spoke of and if there is, within NATO, differences on that view.
SMEETS: Yeah. That’s a good question. I think it’s actually the discussion. I don’t want to avoid your question, Adam. I think the discussion is less about is it more or less escalatory. But the question is much more, I feel, a legal one, like, should the military be allowed to operate in peacetime, potentially, do this, you know, globally. What is then the relationship with intelligence? It’s that question, in particular, that is holding, I think, many particularly continental European countries back in developing a similar posture. So it’s less of an escalation question and more of a legal bureaucratic question that is constantly raised right now.
STEWART: Anyone in the room? Yes. Back—(inaudible).
Q: Thank you. My name is Erin Dumbacher. I work for the Nuclear Threat Initiative.
I wanted to, first, make a comment on your rank-ordered list of priorities. I would love to add operational technology and military systems there. Sometimes I think we have a lot of conversation about the IT side and maybe not enough about the OT.
But my question is to what extent do we really need to solve or pay attention not only to the attribution conundrum but also to signaling in this space relative to escalation management. You know, we have other technologies for which, globally, there’s some sort of recognition for what the movement of a bomber implies or the—or other types of maneuvers or on the—both the military and from the leadership side.
Is it possible to build sort of clarity around what different cyber actions signal, in fact, and do we need to be working on that?
HULTQUIST: I think the best example of cyber signaling I’ve seen has been—are a sort of read on the actions of an actor that’s publicly called—we call them Isotope. They’re called Berserk Bear, Dragonfly. They have this history of getting into U.S.—they’re FSB-related, so Russia’s—actually, their—sort of their internal security services, they have a SIGINT mission.
But, anyway, for about a decade they’ve been digging into U.S. critical infrastructure, and we looked at it in two ways. One, are they, you know, sort of digging in for, like, that moment when they need to be ready for the contingency. The other thing is are they digging in to signal to us that they’re digging in, right, that they are there in case they need to be.
And I think that’s probably one of the best examples, I think, of this sort of signaling I’ve seen in this space because it’s holding real capability or real infrastructure under threat. I’ll be interested in any other examples or—
HARDING: Yeah. So—oh, go ahead, Max.
SMEETS: No, I think you can speak more to this, Emily. But it just reminds me, actually, of a CFR blog post by Jason Healey, which is, like, not the deterrence the U.S. wants, with some great quotes from Ben Rhodes around the presidential election, where, supposedly, some of the U.S. retaliatory options were taken off the table because of concerns about the FSB, in particular, being in U.S. critical infrastructure.
I don’t know if that’s true. But it’s a fascinating case in terms of signaling and, supposedly, whether deterrence works or not. At least there might be one case where it has worked but not in the way that we want it to. But you may know more about that, Emily.
HARDING: I might. (Laughter.) I’m going to put in a plug here for anybody who wants to read the Senate Intelligence 2016 election interference report because we do go into that a little bit.
Yeah. So this was one of the problems with the Obama administration response in 2016. By the time they understood somewhat the extent of what the Russians were up to, they had very limited time before the election and they had very limited prepared options.
The other thing, it’s easy now and it was easy-ish in 2017, 2018, to look back on the complete package of information and say, well, clearly, they should have known this. Well, when you’re in the fog of war and when information is coming at you, you know, a piece at a time day after day, it’s a lot more difficult to make sense of a very foggy picture. But, again, that’s the reason why we have to be strategic now. We have to be thinking forward now.
I wanted to make a point about the signaling question that you asked, which I love coming from a nuclear scholar. Nuclear scholars have spent decades talking about very precise signaling options and deterrence theory and how these things work together, and I think that folks in—working in the cyber domain have a lot to learn from that scholarship.
I think we need to be very careful about making comparisons, though, because it’s just a totally different set of tools, and the cyber domain is still so young that no one has figured that out yet. In nukes, you know, there’s this finely tuned, you know, this signals this and this is code for this. In cyber, it’s, like, nobody really knows what any of this means yet—(laughter)—and part of the big problem is that a lot of the tools are dual use. If you implant a tool in someone’s systems that tool could be used for espionage.
HULTQUIST: Absolutely. Yeah.
HARDING: It could be used for destruction, and you don’t know. This gets to John’s point about intent and capability. Maybe the adversary has the capability to implant this tool on your network. What’s their intent? Are the Russians there to spy on a potential new administration? Are they there to tank confidence in an election? It’s really not wise to sit back and wait to see which one it is.
HULTQUIST: There were two crews in the DNC. One was, you know, GRU, and the other one was SVR. And the SVR guys were just kind of spies being spies, right? They were abiding by the rules, sort of.
HARDING: The GRU, on the other hand, some men just want to watch the whole world burn.
HULTQUIST: (Laughs.) Yeah.
STEWART: A question. Yeah?
Q: Yeah. Thanks. Steve Charnovitz, GW Law School.
So we’ve heard a lot on the panel this morning and this afternoon about collaboration—public and private collaboration. But I was a little surprised when I heard earlier the question what is the U.S. government response to an attack on Google, and I would have thought that’s the whole role of U.S. government, to defend the public, including the U.S. companies.
So I’m wondering, I mean, do we expect Google to have its own international policy and international capability to defend itself? I would think not. I don’t think we’re trying to have Google take international military or cyber action. We might be comfortable with Google having a very activist environmental policy but not an offensive military or cyber policy.
So it has to be, I assume, the U.S. government who’s going to defend Google, and so I’m wondering: are we doing enough as a government to defend and help our leading technology champions in the United States if they’re vulnerable? And I guess they are vulnerable. Is the U.S. government doing enough to safeguard them?
HARDING: Oh, Steve, I could go on, like, a twenty-minute tear about this. But I’m not going to because other people have questions.
I think the short answer to the question are we doing enough, no. The longer answer to the question, though, is what’s appropriate, and I think this is what you’re really getting at with your question.
When Sony Pictures was hacked, lo, those many years ago, that, initially, was a hands-off response by the U.S. government until it became clear that it was the North Koreans trying to silence free speech, and then the White House got involved.
But still, I mean, was the FBI responsible for what happened at Sony? There was no way that Sony would have let the FBI into their systems ahead of the attacks if the FBI could have prevented it. It’s not the FBI’s job. They’re, you know, feds. They’re the ones supposed to be defending after the fact, finding the criminals and prosecuting them. That doesn’t really work here.
The U.S. doesn’t have a domestic spy agency. We don’t have an MI-5. The FBI is very poorly suited to the mission of trying to defend in advance of this kind of cyberattack.
Your question about Google—you know, would we defend Google—OK, so let’s say we defend Google. Are we defending the cyber startup that has five employees and didn’t pay any attention to security? Are we responsible for that? How are we defending them? I mean, these—I ask these questions knowing full well that I don’t have the answer and I don’t think anybody does right now.
The challenge is trying to find the right line between a business executing its own business practices properly—doing the simple things it needs to do: patching, two-factor authentication, you know, just the basic cyber hygiene stuff that everyone needs to do—and the point at which the government takes over as a response in a deterrent fashion.
You can make a comparison to crime, which the FBI and local law enforcement are supposed to do. But that’s after the fact when the damage is done. You can make a comparison to national defense. We all pay taxes so that we can buy aircraft carriers and, you know, F-22s. Should the government be thinking that way in the cyber domain, and if they are what does that imply for the Googles of the world letting the feds into their systems?
I can see the room cringing when I say that because everybody says no, you know, that’s not the job of the U.S. government. So what is the job?
STEWART: Yeah. Proactive defense is not likely to be the place where the U.S. government plays. I mean, CISA has a strong mission for voluntary support and can help the small company all the way to the large company build up their defenses and be more proactive, implement the two-factor authentication, all of the things, and, I think, will continue to play in a voluntary space preattack.
What we have to figure out is how we would stand up as a USG to support an organization, depending on the severity of the attack. So Sony, we decided that, you know, trying to attack free speech, which is a fundamental constitutional right, was something that we wanted to come after.
We need to do that strategic work that we talked about earlier to figure out what those lines are and what necessitates—what’s a significant cyber incident that the U.S. government would mobilize itself around that happens in the private sector. But there is unlikely to be a moment where all of the U.S. companies open up their systems to let the U.S. government do something on the proactive defense side of things.
HARDING: Meanwhile, there’s Mandiant. (Laughs.)
HULTQUIST: You know, honestly, and we’ve had our own incident and I can’t give too many details but, you know, we had—I think, had a really strong—a good experience working with the government as far as dealing with it. There were—there are, clearly, you know, things that we’re very good at, like, for instance, the incident response team that worked this problem are, literally, the best incident responders on the face of the planet. These are the—we picked the—you know, handpicked a team of all-stars. But we still needed the U.S. government’s help and they were able to fill in a lot of gaps that made it—made the whole process easier and better.
STEWART: Which is why that proactive collaboration is so important. The trust that needs to be built between the private sector and the public sector is what’ll get us further, so that in the event of an incident companies are pulling the government in early so they can enrich their data with the information that they have and vice versa. We could declassify things, all of that. That’s why that collaboration, proactively and consistently, is going to be so important.
Next question. Monica?
Q: Hi, everyone. Monica Ruiz, Microsoft.
So earlier in the conversation you all talked about the importance of strategic engagement and also information sharing in the context of international cyber norms or norms—red lines. So I’m curious how you all think about countries that don’t necessarily have the capacity to engage strategically and to share information.
How do you think about building that capacity, especially in the context of what is going on in the United Nations as part of the Open-ended Working Group—the OEWG—and the fact that the previous report, essentially, was endorsed by a lot of countries and reaffirmed the eleven norms that came out of the 2015 UN GGE. And so I’m curious how you guys think about sort of building that capacity beyond the countries that actually have it right now. Thank you.
HULTQUIST: So we have some experience working in areas that don’t necessarily have a lot of customers. But we still find value working there because we learn a lot, and I think that’s one way to sort of get the private sector involved in these sort of problems.
Some of the areas that are on the front lines can’t necessarily afford the million-dollar, you know, security solutions, right. But they can offer a lot of great information. The bleeding edge of a lot of the threats I’ve tracked has historically been in places like India and Taiwan and Ukraine and, you know, the Middle East—and it wasn’t on every occasion a customer relationship. You have to go in there and develop partners, and those partners oftentimes pay you back in the form of information that you use to secure your other customers. So there is value there. It’s just not necessarily, you know, the normal sales process.
STEWART: Yeah. You’ll see companies investing to raise the collective level of cybersecurity so that we all will benefit from it. On the USG side, I think that’s an important question that we need to be focused on. The strategic investments in collaboration, the support that we provide now, will have a direct impact on the norms discussions and the multilateral bodies we are engaging in that will dictate how we engage on cyber in the future.
And so it is a strategic imperative and part of that strategic conversation that needs to keep happening to be focused on how we engage with smaller nations and nations that are developing capabilities. So it’s important.
SMEETS: Just to come in with one quick comment, I think it’s a great question, Monica. What we really see is, indeed, a capacity gap between the countries that are able to attribute and those that are not. And we have to get those countries who are unable to attribute—and who, as a result, are very hesitant to follow the public attribution statements of maybe their allies or other countries—to at least the capacity to verify attribution claims. That’s a starting point.
Now, of course, that comes with a number of issues, one of which being that attribution is not only the Sherlock Holmes type of process that companies like Mandiant in particular are involved in, where you collect the different puzzle pieces—you know, where the C2 is set up, all of those kinds of things—to come to a conclusion. It is also sometimes a more proactive process, particularly by the mature actors who are already in adversarial systems and see the attack going out.
Now, in that second case, you have a high level of attribution confidence, but it’s even harder to share with a wide number of other countries. But on the first one, yeah, I think getting, you know, Microsoft, other companies, involved in training programs to at least ramp up the capacity to verify would be a good first step.
STEWART: Another virtual question.
OPERATOR: We will take our next question from Zaid Zaid.
Q: Hi. Zaid Zaid from Cloudflare.
I have a question about companies that continue to operate in Russia. There have been a number of articles, a lot of attention paid to who’s leaving, who’s staying, et cetera, how they’re winding down their services.
Cloudflare, as well as a number of other companies, is still in Russia. We, you know, provide internet security. We provide VPN services, and one of the things that us staying there has allowed us to do—has allowed Russians to do, rather—is to get information from outside of Russia.
But there’s also been this push to, you know, close Russia down from the internet, in some ways. Just love to hear how you all think about that.
HARDING: Well, I tend to be in favor of keeping Russia widely connected to the internet—in fact, throwing every pipeline of information you can in there. You know, this is a difficult question for so many companies—to leave or not to leave—and if you leave what does that really mean for the long term.
I’ve been, from the beginning of this whole thing, talking about how it’s not going to be a short fight. I just—I don’t see how it’s going to be a short fight. And if, as a company, you can’t be out of Russia for more than six months or a year, then think very hard about pulling out now because what happens in a year when you have to go back in or else your business model can’t survive. What message are you sending then?
I think there are lots of ways to support the Ukrainian people and I think that every company has to make their own decision here. I, for one, have been heartily encouraged by seeing the outpouring of support for Ukraine that has come from the private sector. I mean, citizen sanctions have, in a way, done as much good or more good than what the governments have done. I think it sent a very strong message and I think the repercussions on the Russian economy are going to reverberate for years and be very difficult to undo.
So I think every company really has to make their own decision and then, you know, do what you got to do to explain that to your customers or your shareholders. But if the basic fundamental goal is to support the Ukrainian people and then continue to speak truth inside Russia, I think that’s a noble goal.
HULTQUIST: Without getting into sort of the information flow—I mean, one of the really interesting things that happened really early, with regards to these sort of citizen sanctions, is that we watched a lot of organizations, a lot of customers, take very, like, clear public stances on the war, including, you know, divesting themselves from Russia. And at one point we were, like, OK, we need to figure out who these people are, because they’re sort of, essentially, putting themselves in a higher risk profile.
It got to the point where it was almost impossible to track. There was—I mean, the bad news is, like, you know, OK, I think you sort of—you might consider you’ve raised your threat profile. The good news is so many people have done it now that I don’t think it matters almost. Like, it’s become—
HARDING: Safety in numbers.
HULTQUIST: Yeah. So there’s safety in numbers. I don’t know if you could really—you know, if it had been one organization—you know, really early on, I think, we saw, like, some of the international gaming or sporting organizations, for instance, and I was, like, well, that’s—you know, they have a history of hitting these sports organizations. Putin loves—you know, loves sports. It’s, like, a thing for him. You know, we really were kind of worried about it. But now everybody’s done it.
So I’m, actually, like, sort of encouraged by the fact that there is this safety in numbers problem.
STEWART: Yeah. The only thing I’ll add is I know many of the companies have real physical security concerns as they evaluate this and they want to protect their people. So they’re weighing that as part of that strategic decision.
And so it’s, definitely, not a—I mean, it’s not a decision without complication, and so they’re probably weighing a number of factors as they determine what to do.
SMEETS: And then the insider threat risk has just increased so enormously for the companies staying. Major concern.
STEWART: Next question. OK. Well, I think that’s it.
Well, thank you all for joining us for this discussion. The questions from the audience were rich, the comments from the panel were rich, and so I think we all have left with a mandate to be more strategic and collaborate with the government—(laughs)—and to be thinking long term so that we can get ahead of some of these issues.
So thank you all for the time. A big thank you to the panelists and to CFR for having us. (Applause.)
This is an uncorrected transcript.