Norms, Laws, and Cybersecurity

Wednesday, November 4, 2015
Speakers
Christopher Painter

Coordinator for Cyber Issues, U.S. Department of State

Scott Charney

Corporate Vice President of Trustworthy Computing, Microsoft

Eneken Tikk-Ringas

Senior Fellow for Cybersecurity, International Institute for Strategic Studies

Presider
Craig Mundie

President, Mundie & Associates

Scott Charney, corporate vice president of trustworthy computing at Microsoft, Christopher Painter, coordinator for cyber issues at the U.S. Department of State, and Eneken Tikk-Ringas, senior fellow for cybersecurity at the International Institute for Strategic Studies, join Craig Mundie, president of Mundie & Associates, to discuss the current state of laws and norms regulating state and non-state actors in cyberspace. The panelists additionally consider the potential of offensive cyber weapons.

This symposium is held in collaboration with CFR’s Digital and Cyberspace Policy Program.

SEGAL: Good morning, everyone. I’m Adam Segal. And as the guy who kicked off this morning’s talk said, I direct the Council’s Digital and Cyberspace Policy Program.

I just want to real quickly thank all of the panelists for coming and participating and making this such a great event. I also want to thank Layla Adler (sp), who really put most of this together, coordinated with everyone, and is making sure that everything is going so smoothly, and Alex Grigsby, who works with me as assistant director of the program, for helping put it together.

As Richard mentioned, the program has three focuses: cybersecurity, Internet governance, and digital trade and privacy issues. Last year’s symposium was on Internet governance issues. Probably next year we will start focusing on some digital trade and privacy issues. We have a number of major initiatives we launched this year—the blog, which is focused on a whole range of issues and has a number of contributors: Rob Knake, David Fidler, myself, and a bunch of outside authors. So if any of you have a blog post in you, please write me and let me know.

We have a new Global Governance Monitor focused on Internet governance issues, which is a great resource if you’re coming to the issue new. And we have a new Cyber Brief series. We’re on the fourth. We’ve looked at radicalization on the Web, proportional responses to cyberattacks, how you promote norms in cyberspace, and government procurement and supply chain IT security. And we’ll have another one coming out in about two months.

And as Richard mentioned, our—one of our main focuses is to bring public and private sector together to generate new ideas and create new connections. So during the day today, if you have any ideas what we should be doing, what we’ve missed, where we should be thinking, please come up to me and let me know where we should be going forward. We have a survey, actually, up on the blog about what we should be covering, so if there are topics that we’ve missed I also ask you to go—to go fill that out and let us know what we can be doing better.

So thank you again to everyone for attending and for participating, and we’ll now start panel two.

MUNDIE: Well, welcome to the second session on today’s Council on Foreign Relations symposium. I’ll remind everybody again that this session is on the record. The title of this session is “Cyber Offense and Rules of the Road.” And we have three people joining me today for this conversation: my good friend Scott Charney, vice president of Trustworthy Computing at Microsoft; Eneken Tikk-Ringas, senior fellow for cybersecurity and international—at the International Institute for Strategic Studies; and Chris Painter, the cyber coordinator at the U.S. Department of State. So, like the last session, you know, we’ll—I’ll have a little chat with them for a little while to get warmed up, and then we’ll open the floor to questions.

I think to start we’ll talk a little bit about the rules-of-the-road question. The previous session talked about a broad array of issues, and I think for each of them, you know, there are questions going forward about the rules of the road. A lot’s been happening in this regard. And so I thought maybe I’d let each of the panelists talk briefly about, you know, what their favorite aspect of the rules-of-the-road issues are right now, and then I’ll come back and we’ll ask some questions. So, Chris, why don’t we start with you?

PAINTER: So this has been really one of the major areas of focus for us for the last few years now, and it’s been an area where, by and large, I think, the United States has led the debate. And that is: this is not a space where nothing applies. This is not a free-fire zone. And we’ve been very clear that one of the basic tenets of this is that international law applies in cyberspace—both the kind of international law that deals with things like the Law of Armed Conflict and the U.N. Charter, but also international law below that very high threshold. That’s significant because if you say it doesn’t apply and you need some new legal structure for the Internet, that itself can be destabilizing—it would mean that it is a free-fire zone; it is a wild, wild West.

The second part of the initiatives we’ve been promoting is the idea that below that very high threshold—which, frankly, we haven’t seen much activity in—you need to start thinking about norms of state behavior: things that states either should restrain themselves from doing or should affirmatively do that make the entire ecosystem, in the long term, more stable. And this complements the kind of defensive and other issues that we just talked about in the last panel.

And so, again, I think in a very short period of time we made tremendous progress in taking forward not only this norm about theft of intellectual property that was discussed extensively just now, but also stability norms. Those include not attacking, for instance, the critical infrastructure of another state in peacetime; not attacking the CERT or CSIRT of another state—states should use CERTs for good and not bad, not for offensive purposes, which is not always true now; and then, finally, that a state, when asked, should cooperate with another state when malicious code is coming from its territory, to either mitigate it through law enforcement or technical channels.

And these are really important because we started promoting these just recently, and we got, you know, pretty wide take-up of them pretty quickly in this U.N. Group of Governmental Experts, which, if you think of international relations and moving things forward, getting that kind of consensus in just a couple of years I think is remarkable.

And then the third part, the third part of the structure, is confidence-building measures. This is an area where there’s not a lot of understanding between states. This is an area where there’s a lot of chances of misperception, miscalculation, and inadvertent escalation. So how do you address that, particularly when you have difficult issues of attribution?

Well, this is the one area where I think there’s a parallel to the nuclear world, where confidence-building measures played a pretty large role. And these are things that are not rocket science, though. They’re things like transparency measures, you know, making sure you have points of contact, perhaps hotlines, other things that you can do along those lines. They can be cooperative measures, cooperating against shared threats, and ultimately stability measures, which I think merges nicely with norms.

So that framework altogether has been something we’ve been advancing. And again, I think big success there in having the OSCE adopt a set of 11—I don’t know why it couldn’t have been 10, but it was 11. It was like the “Spinal Tap” movie—(laughter)—but, you know, 11 confidence-building measures, which were important, and now implementing them. We’ve done a workshop just recently in the ARF.

That workshop, in Singapore, we did in partnership with Singapore for those countries. But, going forward, I think one of the major things we need to do is get wider and wider acceptance internationally of these norms, and that really is a major administration priority—an absolute presidential priority—to promote these norms, this concept of international law, this framework around the world, and to get it more universally accepted. And you’ve seen this in a lot of the leader statements coming out of the White House, and recently the AUSMIN statement of the ministers of Defense and Foreign Affairs and the State Department.

This is, I think, something that we really are going to be concentrating on over the next year and really beyond that. And I think although this is not the silver bullet for everything, this helps create a more stable environment.

MUNDIE: Eneken, you want to—

TIKK-RINGAS: Yeah. Well, I guess my favorite part of the norms conversation is that it’s all the same norms; just the road is different. And with the road being different, I think what’s really exciting for a lawyer is that we are seeing norms meeting other concepts. So lawyers are really challenged to think about how to make those old rules and norms work on this new road.

So, going to the framing of this panel, for example, I think that norms in and of themselves have come to constitute an important deterrent. So I would say that in this norms discourse, when it comes to a spectrum of deterrence, we are really talking about one track. We are now using norms to answer questions that we traditionally used to look at in stovepipes, and therefore maybe applied straight stovepipe solutions to.

But now we see that, as was mentioned in the last panel, we’re dealing with a threat surface. And on that threat surface we see different actors, and they interact in ways that challenge us to use those old norms and potentially develop new practices around them. That means thinking about how we implement those norms for the purpose of deterring very different actors. So that, I think, is the fun part.

MUNDIE: Scott.

CHARNEY: So we are a big fan of norms, in part because we believe there’s too much short-of-war activity on the Internet. And it’s interesting when you think about the fact that in times of war we have norms like the Geneva Conventions, which require distinction and proportionality in your military activities, but there’s no Geneva Convention for times of peace.

But having said that, I think the challenge isn’t so much saying international law applies. The devil is in the details. So, you know, as a former cybercrime prosecutor, we said existing laws apply to the Internet. Then you look at the Electronic Communications Privacy Act and other laws—the Computer Fraud and Abuse Act—and you find it incredibly hard to apply those existing laws to the new environment.

And so, yes, international law applies. And I think we would all agree that if a foreign country flew a fighter jet over the U.S. and dropped a bomb on U.S. property we’d say that’s an act of war, but if a foreign country accessed physical property through the Internet and caused kinetic damage, how many people would say it’s the same thing; it’s an act of war?

So the real problem is: how do we apply these rules in this environment? And at what level do we want attribution? One of the big challenges in all of this, including the new Chinese-U.S. agreement, is how do you determine whether or not people are actually following the norms? And people have gotten into this space on the Internet where they say: because it’s hard to do authentication, because it’s hard to do traceability, there’s always plausible deniability.

But that’s just not true. I mean, in the physical world, in the United States people can be sentenced to death on a standard short of absolute certainty, called proof beyond a reasonable doubt. So what is the standard we want to use in deciding when norms have been violated? There are a lot of devils in the details that have not been worked out.

MUNDIE: I think one of the biggest challenges in my mind as we talk about norms is the relative sophistication, and I’ll say contribution, of the non-state actors in this generation of problems, at least when we’re talking about good guy-bad guy norms. There are many other issues too.

Because the asymmetric capabilities are so high compared to historical standards, what do we do with the non-deterrable people? What should be the advantage if you are, let’s say, a deterrable actor, a nation state, and you decide I’m going to comport with norms? What should you get for that agreement, relative to the threat that comes from those that can’t be deterred or who aren’t going to agree to the norms, whether they’re a nation state or a non-state actor? Anybody?

PAINTER: So, I mean, I think you can break it down in a couple different ways.

First of all, there is conduct by states; there is conduct by criminals; there is conduct by perhaps rogue states or others out there. So, you know, you have to tailor how you deter each kind of conduct and how you control it in the future by the actor, to some extent, even if you don’t always know who the actor is. So that’s why it’s important to have strong cybercrime laws in place—something Scott and I have spent much of our careers trying to do, and we’re still promoting very heavily, so that countries have the ability to deal with that.

But when we talk about some of these voluntary norms that I’ve talked about and what would bind states or, you know, where states could bind together if you get a large, likeminded group, it’s very much like things like the non-proliferation initiative, where those countries who are part of this, those large-tent likeminded countries, can band together to act against transgressors, people who are on the outside. So yes, you’re never going to get every single state, particularly rogue actors, to sign up to it, but this gives you a better way to try to enforce it.

The only other thing I’d say—just one comment on what Scott said—you know, I think there’s always this thinking in cyberspace—and I agree that how rules like proportionality and so forth apply is going to keep lawyers busy for a while, speaking as a recovering lawyer—but one of the issues that we can’t lose sight of is that you look at the effects. I mean, if there is major death and destruction caused by a cyberattack, you look at the effects; you don’t necessarily look at the means.

TIKK-RINGAS: Yeah, absolutely. When it comes to effects like that, there is, at that point in time, no doubt that the Law of Armed Conflict would apply. I think the question lies where you’re pointing, which is: how do we deter, potentially, those other actors?

Now, why use the word “deter”? I think one of the biggest challenges all of us face—and that concerns everyone my age and up who hasn’t been in the computer industry for their lifetime—is that we are not natives to this technology. And that means that we have to deal with two variables. One is our own disciplinary concepts, such as legal concepts: armed attack, use of force, what is a crime. The other is the reality of technology. And in pairing them together, we are not used to taking our old concepts out of their paradigms. That brings me back to deterrence—what is it about? It’s about changing someone else’s behavior.

And we can think about it as this potential of military response or military force, or forcible measures being used against an attacker, but I would say that in terms of cyber we can think about it much more broadly: how we incentivize those other attackers that maybe cannot be deterred under our traditional thinking. In terms of punishment we can think of, yes, effective law enforcement, which, where it works, deters at least mainstream attackers. We can think of economic sanctions; we are now talking about cyber sanctions. We have seen SWIFT being used to deter or to change behavior—that means international banking system sanctions against countries—and potentially there are DNS sanctions against countries.

So we can deter by other than just forcible measures, and then we can think about denial. And that’s where these different groups of actors come into play, because we create denial by better security by design and architecture. We create better denial by reinforced targets, meaning focusing on the infrastructure we want to keep on the safe side. We can actually create denial by better attribution—that means cooperation between countries who together can create effective attribution—or even by costs, meaning that if we raise the costs of attacks by all of that, it deters both state and non-state actors.

CHARNEY: So norms aren’t designed to deal with criminals. By definition they violate social norms. They’re criminals. And, you know, norms are about self-restraint for mutual benefit by governments. And if they’re using people as proxies then, speaking like a lawyer a little bit, the normal rules of agency apply, which is: if you’re not supposed to do something, then you shouldn’t hire someone to do it on your behalf. The real problem is attribution and proof again—when do you claim that this organization is a proxy for a government?

The other thing is, if you also have a norm, which governments are touting, that there shouldn’t be safe havens for criminals, one of the ways you think about checking the validity of the norm is: OK, if you claim that you are not supporting this activity, then what are you doing to stop it? It’s a comment you kind of famously made once to a government: Well, either you’re doing it or you’re not stopping it. Either way you’re the problem.

PAINTER: And I’d say one of the interesting things about the last GGE report in 2013, and again this time, is saying that a state should not be able to do through proxies things that it is prohibited from doing itself. Now, determining who those proxies are and going after them is somewhat of a challenge, but not an insurmountable challenge, I would argue. And then, you know, I also think this idea of states cooperating—and we’ve seen this even in the China agreement, or China understandings, in setting up this ministerial mechanism—is that there’s an expectation: if we see malicious code coming from your country, if we see things, you’re going to help try to mitigate it.

MUNDIE: One thing, Chris, you said that I agree with and—

PAINTER: Just one thing? (Laughter.)

MUNDIE: I’ll say it was one of the last things you said: this idea that if you agree to a set of norms, then a benefit is that if you’re not going to do it, you ought to, in fact, be willing to get together with other people who’ve agreed not to do it and prevent others from doing it. And I think that is something that really needs a lot of reinforcement. I think that whether it’s crime or just bad actors, even in that case, it really boils down, then, to the enforcement mechanism.

One of the things—and Eneken mentioned it—that I’m increasingly worried about is what I call the weaponization of the banking system, in the sense that it’s become very convenient for people to decide, well, you’re doing something we don’t like, and therefore the way we’re going to sanction you is we’re going to basically apply economic sanctions, up to and including exclusion from the banking network, as you mentioned. But in the world of the Internet we know today, you see other new technologies emerging, things like the blockchain and bitcoin. You see terrorist actors already creating their own currencies.

You know, what do you think the issues are from an enforcement point of view? We don’t have the equivalent of a U.N. peacekeeping force. We don’t, you know, have, as the last panel mentioned, effective multilateral treaties that really operate well in a uniform way on the criminal prosecution side. And so, you know, how do you see these things stirred together? Should we be looking at some kind of collaborative peacekeeping capability on the Internet? What are our other mechanisms, other than economic sanctions—particularly for the people who, increasingly, maybe don’t care about our banks—to try to get some enforcement, whether of norms or laws?

PAINTER: Do you want to start this time?

Well, I mean, I think there are a number of things. First of all, sanctions I think are an effective tool to reach people. And this is what they were designed for: to reach serious conduct, conduct that the other tools we have may not be able to reach. They’re not the exclusive tool—certainly there are law enforcement tools, and there are diplomatic tools. We say in our international strategy from 2011 that, depending on the level of the incident, we have a full suite of tools—trade tools, now sanctions tools, criminal tools, diplomatic tools, the whole range. And so I think that that’s important.

I would say that, in terms of the cybercrime area, more and more adherents are joining the Budapest Convention every day, there’s a list of countries that are going to join it soon, and others have actually emulated its provisions even if they haven’t joined it. So the siren song about having a new global cybercrime convention—which, you know, Scott will remember well, it took five years to even get this convention—would actually not help any of the countries out there really fight this.

And then this idea that I’ve heard a few times, of some kind of global instrument or treaty to deal with some of these cyberwarfare, cyberweapons, cybersecurity areas—I don’t think that really gets you very far either. Frankly, I don’t think we’re at all close to being mature enough to even understand what the different aspects of this are. This is still a fast-moving technology. I, again, don’t think that’s going to be effective in making us safer. So that’s why so much of our effort has been put into these norms and confidence-building measures, which are more voluntary.

How do you enforce it? You know, I think if you create some big institution it will collapse under its own institutional weight. So how do you do it? Like we did in a lot of other areas, frankly, and as we have traditionally: banding together with likeminded nations to go after this. I think that’s happening. I’ll give you an example.

It was raised in this last panel—the denial-of-service attacks on our financial institutions. That was not the end of the world; I mean, it was a major nuisance, but it wasn’t changing the integrity of the information or other things that would have been much more serious. But nevertheless it was a serious issue, and we reached out to countries around the world, even diplomatically, to try to build this idea of collective action against these threats.

And so usually you’d have the CERT people reach out and the law enforcement people reach out, but we did diplomatic demarches—which before I came to State I always thought “demarche” sounded bad, but you could have a good demarche. You can say: Can you help us? Can you work with us? And we’re going to be willing to help you if you come to us. So building those kind of collaborative networks I think is really the way to go.

MUNDIE: Other thoughts?

TIKK-RINGAS: Yeah, I would maybe just—let me just bring a different angle to it.

Distributed denial-of-service attacks against banks didn’t start just now, and neither did recognizing harsh security threats in and around cyber. And I think the positive effect of all this is that—referring back to what Michael Daniel brought up earlier—this understanding of a new normality, and being increasingly exposed to sophisticated and intensive cyber threats, has led to a situation where our banks are much better defended. It creates this incentive for the private sector to sort of self-heal. That’s not to say that this is the end of it and that it should all be left there, but now we come to: what are we doing about this at the international level?

So what are we doing? Chris mentioned the U.N. GGE report, and one of the things that was clearly echoed there, but didn’t start there, was this understanding that we need to abstain from malicious and hostile acts against critical infrastructure. There is some mutual benefit in that framework, especially when it comes to those key infrastructures, of which the financial system is one. Telecommunications today is clearly another, because it brings benefits, both social and economic, to countries across the world. And I would say grids, power grids, are the third one, and then some others in the pipeline.

So why did I want to mention that it didn’t start in the GGE? And of course it won’t end there, because we’re still to elaborate what exactly is to be protected and how. The U.S. tabled at the U.N., years ago, something called the global culture of cybersecurity, and that has developed over the years to encompass this understanding of what countries ought to be doing about protecting their critical infrastructure and how we are improving on that.

So, all in all, I think there is some real, important progress. And what I find really useful is that that progress happens in tandem. It’s about protecting yourself as a private sector entity, realizing that you’re a flagship company in a political situation, and then your government, and actually all governments, helping you do that.

CHARNEY: I think we have to think about where technology is headed and how to integrate technological solutions with policy solutions. I’m reminded of a story from the early ’90s, when I first became responsible for cybercrimes at the Justice Department. The Internet was much more wide-open than it is even today.

Many of the attacks were coming from a single university, and other universities complained about the fact that when they were being inundated with these packets they couldn’t get much help. And the university from which all the packets were coming said: Hey, we have an open network. The Internet is free. Anybody can sign in and we don’t care. So the other universities said: OK, we’re going to drop all your packets. And suddenly that university had a security regime in place.

And you know, one of the things we have to think about is that the world is going to be more authenticated. You know, people often say, do we want security or privacy on the Internet? The answer is yes. You can’t think about it at the Internet level; you have to think about it at the application level. If you’re doing online banking, the bank wants you to be authenticated; you want to be authenticated. If you’re going to do a blog post on a controversial topic, you want to be anonymous, and democracies want you to be anonymous so that you have free speech.

If you start thinking about that layer, you could start saying, well, maybe part of the solution is not just economic sanctions but Internet-related activities. I mean, we just have to think outside the box. We’ve all said that governments, as part of this whole norms process as it addresses these problems, should use all the tools in their toolkits. But when governments think about what those tools are, they tend to think about their traditional tools. I think we have to have a more expansive discussion about what the toolset for the digital age is, to make sure that people are adhering to norms, if you believe in them.

MUNDIE: So if you think about both the previous panel’s discussion and some of the comments we just had here, I think that a couple of things are going to have to happen. One is more of this idea that we have to come up with a model of protection that is not just making everything perfect. We’ve been for decades on the let’s-make-everything-perfect-and-then-we-won’t-have-a-problem path, and I think the complexity of the system is such that we know you want to be as good as you can be, but you can’t ultimately be perfect. And so that’s going to require more of this sort of high-scale observation and monitoring. And we talked a bit about it. And the legislation that was discussed this morning kind of shows how difficult it is to find a balance, even narrowly, between the privacy issues and the others.

We all talk every day about the problems of attribution, whether it’s an attack that’s coming from somewhere or just a crime that you want to be able to prosecute. How sure are you about that? You just mentioned this idea of authentication. What do you think the prospects are, or should be, for establishing norms relative to this question of identity authorization and access control, in order to get at this question of attribution?

And while I agree with your comments, Scott, that you don’t want to make this something that is applied at the infrastructural level—you want to apply it sort of domain by domain—how do you think we can get movement on this quickly? Because I think it lies at the heart of a qualitative change in our ability to protect and to enforce.

CHARNEY: So I actually think you see that happening. It’s been a long time coming, but you see it happening both on the government side and on the private sector side.

So on the government side you see some countries issuing national identity cards with strong enrollment processes, because the key is: how did you enroll people in the system? The U.S. has the NSTIC program, where they’re funding various projects to do driver’s license-related identity proofing and person proofing to generate ID cards. And so governments that are recognizing they need to protect citizens from fraud and protect taxpayer money from fraud are moving in that direction.

On the industry side, what you’ve seen over the last year, year-and-a-half, is this massive move to biometrics, with authentication that is bound to machines through trusted platform modules, whether it’s a laptop or a phone or a PC, so that people can now have a biometric ID that’s signed by a machine and passed to a relying party. And the point is not, as I said, to authenticate everything. It’s to authenticate the things where authentication is appropriate, but in a way that also allows you to filter out those things that you don’t want, that might be trouble.

I mean, just think of how much spam and phishing we have, right? One of the beauties of this system is you can give up credentials and it doesn’t matter. But as these technologies move forward you can see some interesting applications. So, for example, my mailbox should tell me whether the mail I am receiving is signed by someone I recognize, or that a government has recognized, or not. It doesn’t mean I can’t get unauthenticated mail, but at least I’ll know it’s unauthenticated mail. And if I see that it’s asking me for my banking password but it isn’t signed by my bank, that’s kind of a clue.

And so I think you see the technology moving in this direction and governments moving in this direction. And the really important thing is to make sure that we don’t go too far and say, if you’re not authenticated you can’t be on the Internet and you can’t engage in speech activities and other things. We have to strike that balance.
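[Editor's note: a minimal sketch, in Python, of the mail-flagging heuristic Charney describes above—deliver unauthenticated mail but label it, and warn when unsigned mail asks for credentials. The names here (verify_signature, KNOWN_SENDERS) are hypothetical illustrations, not any actual mail product's API.]

```python
# Hypothetical sketch of "tell me whether this mail is signed by someone I recognize."
KNOWN_SENDERS = {"alerts@mybank.example"}  # senders whose signatures we recognize

def verify_signature(message: dict) -> bool:
    """Placeholder for a real check (e.g., a DKIM-style or S/MIME verification)."""
    return message.get("signature_valid", False)

def classify(message: dict) -> str:
    if verify_signature(message) and message["from"] in KNOWN_SENDERS:
        return "AUTHENTICATED: signed by a recognized sender"
    if "password" in message["body"].lower():
        # Unsigned mail asking for credentials is "kind of a clue," as Charney puts it.
        return "WARNING: unauthenticated mail requesting credentials"
    return "UNAUTHENTICATED: delivered, but labeled as unsigned"

msg = {"from": "alerts@mybank.example",
       "body": "Please confirm your password.",
       "signature_valid": False}
print(classify(msg))  # -> WARNING: unauthenticated mail requesting credentials
```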

PAINTER: And, in fact, back a couple of years ago—and Scott was involved in this too—the National Strategy for Trusted Identities in Cyberspace looked at this. And Scott’s quite right: you don’t want to go too far. You don’t want to somehow make anonymous speech impossible, because there are lots of good reasons for it, but for critical things that need authentication you want to have that as an option. And that is largely a private sector-driven initiative, although there is some government facilitation that goes with it. And that’s something where, like Scott, I’ve seen a lot more progress.

I remember a time—and you all do too—when certain institutions were faced with big losses because of fraud and other issues, and they just ate the costs. They said, you know, this is a cost of doing business. But there’s another effect, which is the effect on consumer confidence and just reputation. And I think in the last couple of years, really, we’ve seen a switch on that. So there’s a lot more emphasis on it, and I think that’s a good thing. But perfect attribution—even as a former prosecutor, I’d say perfect attribution is not so great, for lots of good reasons.

The other thing on the attribution issue is that people always think of this as binary. They think the only way you can do attribution is through the technical channels, to follow the digital footprints, and that’s just not the case. Look at all the different things you could look at. In a criminal case you might follow the money. You’re going to look at, you know, other witnesses. You’re going to see what kind of other intelligence you have. We were quite clear and sure, for instance, that North Korea did the—I mean, there wasn’t a doubt in our mind—despite some people thinking there was a doubt, there wasn’t a doubt in our mind. So there are ways to do it.

The other thing is it’s easier to do attribution when you have sustained activity, which is often the more damaging activity than a one-off quick hit. And so, you know, I don’t think attribution, although it complicates issues and it continues to do so, is insurmountable.

TIKK-RINGAS: Yeah, for a second I was almost wondering, why does this seem a non-question to me? And it is because I come from a country with probably the strongest public key infrastructure. I’ll put it in other words: in Estonia it is natural that banks and government function on the same secure identification platform. Every citizen carries a national ID card that also serves, basically, as an electronic transaction device.

So it is working. The point of that, without necessarily going into details about Estonia, is that some things in comprehensive, integrated cybersecurity are easier for smaller countries, just by size itself, and then of course by agility and by how much that country has invested in policies and a sort of lifestyle to prevent (on it ?). And Estonia really got it right a long time ago, and these (things are really ?) functioning.

Not to say, at the same time, that there are no issues for us. And this comes to the other side of it, the identification and attribution part, which means we still have to work to improve our forensics capabilities, to work with other countries to share information, to exchange information at the CERT level—the Computer Emergency Response Teams’ level—and then of course to rely on the capabilities of our allies and friends, including NATO’s capabilities, when it comes to a certain type of network awareness and monitoring.

And finally, I would just point out one thing that I think constitutes a cost to this identification race, which is that oftentimes when you opt in for identification, that results in a lot of spam and advertisement. And that’s another sort of thing you should definitely regulate, not necessarily by law but maybe by policies.

MUNDIE: Of course in the U.S., you know, we see an oscillation as a function of how distant the last great big bad thing was, you know. When bad things happen, you know, the U.S. population generally swings more towards saying, I’ll accept more measures focused on security, maybe even identity, you know, and then it swings gradually back toward the privacy environment. Estonia, of course, as a society and a country, had its own wakeup call in cyberspace some years back.

And so, you know, I think it’s often easier, whether you’re big or small, to get people to be motivated to do these things when they have a fresh memory or a consistent threat, you know, that is present in front of them. And I think one of the challenges we have right now in many countries is that people face both a threat of privacy violation and a threat of crime and other bad things happening, and therefore it’s more difficult to find the balance.

So I’m going to pursue one more question. Then I’m going to open the floor to the audience, so think about what you want to ask.

In the panel, the first panel this morning, they talked a bit about the legislation. And one of the things the legislation talked about was near real time, and yet everybody acknowledged that you really want to get to real time. And I think in many aspects, whether it’s, you know, cyber warfare, and in particular I’ll say cyber defense in a new model, I think we’re going to increasingly find that speed is critical, because the thing that is qualitatively different about the Internet threats is the combination of speed and scale. So if you don’t want scale to happen, you better intervene quickly. And these things occur at a speed that humans don’t move at or think at.

And so what do you think the issues are going to be from a legal point of view and, I’ll say, a liability point of view for companies strictly operating in a self-defense environment who get to the point where they have non-man-in-the-loop intervention in order to safeguard their infrastructure?

But, say, for example, you know, our old—my old company, your current company, Microsoft, or any other giant cloud provider: if you come under attack and you decide that you’re going to essentially protect the infrastructure to live and fight another day, or do business another day, but in doing so you now take perhaps thousands or hundreds of thousands of companies and momentarily disrupt their business, because they all sort of have a concentrated dependency on these super-scale cloud services—how do you think we ought to deal with the liability there?

You know, you can say, well, I’ll try to stay up or open and keep you alive, but if I die then everybody loses. If I take action instantly because I think I might be able to survive but I have these side effects—how do you think we ought to control that kind of liability for businesses, because I think—my own view is—and I’ll just close with this thought. Somebody asked earlier about the balance of government spending between their offensive capability and their defensive capability.

And I think a hidden reason you see that discrepancy today is that today offense is the business of the government, but defense is the business of business. And in the past that wasn’t really true. When people think in military terms, they usually think, well, the government both made the weapons and used the weapons, and they had to defend against the other guy’s weapons. But increasingly in this world the government isn’t actually making the weapons platform—they’re making weapons to exploit flaws that come from commercial products—and so the defense is going to fall to the business sector. And so I think there’s a confluence of these things that at least I’m worried about in thinking, how do we get sort of the liability issues right in this environment?

CHARNEY: So, I think it’s a very hard problem, and I’d say that for a few reasons. The first is, even if you don’t say there should be strict liability but liability for negligence, which is the classic liability standard, you have to decide: what’s the reasonable standard of care? And in fast-moving environments with a lot of change, it’s often hard to define with clarity what the reasonable standard of care is. Then the second problem is whether you’ve met that reasonable standard of care. And, yes, you might take some defensive action that, you know, injured someone but saved many, and then the question is, is that a reasonable thing to do?

But the other thing I worry about is—with the speed of automation, one of the things we think a lot about is could we denial-of-service ourselves by accident? You know, if you basically have machines that are executing automatically on a range of things—you look at the “flash crash” on Wall Street where algorithms were just triggering because of other algorithms without human intervention, and suddenly you have this effect. And, you know, we can do things like we can block packets, but if we do that in an automated way we may end up denial-of-servicing ourselves.

And then the next thing I worry about is machine learning. One of the people at Microsoft Research—which you led for a long time—said something to me that gave me pause, and I had to think about it: once you really embrace machine learning, where machines take in a lot of data and learn from that data, sometimes the machine will learn things that surprise even you, even though you wrote the algorithms. Once they’re free to learn, they’re kind of like my kids. Sometimes I give them a message and it comes out funny, right?

And so what’s going to happen is machines might start making decisions on our behalf that we didn’t even anticipate and that weren’t even predictable, and how do you apply the reasonable standard of care to that? So there’s no easy answer.
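[Editor's note: a minimal sketch of the self-inflicted denial-of-service risk Charney raises—automated blocking guarded by a circuit breaker that halts the automation and escalates to a human once the block rate itself looks pathological. The threshold and function names are illustrative assumptions, not any deployed system.]

```python
import time
from collections import deque

BLOCKS_PER_MINUTE_LIMIT = 1000   # illustrative threshold, not a real product setting
_recent_blocks = deque()

def push_firewall_rule(addr: str) -> None:
    print(f"blocking {addr}")    # stand-in for a real firewall rule push

def automated_block(addr: str) -> None:
    now = time.time()
    _recent_blocks.append(now)
    while _recent_blocks and now - _recent_blocks[0] > 60.0:
        _recent_blocks.popleft()  # keep a one-minute sliding window of blocks
    if len(_recent_blocks) > BLOCKS_PER_MINUTE_LIMIT:
        # Circuit breaker: if we're blocking this much, the defense itself may be
        # the outage (the "flash crash" failure mode). Stop and page a human.
        raise RuntimeError("block rate exceeded; pausing automation for human review")
    push_firewall_rule(addr)
```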

MUNDIE: That may be—but that may be the only way protection ensues in the future.

CHARNEY: That’s right. And I will say, you know, there is also this concern, as you raised, about the difference between defense and offense, which is: companies can do a lot of defense, but once governments, for example, start automating offensive reactions—I give John Badham great credit. In 1983 he did “WarGames,” long before people were thinking about these potentials. But if we allow the machines to decide on our behalf when to strike out at other people, and then you add machine learning to that, how close to “Terminator” have we come, is the question. (Laughter.)

PAINTER: Closer every day.

A little more? You wanted to—

TIKK-RINGAS: I could go.

Well, I would take the machine out of the loop, in a way—it’s not to say that machines are not going to be there, but I would not take humans out of the loop. And so I would steer this conversation toward: what are we seeing in terms of trends in legislation that directly touch upon people, who still have and will have responsibility for their business, regardless of whether society uses machines or not?

And I would like to go back to a court ruling from the 1930s. This is the T. J. Hooper case. And the logic that follows from there is that if there is a technology available that is not unaffordably expensive and it’s known to save lives or property, then you ought to have it. And—

MUNDIE: Like machine learning.

TIKK-RINGAS: And very logical. The very idea of it is that it’s just a new technology, after all. And of course it’s complicated, it’s complex, and we are still learning how to apply it all, but the principles are all the same and we can make use of them.

Now, as some examples of how it’s happening, we’re already seeing that the responsibility for technology in a company has moved from the CTO level, the CIO level, to the CEO level. What that means is that our executive managers are made responsible for data protection, for consumer protection, for intellectual property protection—for all those handles that support business in information systems.

And adding to that insurance, and the required sort of standard of agility, it really comes to the point where your liability is attached to your business model. If you are a defense contractor, then your business risk is higher. If you are an e-commerce company doing online dating, well, then your business risk is there. And we ought to be better at attaching the right labels to those liability handles.

CHARNEY: Before we go to—

MUNDIE: So let’s get—let’s get the contracts right.

CHARNEY: But before we go to questions, I have—I want to touch on one thing you said, because we’ve seen this.

You know, on “Patch Tuesday” of every month we issue all these patches, and we do massive testing before we issue. And then you have “Exploit Wednesday” because the bad guys reverse engineer the patch and then send out their exploits. And they don’t do testing. They just let it go. If it doesn’t work quite right, they don’t care. The problem with keeping the human in the loop on defense is there’s no requirement that a human be in the loop on offense.

MUNDIE: Right.

CHARNEY: And so the real problem becomes—even if you’re only using machine technology for defense, the fact is there are implications to all your users when you’re at hyperscale. And if you say you’ve got to keep a human in the loop that your adversary does not, how are you going to keep up?

TIKK-RINGAS: Maybe just very briefly on that. The question for me becomes, if we put humans out of the loop, can machines be strategic about what we have to achieve? If the answer is no, then we have to make sure that something else is more strategic there.

PAINTER: And I’d say it’s kind of a blend. So if you’re talking about, you know, network defense at the perimeter, doing things where you’re looking at signatures and you’re blocking them—not just shuffling them off to say, OK, here’s something we need to look at later; and this has been a lot of the move in Einstein and other systems—I think that’s generally a good thing, because the second-order consequences aren’t going to be as large if you’re simply blocking malicious activity and taking mitigating actions within your systems.

I think it gets a lot more complicated—because the policy, frankly, is not developed in this area—when you go outside of your systems and you have a range of second-order effects. So to give you an example: we talked a little bit on the last panel about this idea of hacking back. That’s bad if you do it in person. It’s even worse if you have automated responses, because you could be hitting innocent third-party victims domestically and internationally; you could be violating sovereignty; you could be escalating without even knowing it. And you don’t want to do any of those things.

The other thing is, let’s say you were just saying, OK, one way to mitigate a botnet is to go and issue code to turn the bots off everywhere. Again, there are huge legal issues behind that, and there are huge sovereignty and international issues behind that, so you can’t really automate that as well.

So when people talk about automation as, you know, the way we’re going to solve all this, I think we have to be cognizant of some of the limitations there. It doesn’t mean that in time we can’t think about some processes and have some of the international discussions we need to have for this. But I still think you need a man—not a man-in-the-middle attack, but a man in the middle to actually help think about these second-order effects. But I think there’s a lot you can do, you know, at the perimeter, at the cloud.

You know, you talked about the risk of shutting down a lot of people because you decide to shut down your own system. There’s a lot you can do, short of shutting down your own system, in terms of blocking and mitigating attacks in cloud services and domestically. And I think, from a liability perspective, one of the ways you deal with that is your terms of service with the various customers.

So I’d also agree with Scott that, even though we are a naturally litigious society, it’s been very interesting—because I remember talking about this literally 20 years ago, that someday there would be a standard of care developed for this, and there still really isn’t. Now, I think that as this gets more mature and people even start adopting voluntary standards, that will develop over time, but that’s not been a driver.

MUNDIE: Yeah, and this framework stuff was sort of a push in the direction of creating some, you know, uniform standards.

So let’s open the floor now to the people, the members in the audience. Wait for the microphone. Tell us who you are, your name and affiliation. And try to, in this case, direct your questions to one of the panelists if possible. Let’s start right here.

Q: Hi, Joe Marks from Politico. This is for Chris and probably for Eneken too.

So since the no-commercial-hacking agreement with China, two things have happened. One, there’s not been any real evidence either way on whether China is complying to an extent that the U.S. would agree it’s compliant. Two, a seemingly similar agreement was reached between China and the U.K., and Germany has said it’s interested in a similar agreement.

So, one, what’s your reading of this proliferation of similar agreements? What does that mean for this as a developing norm of some kind? And then, two, if there are more agreements but no compliance, is there still some value to these agreements? Or, conversely, do a lot of agreements that no one’s honoring damage the cause of commercial spying being verboten?

PAINTER: So I think, as Michael said in the last panel, it was significant to get this agreement with China because, you know, first and foremost, it’s something they never said before. It’s something they never said, that that was off limits. And it creates a standard, a metric we can hold them accountable to. And so I think that’s important. As far as whether they live up to the agreement, we’re watching. The president said we’re going to watch. We’re going to watch carefully. We’re going to look at the information. We have mechanisms in place to do that, and we’re going to continue to do that.

I do think this is the good proliferation—the proliferation of this norm—because it is one of the norms we’ve been championing, and other countries are reaching similar understandings. But even more broadly, we think that no nation should engage in this. And this is part and parcel of a very aggressive effort that we are making to promote this and our other norms internationally and to get acceptance from as wide a community as possible.

I do think it’s interesting to consider why China agreed to that. You know, one of the key things about a norm that you’re trying to get accepted internationally is that it has to be universally appealing. So if China had a norm saying there’s absolute sovereignty in cyberspace, we get to control everything—we’re not going to sign up to that, and many, many countries won’t. That’s not universally appealing. It doesn’t help everyone. Not attacking critical infrastructures? Everyone sees benefit in that, and so that has led to acceptance.

This one—I think there might have been some view that, well, this is a China-and-the-U.S. problem, but it really isn’t. It’s a global problem. And the fact that China agreed to it—and now the U.K. has a similar agreement and Germany is looking for one—very much aids people’s understanding that this helps the entire world community and helps our efforts.

Eneken, did you—

TIKK-RINGAS: Yeah, I would maybe just come at it from a slightly different perspective, joining you at the end, though. Almost every government in the world is struggling to get its head around the whole complex of cyber. And I would say that it is inevitable that all countries are becoming more and more responsible players in it, if not for other reasons then because ICTs constitute a considerable source of economic growth for all of them.

And when it comes to becoming more responsible actors, then I think the value to be had in such agreements or such processes of agreement is getting a much better understanding of each other’s redlines. That means, where is it that we really need to look into either enforcement restrictions, or where it is that we cannot agree to each other’s sort of standards of compliance?

And on this point of sovereignty, I would just say that—I’ll say that there is full sovereignty in the very fact that countries are having these conversations. The question that we disagree about is the exercise of sovereignty. And that is again where countries have to be really strategic in tabling their views and, in fact, soliciting support of their views from the international community.

MUNDIE: OK, let’s go over here.

Q: Charlie Stevenson, SAIS.

I have a suggestion for U.S. policy I’d like to get the panel’s reaction to. Forty years ago or so I helped write the Hughes-Ryan Act, which created a principle that the CIA could do covert operations only with presidential accountability and congressional notification. Would that be a good rule for offensive cyber operations by the U.S.?

PAINTER: So, I mean, one thing that we’ve said—and it’s a classified directive—is that there is a U.S. policy, and it’s a policy of restraint in using offensive cyber capabilities. That’s something other countries should note, because many, many countries around the world are developing offensive cyber capabilities. So, you know, it is something where we look at all of our various policy dimensions. We take into account a lot of different issues. And it’s not opening the floodgates; it is very much a policy of restraint. You’ve heard this from DOD as well.

So I won’t comment on whether you need a particular kind of legislation, but I think having those policies in place is important. So again, it’s not something that’s completely uncovered.

MUNDIE: Back here.

Q: Hi. Still Marc Rotenberg.

PAINTER: I thought for privacy reasons you’d change your name the second time. (Laughter.)

Q: I do at some events. (Laughter.)

First of all, to Scott: I wanted to thank you for your comments about machine accountability. I think that’s a very important issue in the cybersecurity world, because increasingly we need to think about autonomous devices like drones and vehicles and what the consequences will be of having those out there. We’ve argued specifically in support of something we call algorithmic transparency, and an addition to Asimov’s laws of robotics, which is that a machine should always reveal the basis for its decision. So that’s something to think about.

But here’s my question to you, on your earlier comment about our increasing dependence on biometric identification: certainly there’s some upside there, because there’s better authentication, but there’s a serious downside as well. I remember when Steven Brill launched Clear, which ran into some financial problems and then had to declare bankruptcy. They put the biometric database of 185,000 frequent flyers up for auction. And then of course OPM, as part of the breach, lost control of 5 million digitized fingerprints.

So we can change credit card numbers when credit card accounts are compromised. It’s not quite so easy when a digitized fingerprint is compromised. What’s your thinking on that?

CHARNEY: So the reality is, when we do biometrics we don’t actually capture your fingerprint. And you don’t want to actually do that, because sometimes you have to revoke an authentication token—because some flaw has been found in a protocol, or a replay capability comes up.

So, for example, with facial recognition: if we take a picture of your face, we can pick 1,000 points at random, seed some other material, and sign it with a TPM; it becomes your ID. If, because of something we didn’t anticipate, that can now be replayed, we take another thousand points from your face and create a completely new, unique ID. And you cannot actually take the points and re-create a real face. And fingerprints have to work the same way.

The other thing is, with many of these technologies your ID—which is actually now just a blob of encrypted material; it’s not really your fingerprint—is stored locally in the device, not in the cloud. So, yes, your device could be stolen, and then someone might figure out—maybe, you know, they talked you into giving up your password, and now they can replay from your machine, signed by your TPM—but it can be revoked. Once you complain to someone, it can be revoked and recreated. It’s not your actual biometric. So it’s a different model than the Clear model.

Look, when I worked for the government, they took my actual fingerprints. We don’t need actual fingerprints to do a biometric identification. I think you have to design these things to recognize that, for legitimate reasons, you’re better off not actually having the fingerprint or the face; you just use it to seed algorithms.
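[Editor's note: a minimal sketch of the revocable biometric template Charney describes—sample random feature points, mix in fresh seed material, and keep only a keyed digest bound to the device, so revocation just means resampling. The HMAC below stands in for the TPM signing step purely for illustration; nothing here is Microsoft's actual implementation.]

```python
import hashlib, hmac, random, secrets

def enroll(feature_points: list, device_key: bytes):
    """Derive a revocable template from a random sample of facial feature points."""
    rng = random.SystemRandom()
    sample = rng.sample(feature_points, k=min(1000, len(feature_points)))
    salt = secrets.token_bytes(32)             # the fresh "seed material"
    digest = hmac.new(device_key, salt + b"".join(sample), hashlib.sha256).digest()
    return salt, digest                        # stored locally on the device, not in the cloud

def revoke_and_reenroll(feature_points: list, device_key: bytes):
    # If the old template is ever replayed, discard it and derive a completely
    # unrelated ID from a different random sample of the same face.
    return enroll(feature_points, device_key)

face = [secrets.token_bytes(4) for _ in range(5000)]   # mock feature points
key = secrets.token_bytes(32)                          # stand-in for a TPM-held key
salt, template = enroll(face, key)                     # the stored blob reveals nothing about the face
```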

MUNDIE: Yes, sir, right in the middle here.

Q: Hi. Sean Kanuck with the National Intelligence Council.

As the international community establishes rules of the road for states to possess or use offensive technologies in cyberspace, I’m curious if the panelists, and particularly Eneken, as I’ll explain in a second, feel that those rules should be symmetric where all nations have the same privileges and responsibilities, or asymmetric, as we have in the nuclear model where some states are “haves” and others are “have-nots.”

I posed it to Eneken as a representative of a non-nuclear power, but I welcome comment from any of the panelists.

TIKK-RINGAS: It is hard to imagine how we would proceed with the proposition that we are satisfied with existing international law and the Law of Armed Conflict, and at the same time pursue an asymmetric model of rules. So where I’m going is, we could potentially think of an asymmetric model of how those rules are applied, and that could flow from capability levels, as it actually does at this point in time.

The other question—first on offensive capabilities, and then on why we would need to rethink rules—is that at this point in time it is clear that countries across the world are integrating ICTs into their arsenals. And I would not necessarily say that this in itself constitutes a bigger threat than any we have ever faced from countries, because as a matter of fact, although ICTs may do a lot of damage, we don’t see that in state practice in how they’re applied.

So that comes through the existing Law of Armed Conflict. And the question I would ask there is mainly, are we ready to answer the question that the first panel touched upon—what is proportionate, what do all those principles mean, and how are we therefore to apply those rules? And I think that already becomes asymmetric, as the interpretation and implementation of a lot of international law has actually been, depending on the military power and other instruments of power that countries possess.

MUNDIE: Chris, do you want to add anything?

PAINTER: I think that—look, I think one of the things about this area is that you can have a country that doesn’t have a lot of technological sophistication but could have capabilities in this area without investing a whole lot. Now, maybe not sustained capabilities but they’re capabilities.

So you have to think of—and again, that’s why I think you look, as we are, at the norms in terms of effects. You know, what effects should countries not cause in peacetime? And then in wartime, I think you need to look at proportionality, distinction, and those other principles, and exactly how they work in this space.

MUNDIE: I’ll actually just add that, at a doctrinal level, I think it’s unlikely that you’re going to see a complete decoupling of the cyber weapon, you know, from all the other weapons.

PAINTER: Well, I would agree with that. And I think, you know, one of the things we always see—because it gets a lot of headlines—is this idea of cyber war, that there’s going to be a free-standing cyber war. You know, I think what we’ve already seen, and what we’ll continue to see, is this being integrated as a tool that countries use. And we’re seeing that already.

MUNDIE: Right.

Yes, sir.

Q: Tony Summerlin. I work at the FCC.

My question has to do with recent developments—like Safe Harbor being overturned, the “right to be forgotten.” Today it was announced the U.K. passed a law that said all providers had to keep all browsing data for a year and make it available as necessary. I mean, I’m seeing a growing gap in the direction of what we’re doing for each other across the pond. And we’re talking about this kumbaya with China, which is nice—(laughter)—but I just wonder about the gaps that are being created at the individual level and what those arguments might look like.

MUNDIE: Anybody? Go ahead, Scott.

CHARNEY: I’m happy to, you know—so this is a huge challenge for an international company. On Safe Harbor, it’s been reported that the U.S. and the EU are working on a new framework and trying to get a new agreement in place. We also rely on model clauses that have been approved by the Article 29 Working Party.

The real challenge, though, it seems to me, is that there’s this tension between the idea of a global medium and sovereignty. And I don’t think you can expect countries to all adopt a common regime, because there are a lot of cultural differences and other things, but you do look for some sort of harmony. And as an international business, what you look for most is to avoid conflicting laws in different jurisdictions—you must preserve this for X days, and you may not preserve this for X days—where you’re subject to both laws at the same time in the same place.

You know, we’re fighting a case now—we just had arguments in the Second Circuit where the U.S. government is trying to compel us to produce data from our Irish data center. And the question is, whose law should apply? How do we expedite mutual legal assistance? There’s a lot of this tension. I will tell you that, like Chris said earlier about another issue, governments have been arguing over this since 1990, because I was chair of the G8 Subgroup on High-Tech Crime and in one of the earlier meetings we went around the table to talk about where the countries were on this notion of trans-border searches and access to data.

And it was very interesting, because—so I’m the chair of the group, so there’s, you know, a U.S. delegation. And we went around the room, and every country basically said, no, you can’t do searches in our country; no, you can’t do searches in our country. And then the British suggested there might be this concept of virtual presence—like, if we can access it, we can get it. And the other countries are like, but you can access everything. And then we got to Italy, who said, well, it’s kind of interesting, because I’ve been searching your computers for years. (Laughter.) No, literally. He said, I’m doing a child porn investigation. There’s a network drive. I go to the drive; I actually don’t know where it is. There’s no way to know.

So anyway, we’re going around the table. We’re working this problem for a long time and after several months we go around the table to take the pulse of everyone again. And we’ve been working on other issues as well and came up with 10 principles for law enforcement. But we went around the table and we got to the French delegation that said, no, you cannot do searches in our country, but we need to think about times when maybe we should do something else. So I’m the chair of the group and I said, I’m sure everyone would like to understand the French change of position. They said, there is no change of position—you cannot do it—but there might be times we should do something else. (Laughter.)

So when you lead an international delegation, you learn that the thing to do is call a coffee break. So I called a coffee break, and I walked off with the French to the coffee urn and said, OK, you know, what’s going on? And he said, look, we were investigating two French citizens for a violation of French law, and they both had AOL accounts. So we went to AOL and asked for subscriber data, and we gave them a court paper and they gave us subscriber data. And then we went back to them with court paper and asked for the content of email, and we got back a letter from the FBI saying, we received your request for mutual legal assistance—and they were completely confounded by this.

And I said, well, that’s because AOL, all their email servers are in Dulles, Virginia, and it’s covered by the Electronic Communications Privacy Act. And the French said, so explain to us why, if two French citizens who have not left France are committing a crime against the French state we would need the assistance of any foreign government. This is the problem.

And so, you know, from a company that’s international—look, we have data centers in the EU. A lot of our services allow you to choose where you want to store your data. So you can store it in the Irish data center and back it up to Amsterdam. If you’re U.S., you can store it in the U.S. But there’s this constant tension about what the rules of the road should be for data flows and whether people’s rights should flow with the data, whether mutual legal assistance should work more effectively so countries less often have to run into this conflict but can assist each other. But these are very hard problems because possession is nine-tenths of the law and governments do not want to give up their sovereign right to compel people in their territory to do what they say.

PAINTER: And I’d say one of the big issues, as Scott really suggested, is that you’re not going to have exactly the same legal regimes for privacy or anything else; it’s about having interoperable ones. And there have been some hiccups recently, but we are working to address those.

And, you know, in Europe they’ve had different views of data retention, from, you can’t retain any data, to, you have to retain data. And it’s been all over the place. So as long as we have compatible rules, I think that’s what’s important.

On China, I don’t think we believe that these accords solve all of our issues with China. But it is an important step, and I’ll leave it at that—an important and good step.

MUNDIE: And as Chris said about the attribution question, you know, it’s not just a question of having identity or something else. You use all the things you can get.

PAINTER: Right. Right.

MUNDIE: So, I mean, I think, you know, in the case of the U.K., they, like everybody else including the U.S., are now struggling with the fact that businesses have made encryption so much more generally available. So they’re still trying to do their job, whether it’s counterterrorism or law enforcement. And so if they can’t get it the way they used to get it, they’re going to go try to get it some other way.

And so I think, you know, this is one of these Whac-A-Mole kind of problems. You know, I mean, the governments have not been—you know, no one has taken away their mission and obligation to perform their duty, whether it’s law enforcement, counterintelligence, whatever it is. And the technology keeps sort of moving things around for them and so we’re going to play Whac-A-Mole a little bit, looking for, OK, if I can’t get it that way, I’m going to go over here and look for it some other way. And I think that you’re just going to continue to see that cycle around, in my view.

TIKK-RINGAS: If I may—

MUNDIE: A question in the back?

TIKK-RINGAS: If I may just, too—

MUNDIE: Oh, yeah, sure. Go ahead.

TIKK-RINGAS: —add a line to that. I think what we’re seeing here is this question about—as Chris said, we are, in a way, also facing the same question at the international level: What do human rights, including privacy, mean online? And we actually see regions, not to mention countries, going in different directions. And currently the European trend is clear, with the “right to be forgotten” but also the repeal of the Data Retention Directive—turning back toward something more conservative.

Now, that has real-life implications, though, because data centers are not moving to Europe by accident. So there is an economic dimension too—or let’s say money to be made—in these policies. And of course countries that care about national security now need to do their own calculus on how they’re going to achieve that under the mechanisms in place.

But I think the real question will be that the European model most likely cannot be accepted as a global standard, and we see other standards emerging. For example, Asian countries and the Middle East are building their data protection and privacy requirements not on human dignity, as Europe does, and not necessarily thinking only of national security, but precisely on how business is to flow to those countries. So I guess in terms of where privacy law will go, we would have to follow the money.

MUNDIE: Yep.

Yes, sir, the man with the paper in his hand.

Q: Alton Frye from the Council on Foreign Relations.

The theme of the panel underscores very effectively that the continuing international legal categories of collective defense and self-defense—as a residual possibility—are alive and well, and I think this is a very fascinating evolution toward a requirement for new practices. But the tempo of potential threats is accelerating so much that the question becomes, how can we make responses prompt enough?

Therefore the question: The Xi-Obama agreement seems to talk about a mechanism to link Chinese and American consultations. Could it point toward a meaningful hotline that could be the basis for prompt information sharing about threats that need to be acted on quickly? And could that bilateral hotline eventually become a multilateral hotline?

PAINTER: So this is actually one of the—this is a basic confidence-building measure. Indeed, you know, one of the very first bilateral agreements we reached was with Russia several years ago, where we used something called the Nuclear Risk Reduction Center—which is the real hotline, it’s not a phone—that connects the two countries, and applied it to cyber. We also had a voice hotline. We also had an exchange of doctrine and other issues.

So as we think about these confidence-building measures both bilaterally and multilaterally, yes, I think having hotlines that help de-escalate, make sure there is not a misperception or miscalculation, that’s one way of achieving it and I think that’s something we are looking at.

But, you know, to your point on collective action too, there’s a number of good things that have happened there. The last panel talked about NATO quite a bit and asked, well, does NATO really apply? Well, actually, yes. There is a statement in the Wales Summit declaration that says that Article 5, for instance, applies to cyber. It’s going to be applied on a case-by-case basis—frankly, Article 5 has always been case-by-case—but it says that cyber is part of that construct. There’s work being done there.

So there’s collective action that we’ve talked about with norms and other issues, but the confidence-building measures are an important part of this three-legged stool: international law and how it applies, the norms below the threshold of armed conflict, and then, you know, really building that confidence and transparency. And I think a hotline is something that, you know, we are trying to execute in an appropriate way.

Sometimes it’s just having points of contact, knowing who to call in a crisis, knowing how to get the information you need. Sometimes that’s law enforcement channels. Sometimes that’s through policy channels or, you know, White House to prime minister channels. So we’re looking at all those things.

CHARNEY: I think one of the things you have to think about is what the command-and-control structure looks like and whether one even exists. And what I mean by that is—so I came out of the Justice Department, where you had a command-and-control structure. You have a lot of agents who carry guns. You can put people in prison. There are a lot of rules and structure. And then, of course, in my career I went to Microsoft, where developers are king and it’s, you know, widely distributed, and the like.

Yes, you need quick escalation paths, particularly for situations where you don’t want countries to misread the signs and make high-level political decisions that are catastrophic. But at the speed of Internet attacks, a lot of the work is actually done by CERT teams and computer security professionals, often not in the government at all but in the private sector—like, who found Stuxnet? You know, it was private-sector people.

And so you have to get used to having a distributed, non-command-and-control structure for a lot of the activity. That doesn’t mean you don’t need a hotline, or escalation paths for the right things. But given the speed of attacks and the breadth of attacks—these asynchronous issues—there is going to be a lot of organic work that a command-and-control structure just can’t command and control.

PAINTER: And there should be. I mean, this is—you know, you don’t have one ring that rules them all. You have different connections to deal with different issues. You have a policy connection. The CERTs should be talking together and that’s something we’ve been promoting very heavily.

But Scott also raises an interesting point that some countries, because this is still a new policy area, haven’t figured out their internal structure even within their governments, like how would you actually escalate something up? And that’s part of the work that needs to be done.

MUNDIE: So, if it’s quick, we have time for one more question. I’ll remind everybody this has been an on-the-record session. So we’ll take the last question from the man in the back right there.

Q: Thank you very much. Don Daniel, emeritus professor at the Naval War College.

I’m interested in finding out how much the issue of protecting sources and methods could restrict our desire to, let’s say, raise certain issues in certain countries. You’ve talked about a potential hotline with the Chinese. I can see circumstances maybe where I don’t want to call the Chinese. You know, I know something is going on. I’ve got a way to get into it. It’s quite useful for me. It’s going to maybe be useful for me in the future. It may hurt, I don’t know, Walmart, OPM or whatever, but I still don’t want to indicate to the Chinese or to anybody else that I’m onto what’s going on. Is that an issue or not?

PAINTER: Look, I think that’s not unique to cyber. That’s an issue all the time, but I think that there is, you know, much more of a tilt to actually getting things out there and trying to ameliorate the problems. But it’s always an issue. So, you know, you want to make sure you can track illegal activity you’re seeing but you need to respond to it too and actually change behavior.

CHARNEY: I will say I give the U.S. government credit on this for transparency. I’m going to be a little facetious here, but, you know, the president did come out and make a statement that the U.S. government has a bias for defense. So they will disclose vulnerabilities to the private sector—unless they don’t—which is really pretty much what he said. He said, if we have a national security or public safety reason, we may not disclose.

And then Michael Daniel, who was on the last panel, actually published a blog post going through the questions they ask in their equities process—like, is this something we think other people know about, so the whole planet’s at risk? Is this something that we can use and then have fixed? And so on. So the U.S. government’s been relatively transparent about this, and I give them credit for that.

PAINTER: And I think the reason for that is it’s exactly the point you made. I mean, it’s the stability of the whole system. And I think that if there was a bias toward nondisclosure before, there is certainly a bias to disclosure now.

CHARNEY: And, in fact, in the norms paper that Microsoft published, one of the norms we believe in is that countries should have a standard policy for how they handle these mixed-use capabilities that have, you know, offensive value but also put people at risk. Countries should be transparent about how they’re dealing with this problem, because it is a balancing of equities, I think.

MUNDIE: Or transparency is just telling people, how do you decide?

CHARNEY: That’s right.

MUNDIE: And I think that’s important.

So, on that note, thank you for your attention. (Applause.)

(END)

This is an uncorrected transcript.
