Maurice R. Greenberg Senior Fellow for China Studies and Director of the Digital and Cyberspace Policy Program, Council on Foreign Relations
Research Fellow and Director of the Technology Policy Program, Mercatus Center, George Mason University
Principal, Chertoff Group; Former Director, Central Intelligence Agency (2006-2009); Former Director, National Security Agency (1999-2005)
Executive Vice President of Worldwide Government, Legal and Business Affairs, MacAndrews & Forbes Holdings Inc.
Experts discuss possible policy approaches toward cyberattacks.
The Home Box Office What to Do About... series highlights a specific issue and features experts who will put forward competing analyses and policy prescriptions in a mock high-level U.S. government meeting.
TOWNSEND: Good afternoon, ladies and gentlemen, and welcome to today’s Council on Foreign Relations HBO What to Do About Cyberattacks meeting, with Eli Dourado, Michael Hayden, and Adam Segal. The Home Box Office “What to Do About...” series highlights a specific issue and features experts who will put forward competing analyses and policy prescriptions in a mock high-level U.S. government meeting. On behalf of CFR, I’d like to thank my friend Richard Plepler and HBO for their generous support for this series.
OK, well, let’s—let me start. My panel sort of needs no introduction. Eli Dourado is a research fellow and director of the Technology Policy Program at George Mason University. General Michael Hayden is a principal in the Chertoff Group, and he’s the former director not only of the Central Intelligence Agency, but the National Security Agency, and a former colleague. And Adam Segal is the Maurice R. Greenberg senior fellow for China Studies, and the director of the digital and cyberspace policy program here at the Council on Foreign Relations.
General Hayden, let me start with you. Why don’t you talk to us about your view of not only cyber threats, but how the U.S. government ought to be prepared to respond to those threats?
HAYDEN: Sure. I think everyone’s aware of—you know, that’s a dangerous domain out there. And my casual use of the word domain should suggest to you how fundamentally disruptive this cyber thing is. I mean, your military goes land, sea, air, space, cyber. We think there’s a new operational environment. And all of us, because of ease of use and scale and a whole variety of empowering things, have decided to take stuff we used to keep in a vault, in a safe, or at least in a desk drawer down here, and put it in our cellphones up here. And it’s all become very, very vulnerable.
I fear I could fill the rest of our time talking about this, so I’m going to cut to the—cut to the chase. We underestimate how disruptive this is. And I know all of us get it. Cyber’s a big deal. But I said this at Black Hat: even you don’t understand how disruptive this is and how new this is. And we are having difficulties coping with life up here in the new domain. We have difficulties technologically, we have difficulties with trained personnel. But all of that pales in comparison with our challenges of developing law and policy and standards and norms up here in a domain that is quite different.
And so the issue we have, Fran, isn’t the lack of technology. The greatest concentration of cyber power on the planet is about a $40 cab ride from the White House up the B.W. Parkway, all right? Sure, we can use more talented people, but we’re not bad at that either. What we don’t have is a structure of law and policy that allows the government to do for us up here what we have traditionally expected it to do down here. We, the big we, the 330 million we, have not yet decided what it is we want, or what it is we will allow our government to do to keep us safe up here in this—in this domain.
And the real devil in the problem, Fran, is that the three things that change because of this disruption are technology, social norms, and policy and law. And each of those three change by nature—an ironclad law of nature—they change at different rates. And so I fear that the government is going to be forever in a tail chase, providing government security services in this domain that they’ve worked out broadly. Well, as she said, I was head of CIA. We haven’t always worked it out down here too, what represents national consensus about appropriate government behavior. But up here, it’s a blank sheet.
I guess one more sentence, since you’ve asked me to tee it up. Which then tilts me in the direction of the private sector being at least—at least the immediate and maybe even the long-term salvation of making ourselves more secure up here. I tell public audiences, the next sound you hear is not digital hoof-beats and the digital bugle of the digital federal cavalry coming to your rescue up here. You’re kind of on your own.
TOWNSEND: Eli, talk to us about what you expect—we’ve seen the OPM breach. We’ve seen the loss of fingerprints for millions of Americans, many of whom hold security clearances. What do you expect the next point of vulnerability or attack to be? And what should we be doing to get out of the tail chase General Hayden describes, to get in front of it?
DOURADO: Well, I think that what we—what we actually observe a lot of is—cyberattacks is a very broad term. It’s used to describe what’s known as cybervandalism, it’s used to describe cybercrime, it’s used to describe espionage, and it’s used to describe cyberterrorism, and cyberwar. I think what we observe—what we are observing a lot of right now is cyberespionage, this category in the middle. And so I think—I think we’re going to continue to see a lot of that. I think—I fully expect China to try to exploit the data in the OPM database. And I think probably the most important thing that we can do is get serious about hardening our targets.
One way to do that is to think about the way we handle vulnerabilities. So there is a process right now when the government acquires a zero-day vulnerability—a vulnerability that vendors have had zero days to respond to—in terms of, like, weighing the equities of the intelligence collection agencies versus the defense agencies that have interests in this. Well, I think that—the actual process is classified, the existence of this process is not, I would worry that this is unduly biased against disclosure—against responsible disclosure of the vulnerabilities. So I would like to see more disclosure to the private sector of what these vulnerabilities are that are being exploited by nation-state actors in this domain.
TOWNSEND: Adam, let me ask you, you know, it’s interesting, we talk about hardening vulnerabilities, we’re constantly in the tail chase, but the fact is in the OPM example, they were inside that system not for months but perhaps for years. So it’s one thing to harden the vulnerabilities before there is a penetration. Do we have the policies and the process right, or what should the government be doing in order to respond more quickly and more effectively?
SEGAL: I think there are two broad issues, right? One is, what should the government be doing to defend itself, right? And OPM clearly showed that the U.S. was not defending itself, right? At OPM the data was not encrypted, there was not two-factor authentication. Lots and lots of basic security that should have been in place was not there. And that’s because in many cases the United States government has a kind of top-down checklist approach to cybersecurity, right? Have you gone through what you’re supposed to do? Have you checked the boxes?
But once the defender checks the box, the attacker will just change how they go about it, right? They know what the defender is checking boxes for, and they can change their approaches. And that clearly isn’t working. You know, we’re getting some change in there by focusing more on outcomes as opposed to what processes are in place. The Obama administration put in place a 30-day sprint to try and get two-factor authentication and encryption in place. But we still have a long way to go there.
The larger issue, of course, is the one that General Hayden mentioned: where does the public sector and government stop defending, and where do the private sector’s responsibilities begin, right? If North Korean rangers or commandos had broken into Sony and stolen a movie and then, you know, fled the country, the United States government would have responded, right? They would have responded to physical force used against that technology company. If they had just stolen the movies through cyber-enabled means, the U.S. government probably would not have responded to just a cyberattack. But because free speech issues became involved, and there was also a physical threat, then the U.S. government got involved.
When the next attack happens, the U.S. government has not clearly signaled when it will or will not get involved. And that clearly is a problem. We have a massive deterrence problem. Our potential enemies don’t know if we will respond or if we won’t respond. And that, I think is a real problem.
TOWNSEND: General Hayden, doesn’t that get to the very essence of the question of when is a cyberattack an act of war? How do you—how do you define that, not only for the deterrent effect, but of course it implicates the private sector. The private sector, when they were suffering distributed denial-of-service attacks in the lead-up to the Iran negotiations, were told that they should continue to defend themselves, but that they should not take offensive action. So talk to us about when is it an act of war, and when is it appropriate for the private sector to take offensive action?
HAYDEN: So we’re involved now in this massive intellectual drill. And I’m serious that it’s really important, because if we don’t get—I think I’ve suggested in my first intervention here that what we’re missing are the big ideas, and getting them—getting them straight. And so we’re trying to see what of our life experience here transfers up here. And the guys down the street at Turtle Bay have actually taken some laws of armed conflict down here and transferred them, I think rationally, up here. But there’s a whole bunch of things that are still undecided, Fran.
Estonia had a massive DDoS attack back in 2007 from patriotic Russian hackers, all right? And the Estonians now have a NATO cyber center of excellence. They’ve created a manual, the Tallinn Manual, that attempts to answer some of the questions you’re raising here. We’re not totally in agreement with the Estonian definitions, all right? Number one, we’re a pretty powerful cyber nation. We like to throw elbows out there too. We think there are some things that are accepted international practices, all right, that perhaps others might view in a far darker light.
Eli’s point—we are very sloppy with our language. Anything unpleasant happens on the web, we call it an attack. When we do it, we have a very narrow definition of what constitutes an attack, all right? Eli’s right. Most of the stuff going on out there is cyberespionage, state-on-state. State on anyone else for safety and liberty: accepted international practice. State on anyone else for commercial profit: we view that to be illegitimate. I know of four other countries who agree with us. They all speak English. It’s the usual suspects. Everyone else thinks state on target for commercial success is a legitimate state espionage activity. So we’re really down here at the fundamentals of disagreement.
So I could keep going on and showing how hard it is. I would suggest a norm, all right? Judge an event, categorize it by its effect, not by its means. Look at what it created, and then at that point decide whether that was vandalism, criminal activity, espionage, or something more.
TOWNSEND: Is it—Adam, let me go to you. Is it possible to construct an international understanding of cyber norms? There’s been much discussion about that. And even if you could, what would the effectiveness of it be? Of course there are General Hayden’s old agencies that would apply them or not. And so what do you think about the policy objective of trying to come to an international agreement on cyber norms?
SEGAL: I think we have to be fairly narrow on what norms we think we’re going to get. We’re never going to get a norm against espionage, right? We already all hack and steal, and we’ve done that for thousands of years. We’re going to continue doing that. And as the general said, the U.S. probably doesn’t want that norm, right? We’re pretty good at it and we don’t want to stop doing it.
We’re at the beginning stages, right, where—if you look how long it took us to get norms about nuclear, or chemical, or biological weapons, it took decades over time. And we’re at the beginning stages for cyber weapons. I do think that we will eventually get some norms on what will be considered a use of force or an armed attack, right? That is actually not that complicated. If you cause death or destruction, I think most states are going to say, as the general said, we’re going to look at the effect and we’re going to consider that a use of force or an armed attack, and we’re going to respond that way.
It’s going to be—it’s going to take time, right? The Chinese have not been as eager to use norms. They basically think that the United States wants these norms because they’re soft and they give us a great deal of freedom. And they prefer a treaty where you ban certain types of cyber weapons or cyber behavior. Of course, how you do an inspection for cyber weapons is completely unknowable, right? I could have a cyber weapon upstairs on the 5th floor here and nobody would know. I could have one on my phone and no one would know. So the idea of having a treaty is probably not going to work. We were getting some traction with the Chinese and Russians about some norms. One of the things that came out of the summit between President Xi and President Obama was a kind of welcoming of this discussion about not attacking critical infrastructure in peacetime. I mean, that’s a very soft agreement, but it’s a first step.
TOWNSEND: So let me engage all of you. Let’s use Adam’s definition that an act of war is that sort of cyber act that would result in loss of life or destruction. General Hayden, I’m going to start with you. By that definition, the nation—not publicly acknowledged—that launched Stuxnet might be accused of having committed an act of war against Iran.
HAYDEN: Well, I have no views on that subject. (Laughter.)
TOWNSEND: OK. Eli—
HAYDEN: Yeah, no—
TOWNSEND: I was going to (let you finish ?).
HAYDEN: No, it is. And look, I—look, you know, blowing a thousand centrifuges in Natanz, OK, I’m good. That’s fine. But what I’ve actually said, Fran, fairly publicly for a very long period of time—let me just rephrase what I just told you in a slightly different way. Someone, almost certainly a nation-state because this is just too hard to do from the basement—someone, almost certainly a nation-state, in a time of peace, just used a weapon comprised of ones and zeros to destroy what another nation could only describe as their critical infrastructure, at which point—you know, loud sucking sound to follow—that’s a big deal.
And I mean it. I said on 60 Minutes that somebody’s crossed a Rubicon. We’ve got a legion on the other side of the river now, right? That is the first time that weapon has been unsheathed for that effect. And our species does not have a good track record of putting such weapons back into the sheath and not using them again. So it is a very dramatic act. And one hopes that whomever conducted the attack thought through the implications in terms of establishing international norms of behavior before taking that step.
TOWNSEND: Because presumably, right, there is a whole host of nations at the top tier of cyber capability who might share that same weapon. Let me go back to you. What are the implications of that? And you talked earlier about vulnerabilities. How do we, as the U.S. government, and we as the private sector, protect ourselves from the use of that type of weapon against us?
DOURADO: Well, so just to answer your earlier question, this is widely believed to be a joint operation between the U.S. and Israeli military. And this is—Stuxnet was an incredibly sophisticated attack. There are not very many countries in the world that could pull this off. It relied on, I believe, four separate zero-day vulnerabilities. So you know, to date not a single person in the world has died as the result of a cyberattack. I think the cyberwar rhetoric so far, at least to date, is overblown. I think, you know, it’s much more on the espionage side. And the way we—the way we protect ourselves is—there are a few ways. One is disclosing more vulnerabilities to vendors, so getting the vulnerabilities out there.
The other might be promoting security research. So we have laws on the books in the United States that prohibit unauthorized access to computer systems, which has the effect of chilling security research. So I’d like to see those laws reformed so that we could have more people looking for vulnerabilities, reporting them, et cetera. The law that prohibits this security research, by the way, is called the Computer Fraud and Abuse Act. There’s no fraud or abuse going on here when a researcher, you know, targets a piece of critical infrastructure to try to see if it’s secure. So I think we should more narrowly define fraud and abuse to enable more research.
TOWNSEND: So you’re suggesting we should repeal that section of the Computer Fraud and Abuse Act?
DOURADO: I would love to see that, yeah.
TOWNSEND: OK. And then how do you encourage—what are the policy mechanisms to be put in place to encourage the sort of private sector-public sector sharing of that information?
DOURADO: Well, there’s a bill in Congress now, which I think is not a very good bill, personally, that is trying to establish an information sharing—a new information-sharing program. And it will give corporations legal immunity for any liability that they would incur from sharing user data. I think that information sharing is a bit overstated in its effectiveness. We already have—my colleague Andrea Castillo and I have looked into this. And we already have in the U.S. government at least 20 offices and sub-offices that deal with public-private sector cybersecurity information sharing. Many of them have identical mission statements.
The GAO has said that this is a complete mess. So I am very skeptical that, you know, creating a new information sharing program is going to do much. And indeed, the private sector already shares information pretty well with itself without the federal government coming in. And as far as I can tell, the federal information security information sharing tends to slow down the actual dissemination of information because there’s so many people that have to sign off on approval.
TOWNSEND: So one of the—one of the policy proposals that’s been discussed with the White House is having a national cyber center—sort of the cyber equivalent of the National Counterterrorism Center—to, at a minimum, alleviate all the different places that the private sector hears from, so there could be a single responsible unit. Helpful? Would that be useful or not useful?
DOURADO: Potentially useful, if we actually got around to eliminating all those other programs. I don’t think any of the proposals do that now. They’re just creating more. So I don’t—I’m pretty skeptical that that will help. And the other thing to remember is that there are important civil liberties concerns with the information sharing bills, as they’re being discussed in Congress, because the information that’s being shared can be used for other purposes, unrelated to cybersecurity.
Some limited purposes, but including prosecution of violent felonies, and so on. And so this is—there are Fourth Amendment concerns with the way this is being structured. And there’s also, of course, the issue of parallel construction, where an intelligence agency can phone up their law enforcement buddies and say, hey, you know, pull this car over and find a reason to search the trunk. So there are—there are definitely other issues relating to information sharing that may be not a great idea.
TOWNSEND: Adam, can you talk for a moment about the policy prescriptions we ought to see put in place that aren’t there, that would encourage a better—you know, we’ve been talking about public-private partnerships, especially in the post-9/11 world. Here it’s really critical, right, because most of the expertise and the innovation is on the private sector side. And so it really is in the government’s interest to foster this information sharing. How do we do it better? Why don’t we wargame together?
SEGAL: I think we—I think we do. And I think there are several cyber exercises that involve the ISPs and the big private companies. You know, I agree with Eli that information sharing is kind of a red herring. It should be done—we should get better at it. It’s not going to solve all of our problems, but it would clearly help. You know, I think the big issue in what the government can do is actually help the private sector figure out what it should be doing—what successful cybersecurity looks like, all right?
So Jamie Dimon after the JPMorgan hack said, you know, I’m going to double my budget from 250 million (dollars) to $500 million a year on cybersecurity. Do we have any idea whether that’s enough? Is that too much or is it too little? I don’t think we have any sense, right? People are constantly coming to me saying, you know, we know cybersecurity’s a problem, but we have no idea what we should do. There are plenty of vendors out there that are going to sell you a specific solution to the problem, but there’s no transparent way or metric to know which product is working better than the other.
So we have this NIST framework, from the National Institute of Standards and Technology, that is supposed to help popularize these kinds of metrics. And cyber breach laws, right, which give us a sense of how big the problem is, what it costs, how you would remedy it—I think those are probably more important in the long term to creating a private sector model of insurance and regulation than any kind of information sharing between the public and private sector.
TOWNSEND: And can we link that to insurance coverage and insurance premiums in the private sector to reduce their vulnerability?
SEGAL: I think that’s where we’re moving, right? I mean, the FTC ruling against Wyndham Hotels that said you are legally responsible for the breaches that happened now means that companies are going to have to say, what did we do? Did we do enough, right? And the courts are going to have to decide what’s enough, right, because any state actor is going to defeat a private sector defense. So there’s going to have to be some kind of agreement on what is enough—what did you do that was legally justified and rational? And I think greater transparency, breach laws, all these things are moving us in that direction.
TOWNSEND: General Hayden, you talked—or, I think, Adam spoke about the agreement with President Xi, the cyber agreement protecting critical infrastructure. We’re going to have a new president. And the president asks you, what are the most vulnerable critical infrastructures so that I have an order of priority with which to protect them from a cyberattack? What do you tell him are the top ones?
HAYDEN: The two I would suggest out of the gate are the two that are actually most seized of the issue already and are working most assiduously to try to defend themselves, because they both have to be outward facing in order to be successful. One is financial services and the other’s power. And so those are the first two where I would emphasize you need to really make sure those are secure. And then you go through DHS’s list of the other critical infrastructures that you have to—you have to defend.
TOWNSEND: Do you think they’re the most hardened at this point?
HAYDEN: They have the toughest problem, OK, but they are the two industries that are working most energetically to try to resolve this.
TOWNSEND: So, Eli, let’s use that as an example. So electric and financial are both the most vulnerable, but the most sophisticated in terms of their hardening. How can we take those lessons, the measures that they’ve applied to those critical infrastructures, and push it out more broadly within the private sector?
DOURADO: That’s difficult. The nature of the problem is different in different industries. I think there’s a lot of companies that have no special reason to believe that they’re going to be vulnerable, and yet they are. So I think you need to have just the off-the-shelf products be more secure, right? So Microsoft Windows, Apple iPhone, et cetera, like those sorts of things need to be more secure from the get-go. And that’s the only way we’re going to get the other part of industry, I think, to get really secure.
TOWNSEND: Adam, what do you think is the most likely next target? We talked about the OPM breach and the loss of the fingerprint database. If you were looking at the government and you were an adversary, where would you be looking to go next? And if you’re on the government side, what are you most worried about protecting now?
SEGAL: I mean, given what we know, I have to assume that the adversaries are already in everything. (Laughter.)
TOWNSEND: That’s very depressing.
SEGAL: So I don’t feel like there’s a set of targets that has not been exploited. You know, I do get the sense that the adversary’s trying to get into everything that they know about. I guess the next set of issues that we have to worry about is a whole set of economic opportunities that are developing where we are not really thinking about security as we move forward. And the big one, of course, is the internet of things, which broadly refers to all of our physical devices connected to the internet.
But the clearest example of that was the hacking of the Jeep a couple of months ago. For all of you who have not seen the video, I would suggest you watch it: basically, two hackers 10 miles away turn off a Jeep as it’s driving on the highway. These were no ordinary hackers. The guy that did it probably worked for General Hayden at some point at the NSA. So he was highly, highly skilled, and he actually had DARPA money behind him helping him think about this. But the next big thing is one of these attacks on the internet of things that’s going to cause, you know, the type of disruption we’re not expecting that General Hayden referred to.
And that, you know, we have the opportunity now to get the security in place. You know, people always say the problem with the internet was we built it and we didn’t need any security because it was only, you know, 20 or 40 scientists who all knew each other, when it was the ARPANET. And so nobody thought about security. Well, we’re now at that kind of beginning stage for the internet of things and smart cities and other areas. And so this is now the time to do it and to make sure that we have the policies and regulations in place that are—that are there as we build out.
TOWNSEND: General Hayden.
HAYDEN: Yeah, I’d just offer a couple of thoughts on—you asked about targeting. There is a changing flavor of attacks. I don’t think I was ever that overly dramatic about the cyber Pearl Harbor thing, you know, a digital 9/11. I know some people use that as a metaphor to try to get everyone’s attention. But let me offer you a view. If the Chinese are turning out the lights on the eastern seaboard, that is not the first item in the president’s daily brief the next morning, OK? There is a bunch of other stuff going on before that event takes place. And so I’m less concerned about that catastrophic event, and I’m kind of going to school on the North Koreans going after Sony, all right, which was more than espionage. It was destructive and it was threatening and it was designed to coerce.
So I’m not focused on the near peer creating catastrophic destruction. I’m now beginning to get concerned about the talented-enough renegade, isolated, reach out and touch somebody, don’t think this is cost-free nation-state, which is kind of a permanent definition of North Korea, OK? (Laughter.) But it could, in some circumstances include an Iran. And I’m even dark enough to think in some circumstances it could include a Russia, where you go out there—and again, not a catastrophic attack, but an attack—I mean, the North Korean attack—nation-state, private enterprise, designed to coerce. OK, so that’s the second kind of new flavor.
And the third I would suggest is the Anthem attack and the OPM attack. In terms of data theft, for the most part, the rough history has been stealing end product—stealing intellectual property, stealing negotiating positions, stealing data from a competitor. This was stealing big data. This was stealing massive databases, which is a new kind of thing. And the Chinese did both. And I believe firmly that the Chinese are now productizing the results of these massive thefts on their own, rather than what we have historically seen, which is stealing the product, stealing the outcome. Now they’re stealing the raw material and running their own algorithms against it. So your original question was, where should I guard inside the government? I do think it’s the big data databases.
TOWNSEND: All right. Let me ask you, you know, we talk about this—we’ve now broached the subject of the internet of things. Last season, for those of you who watch Homeland, there was this episode where, of course, the digital signal to the then-vice president’s pacemaker was interrupted and he was killed. There’s no question that pacemakers currently use wireless technology in your home to record that medical data. Is that possible? Can cyber be used as an assassination tool? General?
HAYDEN: Maybe. (Laughter.) Look, the more we become dependent upon the ease and the empowerment that the digital universe gives us, the more it could be turned to dark purposes. Many octaves below what you just asked, I go to Pittsburgh to watch the Steelers play football. And I never stop to get those tickets at the Pennsylvania Turnpike, all right? I got E-ZPass and I just slip right through and everyone else is in line, and I love that. And my tradeoff for that is the Pennsylvania State Police know where I am all the time, OK? There are all of those sorts of tradeoffs. So I don’t know the specifics, Fran, but the more and more—we put OnStar in our car, and make our car in essence a node on the World Wide Web rather than an internal combustion machine. To that degree, people have the ability to penetrate it and take control of it. So, yeah.
SEGAL: I think every year at Black Hat or DEF CON there is always a proof of concept, right? And somebody will either hack a pacemaker, or a dialysis pump, or some other medical instrument. They tend to be in the same room. The situation is always very controlled. But the fact is that they know that the manufacturers of these devices almost always set the password as password. (Laughter.)
TOWNSEND: Oh, god.
SEGAL: And nobody ever resets it, right? So you can see, across whatever industrial sector it is, whatever new product that’s been rolled out, it always comes with this very, very weak security. My neighbors just refurbished their house. And I can now communicate with their oven. (Laughter.) They didn’t—they didn’t know. Like, they had us over for dinner. And I said, I’ve been talking to your GE oven, because the Wi-Fi is now, you know, beaming into my house. So people don’t know, and they don’t reset the passwords. So maybe it’s not going to happen now, and it’s unlikely to happen in the near future, but the possibility is there.
TOWNSEND: Adam, you touch on a real point, right? There are two things. One is education. The other is privacy. Talk to me about—there’s the newly elected president and he said: We have to get our arms around this. Part of protecting the nation is starting with the individual. How do we launch a campaign to actually educate people about their own vulnerability and close it? You know, in the disaster-recovery world, we say good healthy people need to take care of themselves and their families so first responders can take care of those who can’t. There’s an analogy in the cyber world. And so we really do need individuals to care about their own cybersecurity and their own cyber health. How do we get there? How do we penetrate and educate people?
SEGAL: I think there’s two things. I think one is that we’ve finally moved away from the model of blaming the user, right? It always used to just be the user’s fault, and why did they—why did they, you know, go to the Chinese internet web café and just start downloading everything. They should have known better, right? We now, you know, have much more education. I think we are teaching people about what they should actually do. I think, you know, that’s getting better. And you know, my kids at school now, in addition to the lecture about not posting things to Facebook because it’s going to last forever, do have a—you know, a series of discussions about what they should do to keep their devices safe. And that’s, you know, a Westchester public school. So I think education is clearly part of it.
I think the larger issue is just not a debate we’ve had, which is that—how much convenience are we willing to accept in return for loss of privacy? That debate we just haven’t had, right? We’ve had it at the extremes. You know, when the—when the Snowden revelations occurred, you know, the president assured us that it wasn’t Americans’ data that was being collected. That turned out to, you know, not entirely be the case. Most Americans have seemed to be OK with what has been revealed. There hasn’t been a massive, you know, protest about metadata. But we really haven’t had a very clear debate about what it is you’re willing to give up because the assumption is, well, you give it to Facebook, you give it to Google, so you must be OK with it. But I don’t think most people truly understand it. And until they do, we can’t really have that debate.
TOWNSEND: You know, when you talk about privacy on the web, I’d like one of you to explain to the—to the audience what Tor is. This is a masking device, but I want—
DOURADO: I’ll do it. Sure, so Tor is a protocol and system developed by the U.S. Naval Research Lab, I believe, to—it originally stood for The Onion Router. So the idea is that it would—it would route traffic around the globe with multiple layers of encryption, removing a layer of encryption at each hop on the—on the internet. So it would peel off a layer of encryption, like an onion, as it transited the globe. And it makes it very difficult to pinpoint the source of the—of the traffic.
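For readers, the “peeling” structure Dourado describes can be sketched in a few lines of Python. This is a toy illustration only—base64 stands in for real encryption, and the relay names are invented—meant purely to show how each hop removes one layer and learns only the next hop, never the whole route.

```python
# Toy sketch of onion routing's layered structure -- NOT real cryptography.
# base64 stands in for encryption purely to show the nesting and peeling.
import base64

def wrap(message: bytes, relays: list) -> bytes:
    """Add one 'layer' per relay; the first relay's layer ends up outermost."""
    data = message
    for relay in reversed(relays):
        # Each layer records the hop it is addressed to, then seals everything.
        data = base64.b64encode(relay.encode() + b"|" + data)
    return data

def peel(data: bytes) -> tuple:
    """One relay removes exactly one layer: it learns its own hop name and
    the still-sealed remainder, but never the final payload or full route."""
    relay, _, inner = base64.b64decode(data).partition(b"|")
    return relay.decode(), inner

onion = wrap(b"hello", ["relay1", "relay2", "relay3"])
hop, onion = peel(onion)  # the first relay sees only where to forward next
```

Real Tor uses actual ciphers and three-hop circuits, but the attribution problem discussed here falls out of exactly this structure: no single relay sees both the source and the destination.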
TOWNSEND: And is it—you might—describe for a moment, it’s often used—I can tell you, for those of us in the counterterrorism work, it makes it very difficult to pinpoint—geo-locate the user. It’s difficult to unmask who the user is. Can it be used as a—for a good purpose if somebody wanted—
DOURADO: It’s used for many good purposes and many not-so-good purposes. It’s used by activists in—that live under repressive regimes to organize, to communicate with the West. It’s used by criminals selling drugs on the internet. It’s used by hackers who are just trying to hide their location and to carry out an attack on U.S. soil. So I think it’s a valuable tool. I mean, there’s a reason the U.S. government created it. I don’t think we should demonize it. But it does pose a real challenge. And it does seem to be something that when used properly works. There are ways that the NSA has attempted to unmask users, but for the most part it’s fairly reliable when used properly.
TOWNSEND: General Hayden, Tor, a force for good, a force for bad?
HAYDEN: On balance, good. And that’s hard for a director of the National Security Agency, even a former director, to say. But Eli mentioned earlier this equities decision that has to be made. I mean, NSA is both our offense and our defensive squad when it comes to information security. And it’s organized that way—not everybody’s done it that way. A lot of friendly governments have divided the task. But we’ve kept them together because we think both of them pivot around one single concept. And that concept is vulnerability, OK? If you master vulnerability, you can play offense with it or you can play defense with it. You can use it to attack, or you can use it to stop attack.
The equities decision that Eli suggested is made, has always been made, and has always been made with great seriousness. But even I, as a former DIRNSA, am prepared to say that the habits, the cultural habits, the approaches to the equities decision that have been built up over four or five decades—when encryption wasn’t universal, and the encrypted system whose vulnerability you were debating was generally used by foreign rather than domestic audiences, rather than being a ubiquitous encryption program—have probably tilted us more in the direction of playing offense rather than playing defense.
And I think now that American security—not just American privacy, and not even American business—but American security might be best secured by tilting more in the direction of giving up the offensive advantage in order to better secure American communications.
HAYDEN: I mean, and we got a lot of money and a lot of people, we’ll go find—seriously, we’ll go find other indirect paths to—Mike McConnell—sorry, long answer—Mike McConnell, my predecessor once removed, he fought this debate over the Clipper Chip. It was to bake into the chip a way to get through encryption. And Mike, you know, fought it to the death and then he died. I mean, he lost it, all right? In retrospect, we mastered the problem created by the lack of a Clipper Chip. We were able to, through a whole bunch of other things. Now, unfortunately, some of the other things were metadata and bulk collection and so on. But on balance, I think we’re better served by stronger encryption rather than baking in weaker encryption.
TOWNSEND: Now I’m going to ask one more question before I give the—open it up for your questions. Adam, let me ask you and maybe General Hayden, do we need an understanding with the private sector about when they can use offensive capability? Let’s remember, we talked about the financial sector, we talked about power and energy. Particularly the financial sector has a fiduciary responsibility to their shareholders to protect corporate assets. Should there be rules of the road for the private sector to use offensive capability?
SEGAL: Yeah, so this comes back it seems like every three years. It’s known as the hacking-back debate, right? So if the government can’t defend us, then we should be able to hack back and perhaps either destroy the data that was seized from us or attack the attacker. You know, nobody does it, but according to some surveys 75 percent of companies do it. (Laughter.) It also tends to be very useful—I guess you travel to Israel, you sit in a café in Tel Aviv and you kind of say, you know, can anyone rid me of this problem? And some Israeli scoots up to you and says: Oh, I have a company. (Laughter.) And you hire that company and they start hacking back. The historical analogy that everybody loves is the letters of marque, right? You know, you get the privateers to go out there and attack and, you know, have some kind of legal form under which they can do it.
I think personally it’s a terrible idea. One, I don’t think it’s going to be effective, right? Even the best cybersecurity companies are going to lose against a nation-state attacker. If the nation-state attacker is serious about getting in, it just means you’re going to escalate. You’re going to have a serious question about signal sending. So, you know, to go back to what General Hayden said, if the Chinese are in the energy grid, it already means that things are really, really bad. So then if you have that, and you have U.S. companies hacking back at China when we’re probably trying to de-escalate, or at least send signals, that could, I think, create tremendous strategic instability. So I don’t think it’s a very good idea, and it’s one that, you know, will come back again in three years and we can say it’s not a very good idea. (Laughter.)
TOWNSEND: General Hayden?
HAYDEN: I’m a little more generous with the concept than Adam.
HAYDEN: Everything Adam says is true. And these fellows know the technologies and everything far more than I. When I talk to folks like that, they tell me I’m crazy. But I do think—Eli already suggested the Computer Fraud and Abuse Act creates an equivalence between theft and trying to prevent a theft. And it’s all the same under the law. And I do think, you know, there may be some instances where we allow firms a more robust opportunity for defense. I would say—this is actually a true story. I can’t give the dates, times, and companies.
But I am aware of a company that did see its data stolen, and was able to track where it had gone. It went to another server in another country. Let’s just say Kuala Lumpur. It wasn’t Kuala Lumpur, but it’s sitting in a third country, in Malaysia. It is not yet the waking hours of the work week in the country from which they believe the attack emanated. And so the data is just sitting there on this server in a third country, waiting for the thieves to come grab it and bring it home. They saw it there. They wanted to go get it and bring it back.
And their lawyer said, you can’t do that. That is as much a violation of the Computer Fraud and Abuse Act as the original theft. Now, I agree with Adam. I think American companies are doing it left and right. They’re just not doing it from American soil, OK? I don’t know that it’s a good thing to prompt lawlessness as the preferred practical answer to theft. I do think we might be able to create some sort of structure, far short of the things Adam rightfully fears, that allows American companies to do a bit more in their own defense.
SEGAL: That makes me very happy.
DOURADO: Yeah, I—so I actually agree with General Hayden on this more than Adam. I think active defense—all the evidence that I’ve seen shows that it works. I don’t know if this is the incident you referred to, but it is open-source. When Google was hacked in 2009, in the Aurora attack, they hacked back at a server in Taiwan. And they discovered not only that their data was stolen, but that a number of other companies had had data stolen. And so they were able to notify those other companies that Chinese intelligence had hacked them.
So it seems to—seems to be effective. It seems to aid in attribution. And you know, I think—I think everybody is doing it. We haven’t had any prosecutions in the U.S. for this. But it is, I think, arguably illegal under the CFAA. You can maybe argue that there’s a common law defense. So it has not been litigated in court. And so providing clarity, I think, and allowing hacking back with strict liability for damages to innocent parties, I think would be a good solution.
TOWNSEND: OK. So at this time, I would like to invite members to join our conversation with their questions. A reminder that this meeting is on the record. Wait for a microphone and speak directly into it. Please stand, then state your name and affiliation. Please limit yourself to one question and keep it concise, to allow as many members as possible to speak.
Questions? OK, let’s start over here.
Q: Hi. I’m Michael Richter. And I’m a former intelligence officer, which means I did work for General Hayden at one point in time.
But my question goes to Adam, who mentioned the recent FTC decision in the 3rd Circuit, where the court somewhat vaguely said that companies need to make a cost-benefit analysis. And it very clearly made the decision that Wyndham didn’t meet that threshold. If you were talking to boardrooms right now that were trying to make that cost-benefit analysis, what would you say to them?
SEGAL: Yeah, I don’t think we have the answers, quite honestly. I think we only have the one ruling, right? We don’t have much of a precedent. And it’s unclear where that line is drawn. So that’s a real problem: all these companies now want to do something, but they don’t know how much they have to do. And it’s not clear who’s going to be in charge of deciding where the cost falls—right, so the FTC says now you have to report attacks that cause material damage. For a long time, companies just ignored that. You know, no hacks caused any material damage. Now it seems like they can’t get away with that as much. So the board is going to have to decide where that material damage, I think, comes in.
HAYDEN: I think today’s safe harbor—safer harbor is tucking yourself under best practices. And that at least gives you the argument in court that you’ve done all that could reasonably be expected.
TOWNSEND: Yes, sir. Right here in front.
Q: Thank you. David Preiser, Houlihan Lokey.
So we’ve heard a lot about situations where systems get penetrated. Wow, they’ve been in there for months, years, we don’t know. And I’ve heard from a number of people in this business that this is causing a fundamental rethink. These are incredibly complicated systems, and they’re getting more and more complex, so there are always going to be unexpected entry points, unintended ways in and around. This whole idea of moats and walls—you can strengthen them, you can deepen them—but that’s forcing a rethink toward something more like a biological system with an active immune system. And I wonder if you’d like to comment for the audience on that, because that really interested me.
HAYDEN: Sure. We put up a traditional risk equation. The risk is equal to your vulnerability, times the consequence you might suffer, times the nature of the threat, right? So it’s classic risk. You can use it for armed conflict. You can use it for automobile insurance. Risk is equal to threat, times vulnerability, times consequence. Most of the history of cybersecurity has been in vulnerability, reducing vulnerability. That’s your wide moat and high walls. It’s firewalls, cyber hygiene, passwords, turn the machines off on the weekend and so on. People who know this better than I say that’ll keep 80 percent of the less-competent hackers out of your network, which means if you do it perfectly, 20 percent of determined hackers are getting in.
And so most current energy—technological, entrepreneurial—in the private sector is on the C factor, consequence management. You’re penetrated. Get over it, all right? Operate while under attack, survive while penetrated, wrap your more-precious data more tightly than your less-precious data. And you used the right analogy there—almost a biological sense that something is different in your network, that your network is so self-aware that it identifies an anomaly, and you can get on with responding and being resilient, as opposed to just defending at the perimeter wire.
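The risk equation General Hayden walks through can be written as a one-liner. The scores below are invented purely for illustration (each factor on a 0-to-1 scale); they are not figures from the discussion.

```python
# Classic risk model Hayden cites: risk = threat x vulnerability x consequence.
# All scores here are illustrative assumptions on a 0-to-1 scale.

def risk(threat: float, vulnerability: float, consequence: float) -> float:
    """Multiplicative, so driving ANY one factor down drives total risk down."""
    return threat * vulnerability * consequence

baseline  = risk(threat=0.9, vulnerability=0.8, consequence=0.5)
hardened  = risk(threat=0.9, vulnerability=0.4, consequence=0.5)   # moats and walls
resilient = risk(threat=0.9, vulnerability=0.8, consequence=0.25)  # consequence management
# Halving either V or C halves the risk -- the two defensive levers are symmetric.
```

The multiplicative form captures his point: if the vulnerability lever is largely played out against the determined 20 percent, the consequence lever offers the same leverage, which is where he says current private-sector energy is going.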
SEGAL: I’ll just add that I think General Hayden’s right. And the move clearly is towards anomaly detection and big data, right, so you can see all of a sudden someone’s acting in a way they shouldn’t. You know, this computer’s talking to that computer at 3:00 in the morning is an easy one, but all of a sudden we see other things. But of course, that raises privacy issues again, right, because the people that are going to be collecting the data are going to be collecting massive amounts of it on users, which will make us more secure, but also mean that people know exactly what you’re doing all the time.
TOWNSEND: Sure, right here.
Q: Joel Mentor from Barclays.
So my question—I think part of the problem, and I was discussing this with people here at the table, is that in the private sector, spending money on cybersecurity is a cost, right? And even though we say that we have become aware of the problem based on Sony and so forth, we’ve heard about that in the past with other attacks, right? So I still think there’s a problem in the private sector with putting resources toward fixing this problem. On the other hand, we have this narrative that it’s not a question of if you’re going to get hacked, it’s when.
So I think one of the reasons—one of the biggest motivations for a company to actually put resources to this is the likelihood that if something happens it’s going to affect their brand. But if you believe that this is going to happen to everyone, it actually takes away that motivation because companies are like, well, yeah, they hacked them, but they hacked, you know, us, but they hacked them as well. I mean, me personally, I’ve been hacked at least five times, it seems like, my personal information—I’m including OPM. So are you afraid of that—of that dynamic happening, where the American people actually become defeatist, and that takes away some of the motivation from companies to act? Thank you.
HAYDEN: I must admit, I hadn’t thought of it that way. And I don’t think we’re there yet. I still get outraged at OPM—(laughter)—and so, I would ask these fellows.
DOURADO: My data was breached in OPM as well, so. (Laughter.)
TOWNSEND: Me too. I think—can I tell you, I don’t think so. I sit on a board of directors. You asked the board question. Boards of directors are worried about it. Now, it’s a question of the balance—the board pushing management, and how much investment management is willing to make. But I’ll tell you, despite the fact that others have been hacked and there’s this sense of inevitability, at least on the public boards I sit on, boards are pretty focused on it. And I don’t think that that sense of inevitability will give any comfort to a board.
SEGAL: And I would also—I think, because so much of the damage is reputational, right? With the Sony attack, so much of the damage was the emails, right—the exposed emails about the racist, misogynistic, all the other things that the executives were saying. So my sense is that boards are very, very sensitive to that still. I know I’m very sensitive about that still—not that my emails are filled with racist, misogynistic things. (Laughter.) But I think the personal exposure is really what is motivating a lot of people in this space.
Q: K.T. McFarland from Fox News.
We have stuff other countries want to steal. They don’t necessarily have stuff we want to steal. Could we—if you look at the countries who are potentially our cyber enemies—China, Russia, Iran, maybe North Korea—they all have one thing in common, is they have censorship of their own populations. And they’re terrified that their populations would have an Arab Spring-like revolution to throw them out of power. Could we ever use Tor, VPNs, to tear down their cyber walls and cause them an enormous amount of grief?
HAYDEN: So you’re asking a covert action question for me, aren’t you? There’s a reason it says covert, you know. (Laughter.) Of course. K.T., it would be—number one, you don’t have to be sinned against to think about it, all right. I mean, there is a category of covert action called influence campaigns, all right, and I can’t really say much more about it. So it does not have to be in response to a particularly egregious cyber offense. It could, however, be part of your kit to try to, for want of a better word, discipline an adversary that their activity is not cost-free, that actions do have consequences.
I generally trend, since most of that—look, stealing OPM is—that’s just espionage, all right? I have no anger against the Chinese for stealing that. I mean, it’s my data so I’m mad, but intellectually I have no right to be angry. If I could have stolen that data in China, I’d have done it in a heartbeat as director NSA, and I would not have had to have gone downtown for a meeting to ask permission. It’s just what adult nations do to one another. The stuff we really get offended about is stealing for profit. And so my sense is, rather than trying to respond in the cyber domain, creating some sort of discomfort up here in the cyber domain, why don’t you go after the domain in which they’re actually committing the original crime—commercial profit.
And so I think the executive order from 1 April, the paper Denny Blair and Jon Huntsman wrote about two years ago that says punish them in the economic lane—it’s about sanctions, it’s about who gets to be listed on the New York Stock Exchange, who sells that product inside North America, it’s about whose kids get to go to prestigious American universities, it’s about whose international trade gets to be denominated in dollars. All those things can be used. And that’s what the president’s executive order of 1 April said. And that’s what I thought we were going to do to the Chinese before President Xi arrived.
SEGAL: I think there’s only—there’s two larger issues, I think, with that strategy, which is—one is that we couldn’t take it down forever, right? We would basically enter a game where we would take it down, they would put it back up, we would take it down, they would put it back up, so there would be kind of a running kind of issue there. The second is that the Chinese and Russians already believe that that’s what the internet is designed for, right? (Laughter.) They already assume that we are engaged in a broad ideological battle where our argument about openness and the internet is to expose them and overthrow them. I mean, Putin said the internet is a CIA tool, right?
So they already think we’re doing it. And so I’m not sure we gain much more by occasionally turning it off or turning it on. You know, there may be something to be said about exposing certain leaders’ bank accounts in Switzerland, if they’re portraying themselves as being, you know, a real revolutionary who’s not corrupt, that you could, I think, gain some points that way. But I’m not sure there’s much to dismantling the entire system.
DOURADO: I would add that I think encryption is already playing this role without the government making it an explicit policy. An example is the Great Firewall in China. They block certain Western news sources. Well, people started posting those news articles on GitHub, which is a site used by programmers around the world to exchange code. And GitHub uses encryption. So when you connect to a GitHub server, your connection is encrypted. So China cannot selectively block content on GitHub. They can either block all of GitHub, or they can allow all of GitHub. And so for a while, they tried to block the site. And there was an outcry in China, because you’re killing their software industry—all of the best code is on GitHub. So they were forced to unblock it. And the content is still being posted there, to my knowledge. So I think—
SEGAL: Well, then they invented the Great Cannon and knocked GitHub offline.
DOURADO: Yes, they retaliated, yes.
TOWNSEND: Sure. All the way in the back.
Q: Stephen Orlins, National Committee on U.S.-China Relations.
How significant is the agreement between President Xi and President Obama, which states, and let me quote it precisely: neither country’s government will conduct or knowingly support cyber-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantage to companies or commercial sectors? Does that mean anything, or are you guys just dismissing it?
HAYDEN: It is an explicit Chinese acceptance of the American definition as to what constitutes legitimate and illegitimate state espionage.
Q: So do you give the Obama administration huge credit for getting the Chinese to agree to this?
HAYDEN: Not huge. No, no, not huge. Not huge, and I’m not taking it to any bank, all right, but it is a useful statement to have the Chinese on record. I am incredibly skeptical that it will make much difference going forward, but the fact that they have said it means that—back to my earlier model about state espionage and what distinguishes the English speakers from the rest of the world—Xi has explicitly signed up to our definition.
TOWNSEND: I’d add to that. Remember the context of this. Susan Rice had gone to China in advance of the U.S. visit, carrying the explicit message of the intention of the administration to impose sanctions. When she arrived back, the Chinese sent a very senior-level delegation. And this was—this statement was arrived at under duress. This was solely designed to avoid the imposition of sanctions in advance of Xi’s visit. It remains to be seen whether it will be observed by the Chinese. And it remains to be seen whether or not the administration post the visit will reconsider the imposition of sanctions.
Yes, right here.
Q: Nick Beim, venture capitalist with Venrock, invested in a variety of security companies.
Two interesting and distinctive characteristics of cybersecurity are that, one, offense has such an overwhelming advantage over defense today. As many administrative passwords as you change, as much as you invest in your defense, offense can almost always get through—whether you’re government, whether you’re corporate, whether you’re an individual. And secondly, traceability is difficult. And I don’t know exactly how difficult—you guys would know much better than I—but certainly given the ability to mask attacks and the ability to outsource them to third-party groups. There may be 10 missed connections with it.
So given those, it seems to me—it gives me a great deal of humility in thinking about how much of a difference we can make in the wholesale theft of corporate IP and continued attacks on government data that could be used for intelligence purposes. And I applaud the attempts to cooperate with the private sector to set guidelines, the executive order that you described. But as I think about the incentives: aggressive offensive theft pays, it wins, particularly if you can mask it. And that’s going to be difficult to stop. That may be too pessimistic a view—you guys would be better informed than I—but I’d be interested to hear your thoughts on that perspective.
TOWNSEND: So I think—I think the question is, right, if you can mask it and you don’t exact a cost, General Hayden, how do you deter it?
HAYDEN: Yeah, well, on the whole question of deterrence theory, growing out of our physical experience up here—there are lots of questions to be resolved. It may not transfer automatically or easily. And you’re right about the technology of the way we have built today’s web; it has those attributes that do give advantage to the offense, making defense very hard. You work very hard at it. Attribution is not impossible, it’s just hard. And I guess that—there’s a lot to be said. Now, these guys know the technology better than I.
I’ll just give you one additional thought. In my life experience, good attribution does not mean getting to the point of beyond all reasonable doubt, all right? This is not about court-of-law stuff. This is about enabling governments to act in the face of continued doubt, right? So I guess what I’m saying is we’re getting better at it, but don’t expect very often to get to the beyond-reasonable-doubt standard that would meet some sort of judicial test. That said, governments act with far less confidence when they have to protect the national interest.
SEGAL: I’ll just say that—so “the offense defeats the defense” is, you know, close to a widely accepted belief that we have in cyber. I am now hearing some people say that they don’t believe that will always be true. And they point to big data anomaly detection, machine-to-machine communication. So the more that all of this is automated, the more likely it is that the defense starts having some advantages. You know, is it now 80 percent who still believe that the offense has the advantage and 20 percent who are beginning to think that? I’m not exactly sure. But I’m starting to hear things that suggest this is going to change.
And I totally agree with the general’s point that attribution is going to be political in many cases. What are you willing to show, for what purposes? And if we’re willing to burn intel in certain cases, to make specific points, then that’s what we’ll do. It may be worth it, you know, in the follow-up to the Xi-Obama agreement, right? If we see an increase, or a lack of a downturn, in the hacking in the next three months, or however many days it is, we may have to say: yes, we have these sources, this is more than we’ve exposed before, but it’s worth it to impose the sanctions and follow through on the agreement.
DOURADO: I think—I think I probably agree with Adam, and I’m a little bit more optimistic than the conventional wisdom in terms of our ability to defend. I think of it sort of as an immune system, every time we get attacked—there is an attack, there is a breach, we actually learn something about what was exploited. And usually we learn several things. There’s not just one vulnerability that’s exploited. There are typically a string of vulnerabilities that are exploited in succession. And if we can learn something about each of those, then we get—we get better at defense, faster than the adversary gets better at offense. And so I’m not—I’m probably a little less pessimistic than the conventional wisdom.
TOWNSEND: Yes, right here.
Q: Tom Glocer with the Council.
General, you mentioned earlier that you did not make a distinction when at the agencies between offensive and defensive. There’s a major distinction, though, which I think gets in the way of the agencies that help us, which is between foreign and domestic in the U.S. In kinetic warfare, arguably that makes some sense. In a world where zeros and ones move seamlessly around the world and don’t care what server you’re on—whether it’s a proxy, or Tor, onion routing, or whatever—should we—again, for the 330 million of us—should we revisit that distinction?
HAYDEN: Yeah, that’s a great question. Welcome to my world. That is the core of the controversy about electronic surveillance of the past 15 years, all right, because technology has eroded the distinctions on which we have based our policies in order to be both secure and to be respectful of privacy, right? We Americans have a very binary view of the world, which puts us at odds with the Europeans. The Europeans have this kind of soft edge, almost fuzzy, human right to privacy, right? And it can be whatever it is you want it to be, based upon today’s newspaper accounts of how they’re viewing safe harbor and not, OK?
We, Americans, are Manichean when it comes to this. You either are or are not protected by the Fourth Amendment to the U.S. Constitution. And if you are protected, we can’t do anything. If you aren’t protected, and your communications contain information of value to American safety and security, game on. You don’t have to be a bad person. It’s the information. It’s not you. And that’s been our view. The problem is with a single global integrated telecommunications structure, most of which is housed in the United States, we have developed habits of American/not American, largely based on geography, OK, rather than citizenship, and have generally considered anything in America to be of America.
Hence, you’ve got to get that little 702 finesse in the FISA Amendments Act to allow NSA to collect from companies emails between Pakistan and Yemen whose only American identity is the fact that they’re sitting on Google servers somewhere on the West Coast. And so we’re working our way through that dilemma. How do we continue to live up to the spirit of the Fourth Amendment, protecting American privacy, in a world in which all the technology surrounding it is very different? And frankly, it’s very controversial. Under the 702 program you can go, via a generalized warrant—not a specific warrant—and get what is truly a foreign communication, but it’s sitting in the United States. There’s great concern about that even among Americans—frankly, I think, because it’s been misrepresented—and that’s the core of the dilemma.
I’m sorry, just one additional turn on this. There’s nobody alive I know of who raised a finger of civil liberties concern about NSA going after Soviet strategic rocket forces communications coming out of Moscow out to ICBM fields in the Far East, you know, where we were kind of listening for words of interest, like launch, OK? (Laughter.) And the 21st century equivalent of that are terrorist, proliferator, trafficker communications, coexisting with your Gmail and mine, in the same communications pipes. So if you want your government, in this case NSA, to do for you in the 21st century what it did in the 20th, it’s got to be on those pipes. How does that make you feel? And for a lot of Americans, not all of whom routinely wear tinfoil—(laughter)—for a lot of Americans, that is a very concerning reality.
TOWNSEND: All the way in the back.
Q: Thanks. My name is Matt Spence. I recently left the Defense Department.
When I was in the government, one of the things that struck me was the disconnect between how the government and the private sector viewed the cyber threats, viewed the opportunities, viewed steps to take to defend against it. One question that hasn’t been talked about as much is, if the private sector gets increasingly not confident that the government can help protect them, do you see the rise of more vigilantism amongst the corporate sector in trying to defend themselves? And do you foresee a role for offensive cyber operations by any corporations, other private actors?
DOURADO: I think we’ve—(laughter)—so I think we’ve talked about this some, so just to underscore a few points. Generally the civilian agencies are not very good at cybersecurity. Federal agencies experienced 70,000 security incidents, in fiscal ’14 anyway. So I think that people are right to ask whether the government can defend them. I think we do see more vigilantism and more companies engaging in acts of defense. And I think that will continue.
HAYDEN: Yeah, to the degree that the government remains beyond the power curve, late to need, I mean, you know, it’s also a part of our political culture. When the government doesn’t show up, we begin to provide for ourselves. And so my hope is that we recognize that reality and then provide a legal framework in which that can be done, but not done to excess.
TOWNSEND: OK, last one, all the way in the back. Yeah, you had your hand up.
Q: Mike Moran from Control Risks.
If attribution is central to the problem, is there some way—we’ve just seen TPP, I guess, signed yesterday. The Europeans and the U.S. may eventually enter into something like that. Is there some way to use these large international constructs to at least create an attribution arbitration panel that ultimately gives the best guess as to where a major attack or strike came from? And then perhaps within the confines of that very large trade bloc, you could have some kind of graduated penalties that ultimately got to the point of sanctions.
SEGAL: I don’t think it would be through the trade agreements, quite honestly. This idea’s been floated. I think Dick Clarke used to talk about a cyber risk reduction center, which would pool information about threats that you could give to, or cut off from, nations depending on whether they supplied information and were acting hospitably or inhospitably. Interpol might have a role, right, where you could have discussions about what type of attribution you’d have there. So I think there is some discussion of that, and it would be worthwhile to pursue.
And you know, I think one of the big questions is, the U.S. strategy in cyber has been, in many ways, to try to embrace as many nations as possible, right? There was a kind of a debate about should we do the likeminded first and then kind of bridge out? As opposed to, you know, should we do our adversaries, and then kind of get it right with the likeminded? That type of approach—the likeminded approach, and sending messages to our potential adversaries—I think is gaining traction, and in some ways cutting them off from resources that they would want.
HAYDEN: I’m in the likeminded club. And I would begin with the Five Eyes, those five English-speaking democracies that have a good track record in this domain all the way back to Bletchley Park, out to G-7, then out to G-20, kind of in concentric circles, because you’re, in essence, addressing nations roughly of like values, but all of them have skin in the game. And I think that’s a logical approach.
TOWNSEND: Well, it’s 2:00 and that concludes our meeting. Help me to thank Adam, General, and Eli. (Applause.)
This is an uncorrected transcript.