Fred Kaplan discusses 'Dark Territory,' his book on the untold story of the officers, policymakers, scientists, and spies who devised a new form of warfare — cyber war — and who have been planning (and, more often than people know, fighting) this kind of war for decades, from the 1991 Gulf War to conflicts in Haiti, Serbia, Syria, the former Soviet republics, Iraq, and Iran.
The CFR Fellows’ Book Launch series highlights new books by CFR fellows. It includes a discussion with the author, cocktail reception, and book signing.
HAASS: Welcome to the Council on Foreign Relations. And tonight is one of my favorite nights, because I always like nights where we celebrate the publication of books. Books are at the core of what we do in the David Rockefeller Studies Program. It’s not the only thing we do—we like to produce a whole menu of products, from the long to the short—but books require a degree of research, of depth, of commitment, of analysis, and in many cases also prescription, that is qualitatively different from the shorter pieces which so often dominate in this day and age of 140 characters or blogs or op-eds. Which is not to say there isn’t a place for tweets and op-eds and blogs, but if they are necessary, they are not sufficient. And hence the emphasis here, as well, on books.
It coincides and intersects with something else we’ve done at the Council, which is to place a significant emphasis on all things cyber. I grew up in a different generation, and one of the big intellectual challenges of that era was how to regulate the technology that was nuclear weapons. And in this era, I actually think one of the big intellectual challenges is how to regulate the technology that is all things cyber, Internet, digital, however you want to describe it. And in some ways, it’s even more complicated than nuclear because it’s in so many hands, and its civilian role is far more profound than its nuclear counterpart’s. And as is often the case with things global, the question is how to encourage and protect and promote the aspects of it that are benign and contribute, and how to push back against those aspects of the technology that are anything but. And I would simply say the intellectual challenges are considerable, and it’s one of the reasons we’ve made things cyber, at the Council, one of our biggest intellectual commitments.
And it just so turns out that this week is cyber week. Tonight we’ve got this new book, “Dark Territory,” subtitled “The Secret History of Cyber War,” by Fred Kaplan, who spent a year here in our midst as the Murrow Fellow, which is our annual fellowship for someone from the media, and I think one of the more important programs that we do. And then, this Wednesday night, we have his former colleague and our current colleague, Adam Segal, whose book “Hacked World Order” will be the topic of the conversation.
But tonight is Fred’s. You have his bio.
So, first of all, welcome and congratulations.
KAPLAN: Well, thank you very much.
HAASS: So tell us about the title. What is—what is “dark territory,” and why did you call your book that?
KAPLAN: Right. Well, you know, this is my fifth book. And I always say, well, a title will emerge from the notes, and it never does. But this time it did. I was going over some notes of an interview with Robert Gates, and he was talking about when he first became defense secretary and he was getting briefed every day about cyberattacks and attempted cyberattacks. And he went to some of his colleagues—this wasn’t anything formal, just sort of corridor conversation—and he noted, you know, we’ve got to get some kind of quiet meeting with the other cyber powers, work out some rules of the road. You know, when I was CIA director in the darkest days of the Cold War, we had rules, the Americans and the Russians, like, you know, we wouldn’t kill each other’s spies, something like that. We need to have some rules about what kinds of targets we do not go after with this stuff, because—he went on—because we’re wandering in dark territory. And I looked at that and I said, that is the title of this book.
But I want to—I googled because, you know, I wanted to make sure it wasn’t a euphemism for some obscene act or something. (Laughter.) So it turned out it was a term of art in North American railroads for stretches of track that are not governed by signals. And I said, wow, what a great metaphor for cyberspace. So I sent him an email, did you know this? And he said, oh, sure, my grandfather was a station master on the Santa Fe Railroad—(laughter)—for 50 years in Pratt, Kansas. We talked about railroad terminology all the time. So it just kind of—it’s one of these titles that fits in many, many ways.
HAASS: And just in case it is misunderstood, it’ll increase sales.
KAPLAN: That’s right. Yeah, yeah. (Laughter.)
But I think that is the situation that we’re in. I mean, there are no rules of the road. It’s been around for a long time, as I say. But, you know, a whole military structure has gone up. Cyber Command is a combatant command. It is linked with every other combatant command. They have war plans. They have targets. They are hiring people. It’s a $7 billion annual budget.
And yet, the most fundamental principles governing this, you know—right now, there is a Defense Science Board panel at the Pentagon writing a report on cyber deterrence, because nobody’s defined it. What are you trying to deter? You know, does the government really have a role in deterring an attack on a bank, on a movie studio, on five banks, on five movie studios? Nobody knows. What happens the second day of a cyberwar? You know, this is the kind of thing that was discussed almost immediately after the nuclear bomb because—well, there were some secrets about it, but everybody saw what it did. The effects were pretty clear. Everybody knew it was made of uranium, and you knew that other countries had it. So economists, social scientists, physicists—people with a strategic bent—started asking, well, how does this alter the nature of warfare? And they started writing things right away. This has been embedded in so much secrecy—you know, the agency that used to be called “No Such Agency”—that nobody outside of it has known enough to think about it strategically until very, very recently. And I guess that’s—not to get too lofty or anything, but that was one reason I thought it would be a good idea to write a history of this, a history of something that most people think doesn’t have a history.
HAASS: So help us think this through. When I look at the United States, here we are, if not the world’s most developed society, on a short list of the world’s most developed societies. Cyber now is critical to so much of what it is we do, whether as an economy, a society, as individuals, as a military. Many other countries or societies are also dependent on it, but do we have a problem here just structurally, where ultimately we are more dependent on it, so as a result if things go wrong we’re inevitably more vulnerable than others?
KAPLAN: Yeah. I mean, we are better than any other country at cyber offense. Our machinery is more developed. Other countries are pretty good, too, but we have the best for that. At the same time, you know, it’s like we have much better stones to throw at houses, but our house is much glassier than anybody else’s house. So it can be wrecked pretty badly by much less capable stones.
And, you know, this grew up unthinkingly. There was a commission in the Clinton era looking at vulnerability to cyberattacks, and they looked at critical infrastructure. They kind of defined what that meant: banking, finance, transportation, waterworks, energy, and so forth. And they would call in the heads of, you know, various utility companies. And the utility company head would boast about how much they’re going online, and they’re saving money, and it’s very efficient, and they can expand more. And then they would be asked, well, what are you doing about security? And they’d be like, well, what do you mean? You mean like fences outside the power plant? What do you mean? They didn’t know. We built up this entire infrastructure—and now everything is plugged into this—before and completely independent of the group of people who were thinking about things like attacks. And again, part of it was the extreme secrecy of this, that the people who were actually building up these SCADA systems, everything that’s plugged into computer controls, had no idea that this other thing was even going on.
HAASS: But, given that we are where we are—again, I’m not an expert, as everyone in this room has already figured out—but, given that, is there a chance—I never use the word “invulnerability” because nothing is invulnerable, but even reducing vulnerability sounds tough. So no matter how much we spend and how much we do, significant chunks of this, quote/unquote, “system” are going to be vulnerable. So before we talk about the offensive side, how do we think about defense? If you can’t essentially eliminate or come close to eliminating vulnerability, and given how dependent we are on it, how do we deal with that dilemma?
KAPLAN: Well, in the military they’re actually doing a much better job than they used to. They have whittled down to just eight points the intersections between military networks and the broader Internet. So, you know, theoretically, the NSA could just sit on top of those intersections and do a pretty good job of seeing things coming over the transom.
That said, every time they do a wargame where this is part of the game—and it usually isn’t part of the game because if it were, the wargame would shut down instantly and you couldn’t play it—but every time they make it part of the game—like, come in, red team, come inside our command and control and see how you can do it—apparently they always get in. They always get in.
Now, outside of the military, civilian government, I mean, they stopped counting long ago how many points of intersection. Industry, I mean, nobody can even begin.
And that is why—and this has led to something that on the one hand makes sense, and on the other hand is kind of dangerous. This has now become part of public doctrine; it was held secret for quite a long time. It started especially when Keith Alexander was NSA director, and it’s continued under Admiral Rogers: What is the best way to secure the nation from a cyberattack? Well, you have to get inside the network of the other side and see if they’re planning an attack.
So back when they were first systematizing this—you know, the military starts putting in acronyms—you had CND, computer network defense; CNA, computer network attack. And then there was something in the middle called CNE, computer network exploitation. Literally, this just meant you exploit vulnerabilities in the other side’s networks. You get in. You could say we’re just poking around to see what you guys are doing, the same way you’d have a spy in somebody’s government office just to see what’s going on there and whether they’re planning an attack. However, the difference is that in this case it takes just one more step to go from CNE to CNA, computer network attack. It’s the same technology. It’s the same skill set. It’s the same bug that’s making its way through the networks.
So they’re doing it to us. We’re doing it to them. So the danger is that, in a crisis—what we used to call in the nuclear field a crisis situation, or crisis stability and crisis instability—this is inherently a crisis—a situation of crisis instability. We’re in each other’s networks. If something looks like it’s approaching war and if you think that going after the guy’s computer networks can seriously degrade his command and control, and therefore you have a one-up in the war that seems on the verge of happening, then you have an incentive to go first. And it can be done instantaneously, before the other guy goes first.
HAASS: But you’ve been using words like “we.” I would think that part of the problem is that, in nuclear, with missiles and aircraft and the like, your ability to identify the source, you know, the other side, is not that difficult. With nuclear terrorism it potentially gets more difficult. I would think the problem here is exponentially more difficult simply because at times we can’t even figure out who the other side or sides happen to be.
KAPLAN: It used to be a huge problem. They’ve gotten much better at being able to trace this. And one way they trace this is part of this business of being inside the networks.
For example, this is an extreme case, but how is it that the government said very soon after the hack of Sony that it was—we have very, very high confidence that it was the North Koreans who did it? And you might recall there were some computer experts at the time—I think most of them have changed their tune—but at the time said, I don’t know that it was North Korea; it might look like an inside job to me. Well, the reason why they knew that it was North Korea is that the NSA had hacked into North Korean networks for quite a while. Now, they weren’t monitoring them in real time, because there’s no need to, but they could go back through the files. And this shows you how inside this is. The people in the NSA could watch on their monitors what the North Korean hackers were watching on their monitors while they were doing the attack. And I am told that when the Chinese hack into certain military networks here, they can kind of do the same thing. The level and depth of penetration are pretty extraordinary.
HAASS: Let me ask you one more question on this area, and then I’ll broaden it. Which is, when you look at where we are, but more important where we might be—when you see what’s going on with both the advance of the technology, the proliferation of the technology, and so forth—are you at all optimistic that any—almost to use the nuclear parallel again—we can set up rules of the road that deter/discourage certain kinds of behaviors we don’t want to see? Is your sense that’s just a bridge too far, there’s too many players, these things are too dispersed, you just can’t set up, again, the equivalent of an arms control regime for all this?
KAPLAN: Well, a few levels.
One, the more that other societies become networked, the more that a kind of mutual-assured-destruction situation arises, almost without anybody deciding it. But, you know, you have some countries that are a long way from there—Iran, Syria, North Korea. There are about 20 countries with explicit cyber units inside their militaries. I mean, some are a lot better than others, but it’s growing up in governments that aren’t very well plugged in.
With the Chinese, you know, because now it’s come out in the open, you know, we’re kind of at a level where we are currently talking about perhaps setting up a forum which might decide what kind of agenda might be set for talks. You know, we’re about five steps away from—
HAASS: But to be fair, though, whether it’s implemented or not’s a separate issue, but the U.S. and China did agree not to do theft of intellectual property.
KAPLAN: That’s right. The thing about that is we don’t really steal their intellectual property anyway.
HAASS: But they might steal—rumor has it they are stealing—
KAPLAN: They are stealing ours.
Yeah, yeah. But, you know, I forget the exact wording, but there were some awfully big loopholes in that agreement. But, yeah, the fact is that we are talking about it. It used to be, you know, Obama’s diplomats, at every one of these Asian security meetings, they’d bring it up and the Chinese would say, oh, no, we don’t do this; we will fight together on this; we don’t do any of this. After the Snowden revelations, it became, pfft, what are you talking to us for? You do this more than we do. And, yeah, they realize what situation they’re in vis-à-vis each other, and that is progress of sorts, that’s true.
HAASS: So what about—let me sort of throw out, then, a potential rules of the road. You allow cyberespionage—
—because we allow espionage of other stuff. We try to disallow intellectual property theft because that would violate our basic rules of patents and other sorts of things. And the third part is, what is it we either want to allow or disallow—encourage/discourage—about the offensive use of cyber? Stuxnet and all that was an occasion where, rumor has it, again, the United States or Israel might have been involved in ways to disrupt the Iranian program. Do we want to, if you will, unilaterally or even collectively disarm? Or is it too important, such that we want to preserve this potential tool in certain situations where economic sanctions or military attacks might not look very appealing?
KAPLAN: Well, yeah, I mean, I think that’s a key question. I guess—
HAASS: Well, I ask the questions. You give the answers here. (Laughter.)
KAPLAN: I’m complimenting you.
HAASS: (Laughs.) (Laughter.)
KAPLAN: I’m not throwing it back at you. I plan to deal with it.
I guess there are two ways to look at this, maybe more if I think of them along the way. One is, going back to the problem I spelled out earlier, the fine, fine, almost invisible line between computer network exploitation and computer network attack, because I’m just in here poking around, but with one more push of the button you’re destroying what you’re poking around in.
On the other hand, I mean, you’d think we have an incentive to go there because, as I said, our house is a lot glassier than anybody else’s house. But what kind of advantage we get by going first in this kind of conflict sort of eludes me. I don’t see it. We are so much more vulnerable to this than anybody else, unless we’re in a position where we can do the equivalent of a disarming first strike—you know, throw their command and control and their communication networks into such disarray that they can’t do anything on the bounce back. Unless we can do that, I don’t see how we, two or three moves along the chessboard, get any kind of advantage out of this. So, to the extent that there are political/diplomatic openings for this, I would be in favor of moving as quickly as possible.
On the other hand, you know, you do now have a situation where you have one four-star who’s the director of NSA and the commander of U.S. Cyber Command. He has had the advantage, until very recently, of engaging in something that nobody else really understands remotely. You know, you look at the congressional hearings on this, and he’s using all this kind of euphemistic language to talk about things, and it’s clear that nobody up there on the panel has any idea—and these are the people who are entrusted to deal with it in any way. They don’t know what he’s talking about.
So I think we all have to become more thoroughly educated. It’s the kind of thing—I mean, you know, Richard, when we were—when we were both studying nuclear weapons—
HAASS: By the way, I should—we have a conflict of interest up here. What you have is Oberlin Class of ’76, Oberlin Class of ’73. So I just want to get that out on the—(laughter)—on the table.
KAPLAN: I think we went to different graduate schools, though.
But, I mean, there was kind of a—I don’t know. If you knew anything about nuclear weapons, you kind of kept up the pretense that it was way too complicated for anybody else to understand, right, so they would keep coming to people like you and me for advice on what to do. (Laughter.) Or, in my case—you know, that’s how I rose through journalism: oh, well, nuclear has become a very big deal, and, God, we don’t know anything about it—we have to get Michael Gordon for The New York Times and Jeff Smith for The Washington Post and, as it was, me for The Boston Globe to decode all this stuff for us. With cyber, because it has been so heavily classified for so long, people would just rather let the smart people in the NSA deal with it than get into it deeply, which is a big mistake.
HAASS: It’s also the reason we have children, because they deal with the remote. (Laughter.)
KAPLAN: (Laughs.) That’s right, that’s right.
HAASS: Since none of us can figure that out.
Last question for me, then we’ll open it up. Talk about one of the current issues, which is, given all this, your thinking on the whole question that’s come up with Apple versus law enforcement, and this issue of the tradeoff between privacy—individual privacy, the commercial rights of companies—versus what you might call collective security. Where does your own research in this area take you?
KAPLAN: OK, I just have a few points and I’ll try to do this quickly.
First, the thing that people don’t understand is that the cooperation or complicity, whichever word you prefer, between telecoms and the intelligence agencies goes way, way back. I mean, it starts in the 1920s, when an offshoot of a World War I-era intelligence agency persuaded Western Union to give them access to all the telegraphs going in and out of the country. When telephones came along, AT&T had a special relationship with the NSA and the FBI. They thought that this might get disrupted by the splitting up of the phone companies, but no, everybody was cool with maintaining the same thing.
With the Internet companies, it actually got deeper and more two-way. For example, if you’re Microsoft or Cisco and you want to sell some software or hardware to the Defense Department, it’s got to be vetted for security. And who vets it? Something within NSA called the Information Assurance Directorate. So when Microsoft put forth its first Windows operating system for vetting, these guys found 1,500 points of vulnerability in the system. Now, they helped them close them—most of them, not all of them. They left a few open so that when other countries bought this they could get in easily, and Microsoft was fine with that. More recently—I think in 2009—when the Chinese hacked into the source code of Google’s Chrome system, the NSA helped them fix it.
So fundamentally what’s going on here is the FBI and the NSA trying to preserve a relationship that would certainly be made much more difficult to maintain in the coming generations of encryption, and Apple—specifically, Tim Cook—wanting to disrupt it. In fairness to Tim Cook, he’s never really been quite as complicit as some of the others. I mean, they have answered court orders requiring them to open up 70 phones over the past several years. He devised this new software that he thought got around that—well, you know, hey, I can’t open it up; the user has the code, I don’t. The FBI, which probably really means the NSA, figured out a way around it. And, in case some of you aren’t quite clear on this, basically, they said, look, what we want you to do is this: you have this security layer where, if somebody tries to put in 10 passcodes and they’re wrong, all the data in the phone vanishes. We want you to change that. I’ve been convinced that Apple overstates it when they say they would need to write a whole new operating system. They could change it so that it shuts down after a thousand tries or 10,000 tries. And then the NSA, the FBI, or even Apple—because this stuff is commercially available—can go in and do what they call brute force—just hit it with these password-sniffing programs that try out a thousand codes per second, and eventually you’ll get there.
But what’s going on with this case has, I think, almost nothing to do with this particular phone. They’ve already got the metadata for this phone—you know, who’s called whom over what period of time, what dates—because that’s up there with the phone companies. They’ve already got it, and apparently there’s no foreign numbers that either called or were called by this phone. Maybe there’s something, but I think the FBI and the intelligence—well, mainly the FBI more than the intelligence agencies—have been looking for a case where they can expand their legal prerogatives to get into this.
And this is, I think—I think Cook is making a mistake by making such a big deal out of it, because it’s not a good case from his point of view. There’s no Fourth Amendment case, because the county owned the phone, and the county has given consent. It’s not a privacy issue; the guy’s dead. And the political optics, I mean, they really suck. This wasn’t just some two-bit drug dealer. He was a mass murderer who was in touch with ISIS. And also, you know, let’s say they win their legal case. Dianne Feinstein has a bill set to go which requires companies to strip their encryption layer if presented with a legal warrant. And to me, that goes much farther than what the FBI is asking for in this case.
But it’s a dilemma. There’s no legal history, no case history for this sort of thing. And, you know, the Supreme Court—courts—don’t like to make new case history, especially over something that is filled with so many dilemmas, wrapped up with national security, and that probably really requires a master judge to get into the depths of.
HAASS: OK, we’re going to open it up.
Before, I didn’t do my work completely. I should have mentioned that Fred now is the national security columnist for Slate. He’s the author of four books before this one. As he said, this is his fifth. His most recent one was “The Insurgents: David Petraeus and the Plot to Change the American Way of War.” He was a Pulitzer Prize finalist for that and won a Pulitzer Prize when he was writing for The Boston Globe. Other than that, he has not been busy. (Laughter.)
So let’s open it up for all the questions I didn’t have the wit to ask. We’ll get as many in as we can. Raise your hand, identify yourself, and we will let Fred do the answer. First one from the gentleman here.
Q: Masazumi Nakayama, Citigroup.
Should U.S. taxpayers spend more money on building up and enhancing the cybersecurity infrastructure, or not? If so, in which area should we spend: offense capability, defense capability, or something else?
HAASS: Please—(inaudible)—just so we can get lots of people in. I don’t mean to be rude.
KAPLAN: Well, I mean, again, one problem with this is that the two have been defined as synonymous at this point. At one time they tried to say, OK, what will it take to defend critical infrastructure? But the number of intersections between their networks and the overall Internet is too many to count.
During the Clinton administration, there were some people who wanted to impose mandatory security requirements on critical infrastructure. This was resisted by lobbyists, by the companies, and even by the Treasury and Commerce departments. There was one guy who maybe some of you know, Richard Clarke, who was in the White House. He wanted to create a parallel Internet for critical infrastructure which would be tied into a government agency. You know, this was leaked and denounced as Orwellian. So defense is now offense. That’s kind of what people need to understand, whether they like it or not—or it’s been defined as one step short of offense.
Q: Thank you, Richard. Vijay Vaitheeswaran with The Economist.
You mentioned China in passing. Can you give us a sense, when it comes to cyberwar, what does China do well and what are areas or tactics which it hasn’t yet perfected? Thank you.
KAPLAN: What I’ve been told by people who know this stuff, you know, intimately is—the distinction they make between Russia and China is that Russia, they’re very cagey, they’re very good, and sometimes they’ll be doing something and it’ll take a while to see that they’re doing it. China is just indiscriminate. They’re everywhere. They don’t care if you see it or not. They’re going after everything—trade secrets, military secrets, company secrets, just everything they can possibly do.
And the other thing about Russia: it’s kind of this weird system where some of it’s the FSB, and some of it is sort of criminal networks that have deals with the FSB, where the FSB protects them and the criminals share their newly discovered tools and techniques. There are also some well-known antivirus companies that might be tied in with this. I don’t know; I don’t want to get hacked tonight, so I’ll leave that alone. (Laughter.)
But with China, most of this is centered in a unit of the Chinese army. They train in this. They have doctrine in this that is very much modeled after American military doctrine, but with a five-year lag time. They are everywhere.
And then, your implicit question is what to do about it. I’ve told people—you know, friends of mine knew I was doing this book, so they said, well, what do you do? What can I do? And I say, well, look, if what you’re worried about is some criminal or some punk or just a mischief-maker getting into your account, stealing your Netflix code or whatever, there are things you can do—some pretty obvious things—and they tend to work. If somebody really wants to come after you or something you have, and they really know what they’re doing, and especially if they have the resources and time of a nation-state, there’s ultimately very little you can do about it. It used to be thought that, well, I’ll just unplug my computer from the Internet, create an air gap. But there are ways over the air gap. It’s harder. They’d have to really go to work on you. But if they wanted to, you know, they always get in.
HAASS: Yes, ma’am.
Q: Hi, thank you. Mona Aboelnaga Kanaan with K6 Investments.
Other than the punk, can you talk a little bit more about asymmetric cyberwarfare, what criminal and terrorist organizations are doing against us and other governments, and how much we should be concerned about that?
KAPLAN: Well, in terms of the terrorists, this has long been a concern—in fact, it started, you know, when some people in the White House began to discover people like the L0pht, if you’ve heard of them, and this guy named Mudge—just private citizens, kind of brilliant computer geeks who could do the kinds of things that had previously been thought exclusive to nation-states.
As far as I have been able to tell—and maybe some of you out there know differently—groups like ISIS, al-Qaida, this sort of thing, they’re not there yet. They have neither the technical skill nor the money to pay people with the technical skill who would then consent to work for them while knowing that their communications and everything else are so heavily monitored. So I think we’re a ways away from a terrorist cyberbomb in that sense. But that’s not to say it’s out of the question. There are a lot of people out there who used to work at, you know, places like the NSA’s Tailored Access Operations office. What if one of them all of a sudden got a screw loose or something? I mean, movies are written about this, right? So it’s not out of the question.
In terms of other governments, I mean, you know, who would have thought North Korea—which can barely maintain electricity for its citizenry—would be doing some of this? Often, they hire people in places like Singapore and Thailand to do it for them. I guess the short answer is I don’t think you can exclude anybody when it comes to nation-states. And now that it’s become such a fashionable thing, for everybody who wants to become a major power and can’t afford to do a Manhattan Project and get a nuke, this is a much shorter cut to being able to do instantaneous attacks of one sort or another halfway across the globe. So people value it, and so you have to assume that they’re going to get it.
Q: Thanks very much. Liz Economy from the Council.
Welcome back, Fred. It’s great to have you here.
So, near as I can tell—I haven’t had a chance to read your book yet, but it looks like it’s a history—
HAASS: That’s because you’re writing your own. Is that the reason?
Q: (Laughs.) Yeah. (Laughter.) That’s right.
KAPLAN: So she’ll never—she’ll never read my book.
Q: It’s almost done. (Laughter.)
Anyway, back to the point. So it seems to be a history of, sort of, U.S. cyber program development. I’m wondering whether there were two or three moments in time—or people, events, something—that seemed to put us on a certain trajectory, or defining debates that we should really take away.
KAPLAN: I know Elizabeth, but she—I did not plant that question with her. (Laughter.)
But the thing that people don’t realize is that this goes all the way back to the dawn of the Internet. In 1967, the ARPANET was about to roll out. That was the precursor to the Internet. And there was one guy, his name was Willis Ware. He was a computer pioneer. He’d worked with von Neumann at Princeton. He was the head of the Computer Science Department at the RAND Corporation, and he was on the Scientific Advisory Board of the NSA. And he wrote a paper. It was secret at the time, but it’s been declassified since. It’s a fascinating paper to go back and look at. And he said, look, the thing about putting information on a computer network with online access—it might be the first use of the word “online”—online access from multiple unsecured locations, is that you’re creating inherent vulnerabilities. You’re not going to be able to keep secrets anymore.
So when I was doing my research, I talked with this man named Steve Lukasik, who was the deputy director of ARPA at the time, and I said, did you read Willis Ware’s paper? Oh, yeah, sure, I knew Willis. Well, what did you think? And he goes, well, I took it to our team, and they looked at it and they said, oh, look, don’t saddle us with a security requirement. I mean, look how hard it was to do what we’ve done. It’s like saying to the Wright Brothers that their first plane has to carry 20 passengers for 50 miles. Let’s take this one step at a time—and in the meantime, it’ll take decades for the Russians to do something like this. And it did. It took two-and-a-half, three decades. In the meantime, whole systems and networks had grown up with no provision for security whatever.
So I see this as sort of the bitten apple in the digital Garden of Eden. It was there from the very beginning.
And, by the way, Ware knew that this would happen because, as I say, he was doing things with the NSA. He knew that we were hacking into everybody else’s—they weren’t computers then, they were phones and radio transmissions—and that they could do the same thing to us someday. And once you had this going with computers, which were tied in with classified and unclassified systems, it would be bad.
Now, the other crucial thing happened—and this was really one of the most surprising things I found—in the first weekend of June 1983, Ronald Reagan is up at Camp David and he watches the movie “War Games.” Remember “War Games”? Matthew Broderick plays this teenage whiz kid who unwittingly hacks into NORAD’s main computer, thinking that he’s come across a new game called “Global Thermonuclear War,” and almost sets off World War III. So Reagan’s back in the White House the following Wednesday. There’s a big meeting, not about this; it was about the MX missile, actually—to get nostalgic for a moment. (Laughter.) And at one point, though, he puts down his index cards and he says, has anybody seen this movie “War Games”? OK, nobody had seen it. It had just come out. He launches into this very lengthy plot description. People are looking around like, what’s going on here? He turns to General John Vessey, who was the chairman of the Joint Chiefs of Staff, and he says, General, could something like this really happen? He goes, I’ll look into that, Mr. President. You’ve heard those words before. Comes back a week later and says, Mr. President, the problem is much worse than you think. (Laughter.)
This leads, a year later, to NSDD-145, the first presidential decision directive on telecommunications and computer security, which reads a lot like every paper you read today: our computer networks are vulnerable to electronic interception by, you know, terrorists, by foreign governments. Except it took a little spin. The NSA basically took over this directive, and they wrote it in a way that the NSA would basically control and set the standards for all computers in the United States, everybody’s. Well, some people on the Hill didn’t like that, so they rewrote it. So the NSA gets to secure dot-mil. The Commerce Department does everything else. OK, the Commerce Department doesn’t know anything about this. They can’t do anything.
The NSA at the time had no interest in protecting dot-mil. If they found a breach in a system, they were going to exploit it, not patch it. So for another decade this problem lingers.
It’s been known about. It started getting serious—real attacks started happening—in the ’90s, and you can read the book for that. But this is a problem, in other words, that has been known about on some level for almost 50 years. It has been known at senior levels of government for more than 30 years. There have been actual cyberattacks for 20 years. And yet, in terms of figuring out what to do about it, we’re still—you know, it’s the equivalent of 1948 in terms of nuclear weapons.
Q: Kim Davis from Charles Bank Capital.
Could you talk about proportionality, in the sense of conventional responses to cyberattacks—both in terms of how we’re thinking about how we might respond, and how we think a South Korea might respond to a North Korean attack? Because it seems to me that’s something that doesn’t get a lot of talk. It’s cyber versus cyber, but that’s not the only way that people are thinking about it.
KAPLAN: Yeah. Well, the U.S. government has said that we reserve the right to respond to a cyberattack with non-cyber means. But you’re right, it’s not just cyber versus cyber. We’re still—and by “we,” OK, they, the government—at a very primitive state of thinking about this. As I say, there’s a Defense Science Board panel going on right now talking about cyber deterrence, and to tell you how primitive it is, let me just tell you a story.
I was interviewing this guy. This is like my third interview with him. Pretty senior in the intelligence world, especially the NSA. I sit down and he goes, what do you know—what are your thoughts about cyber deterrence? And I said, well, I don’t know—that’s why I’m talking to a lot of people. Nobody seems to know. I don’t know. And he goes, ah, I was hoping you would have thought about this, because I’m on this—(laughter)—I’m on this DSB panel on cyber deterrence. I thought you might want to join. So I’m thinking, if they’re asking me—(laughter)—to join, boy, they’re really—(laughs)—in bad shape. So they don’t know.
I mean, you know, the Sony attack by North Korea—this is the first time that a president said we are going to retaliate at a time of our choosing. Three days later, North Korea’s Internet was unplugged. They thought we did that. I’m pretty confident we—the U.S. government—had nothing to do with that. North Korea would think that we did, though. But, you know, Jesus, there are like a thousand IP addresses in all of North Korea, and they’re all hooked into one server in China. So what do you do—what is proportionality? Nobody has figured it out. I mean, it’s hard—they really haven’t figured it out completely even with nukes, for all the decades we’ve been thinking about this.
HAASS: To talk about proportionality, is there any relationship between the traditional laws of war and what we’re talking about here? Or is this sui generis?
KAPLAN: Well, Robert Gates asked that question when he started getting all these cyberattacks. He sent a letter to his counsel and said, at what point do these kinds of attacks constitute an act of war? It took two years for the counsel to come back with an answer. And it wasn’t really an answer; it was like, yes, under certain circumstances, it might.
And there are things like the Tallinn Manual. They talk about, you know, does it kill people, does it do significant damage to property. One could make the case that Stuxnet is an act of war, a violation of international law. It certainly was an attack on critical infrastructure—as was the Iranians’ retaliation, the Shamoon virus, which wiped out 30,000 hard drives at Saudi Aramco and planted an image of a burning American flag on everybody’s computer screen.
HAASS: I would have thought Stuxnet was simply a classical case of a preventive attack. It was to interfere with what you might call a gathering threat. It wasn’t yet imminent, but it was gathering.
KAPLAN: It was, but it was also an attack on critical infrastructure. It was an attack that did damage to physical property. So, you know, it’s—
HAASS: Is that Mr. Mudd? Dan?
Q: Dan Mudd at Paladin Capital.
So, given all that, and given the intersection between the private sector and the military, how prepared are we to start forums, discussions, hotlines, protocols between the public sector and the private sector in an event?
KAPLAN: They’ve tried. I mean, when the Department of Homeland Security was created, that became the go-to agency for the defense of civilian and civilian-government cybersecurity. As you might expect, given everything else we know about the Department of Homeland Security, it hasn’t worked out too well. They do have something called US-CERT, the Computer Emergency Readiness Team. It’s a lot better now than it was a few years ago. But because the elements of critical infrastructure are—most of them—in private hands, and they have resisted anything that bears the slightest resemblance to regulation, much less actually hooking up their networks to something that the government can sit on and monitor, it’s—as I said, it’s still in a very primitive state. And if there were some massive attack—they’ve tried to wargame a few—it falls apart pretty quickly. On the military side, it’s much better in terms of protocols—what happens if this happens, and who you call if that happens.
You know, I mean, President Obama signed an executive order last year, and it went a certain distance—it did some things that previous orders like this hadn’t done. However, there was, in the middle of it, one line which says nothing in this order should be interpreted as a mandatory regulation. So it’s voluntary. Do this if you want, we’ll help you, but we’re not going to make you. So it’s a tough one.
HAASS: As you can tell from the questions and the answers, we could go on for a very long time. And I’m always—it used to be said by the chairman of the Federal Reserve Board that his job was to take away the punchbowl just when the party was getting going. The purpose there was to avoid inflation. The purpose here is to allow those of you who have made other plans to do them, to allow people to have a reception, to allow people to buy the book—important—
KAPLAN: I will sign it.
HAASS: And get it signed. And right now, though, the important task is to essentially thank Fred for producing a totally interesting, timely, and engaging book. Congratulations.
KAPLAN: Oh, thank you. Thanks very much. (Applause.)