Experts discuss the current threats and vulnerabilities in U.S. cybersecurity and the level of U.S. preparedness in responding to the next cyberattack.
The Malcolm and Carolyn Wiener Annual Lecture on Science and Technology addresses issues at the intersection of science, technology, and foreign policy. It has been endowed in perpetuity through a gift from CFR members Malcolm and Carolyn Wiener.
TEMPLE-RASTON: Good afternoon. Welcome. Welcome to today’s Council on Foreign Relations Malcolm and Carolyn Wiener Annual Lecture which is titled “Cybersecurity Threats: How Vulnerable Is the United States?”
I’m Dina Temple-Raston and I’m the counterterrorism correspondent for National Public Radio. And I’ll be presiding over today’s discussion.
And you have the bios for our panel here today, so I thought what we would do is just sort of set the table a bit.
Oh, you guys are in an absolute—let me do this.
Let me set the table a little bit.
And I wanted to start with you, Anup. Could you talk a little bit about the changing landscape for cybersecurity?
GHOSH: Sure, glad to. And thank you all for coming to this lunch.
So I’ve been in cybersecurity for a while, since the late 1990s. And we’ve seen the threat evolve significantly. And probably more recently, you know, I would say from the 2008 to the 2014 timeframe, most of the cyber threats that we were tracking were nation-state cyber threats that were really focused on stealing intellectual property. And so you would see these operations against specific industry sectors where they were getting in under the radar with the express purpose of stealing data. And they would have all kinds of funny names: APT1 or APT28, Fancy Bear, and so on. When we were tracking them, their express purpose was to be low and slow, under the radar, stealing valuable data either to support their own industries in their countries or to hold valuable information, so it was collection espionage.
What we’ve seen over the last two years is really the monetization of hacking for the express purpose of revenue generation. And part of that has been an evolution of malware itself towards destruction. And this is a very worrisome trend.
So, for example, I’m sure many of you—we’re going to talk about WannaCry in a little bit—but many of you have heard about ransomware. And the whole idea behind ransomware is to immediately monetize an infection versus, say, stealing credit card data, getting mules, cloning, and so on. That’s a longer process with a lower conversion rate. If I can immediately monetize an infection with a reasonably high conversion rate, I can make money almost instantly. So that’s one category of destructive malware.
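The conversion-rate arithmetic Ghosh describes can be illustrated with a toy calculation. All of the figures below are hypothetical, not from the discussion; the point is only the comparison of conversion rates and time-to-cash.

```python
# Why ransomware monetizes faster than card theft, per Ghosh's point.
# All numbers below are hypothetical, purely for illustration.

def expected_revenue(conversion_rate, payout, days_to_cash):
    """Expected revenue per infection, plus a rough per-day rate."""
    ev = conversion_rate * payout
    return ev, ev / days_to_cash

# Stolen-card pipeline: mules, cloning, resale -- low conversion, slow.
card_ev, card_rate = expected_revenue(0.01, 500, 60)

# Ransomware: the victim pays directly, often within days.
ransom_ev, ransom_rate = expected_revenue(0.10, 300, 3)

print(f"card theft : ${card_ev:.2f}/infection, ${card_rate:.2f}/day")
print(f"ransomware : ${ransom_ev:.2f}/infection, ${ransom_rate:.2f}/day")
```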
And I think a little bit later we’ll talk about more severe categories of destructive malware that can actually bring down things like power grids.
TEMPLE-RASTON: And, Adam, can you talk a little bit, just building on this, about the big actors that we’re seeing now and how they’ve changed?
SEGAL: Yeah, I think we have seen four major trends. The first one is that the Chinese seem to have reduced their theft of intellectual property and business strategies for commercial reasons. So President Xi and President Obama signed the agreement in 2015, the Chinese then went off and signed an agreement with the U.K. and the G-7 and the G-20. And so far, the public evidence is that that seems to be holding.
TEMPLE-RASTON: Sorry. Why is that?
SEGAL: Well, there’s, I think, two possible explanations. There’s an internal explanation, which is that China is undergoing an anticorruption campaign, and it may be that President Xi got tired of individual actors using state infrastructure for their own personal gain. Also, the Chinese are consolidating their cyber forces under the Strategic Support Forces so they bring them into more political control. And you probably don’t want the PLA hacking for commercial theft, you want them thinking about how you would use cyber as a military tool, and so there would be reasons to do that.
And then there was the threat of sanctions. Clearly, the U.S. in the runup to the summit was leaking lots of stories about how they were going to sanction high-level Chinese officials or state-owned enterprises, and that seems to really have gotten the Chinese attention.
Second, as was mentioned, you know, we’ve seen North Korea move from being mainly a disruptive attacker, so lots of DDoS attacks, the attack on Sony, destructive attacks on South Korean infrastructure, to one that’s now using cyber for economic—illicit money. So, of course, the hacking of SWIFT, and the theft of 80 million (dollars) from the Bangladesh central bank, and then it seems as if they’re involved in this WannaCry ransomware.
Third, and this is what we’ll probably spend a good part of our discussion on, we knew that Russians were spying on political organizations, we knew they were conducting information operations through traditional measures, we didn’t expect them to combine the two against the United States. So I think that’s what we saw, of course, this summer.
And then finally, we saw some new actors, right? So just a couple of weeks ago, we saw this story about Vietnam hacking the Philippines and releasing some of that data publicly. So some smaller actors whom we wouldn’t have thought of as being players are getting into this space.
TEMPLE-RASTON: And if you were to rank these players, how would you rank them?
SEGAL: I mean, from a U.S. perspective, I think, you know, we’ve always said that the Russian hackers were the most skilled and the most dangerous. I think that is still true. The Chinese have reduced their numbers, but are still, I think, out there. And most worrying are going to be, you know, North Korean and Iranian hackers because their capabilities seem to be rising, and we don’t really understand if there’s any way to deter them or at least to engage in some type of sanctions or other things that may raise the cost for them.
TEMPLE-RASTON: And did we underestimate the North Koreans? I think we have this image of them sort of being in the middle of a dirt field with one PC. They’re a lot more sophisticated than that?
SEGAL: Yeah. So, you know, there’s about 1,024 IP addresses total in North Korea. You know, there’s more than that in a three-block radius here. (Laughter.) The vast majority of North Koreans don’t have any access to the internet. I think there was a sense that, oh, all right, they could do distributed denial of service attacks, they weren’t going to be a particularly sophisticated adversary in this space. But what we see is that if you start focusing resources and attention to it, you can move up fairly quickly in this space.
TEMPLE-RASTON: And, Kiersten, just to make you part of this conversation, sorry about that, can we talk a little bit? We mentioned WannaCry, the ransomware. Can we talk a little bit about WannaCry, legacy software, and how that became such an enormous issue?
TODT: How WannaCry did? Yeah. So, I mean, I think what WannaCry prompted is sort of an age-old discussion in cybersecurity, and we were talking about this a little earlier, between compliance and risk management.
TEMPLE-RASTON: Sorry, can I stop you for a second? Could you give us, like, a two-sentence description of WannaCry for anyone who isn’t following that?
TODT: So WannaCry was an attack that essentially happened because computers were not updated. It exploited a vulnerability in a Microsoft operating system. And what happened was, in March of this year, Microsoft identified the vulnerability and issued a patch, which is essentially a remedy for the flaw in the operating system. And about six to eight weeks later, for those people who had not patched their systems, it opened up a tremendous vulnerability that led to issues all over, including in the U.K. hospital system. I think probably Anup has the statistics on it, but the reach of that breach was significant for that type of vulnerability.
And what it did was it really called again into the limelight this challenge between compliance and risk management. And so we often talk about cyber hygiene, and that’s becoming a phrase that’s getting so overused that it’s sort of lost its meaning. It’s these very basic things that need to happen on operating systems in order to keep them up to date.
And what the challenge is right now is, when you are running an operating system as part of an enterprise, if you are not updating and keeping your system current, that is really just—that’s low-hanging fruit, and that is a serious failure on the part of whoever is responsible for that system: the technology engineer, the administrator.
And so much of that got exposed for individuals who had purchased the software in the free market, not on the black market; these weren’t bootleg copies, which have their own issues. The fact that we saw this calls into question again this idea that we are missing basic-level approaches to cybersecurity.
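The basic hygiene check Todt is describing, comparing each machine’s patch level against the date the vendor shipped its fix, can be sketched in a few lines. The host names, dates, and fleet structure here are all invented for illustration.

```python
from datetime import date

# Minimal sketch of a "cyber hygiene" patch check: flag hosts whose
# last patch predates the vendor's fix. All names and dates invented.

PATCH_RELEASED = date(2017, 3, 14)  # day the vendor shipped the fix

fleet = {
    "ward-pc-01":  date(2017, 3, 20),  # patched after the fix: fine
    "mri-console": date(2016, 11, 2),  # legacy box, never updated
    "reception-3": date(2017, 1, 9),
}

def unpatched(hosts, cutoff):
    """Return hosts whose last patch date is older than the fix."""
    return sorted(h for h, last in hosts.items() if last < cutoff)

# These are the machines an attacker's scan would find first.
print(unpatched(fleet, PATCH_RELEASED))
```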
And this goes really to what we’re looking at now in risk management. So I know we’ll talk about this in a little bit, but the Trump administration executive order on cybersecurity has an overarching theme of risk management. And what is notable in that, it’s the first time that we’ve actually seen a federal policy that’s focused on how do you manage risk the way you would do for an industry. We can’t manage government any differently in cybersecurity than we look at business practices.
And what this also sheds light on is the fact that we now are truly starting to appreciate and understand that cybersecurity is a cultural issue, it’s not an issue that is reserved for IT people, for engineers. If you are holding a phone in your hand, if you work off of a computer, you are in the cybersecurity realm.
Dina and I were talking a little bit before. Workforce is a huge issue in cybersecurity. I get a little frustrated when people talk about how you’ve got to be really focused on cybersecurity workforce for certain people, because we’re all part of the cybersecurity workforce. I mean, you’ve walked into the store around the block, and you’re using, you know, Square and the different technologies they’re using just to swipe a credit card. And so we have to be approaching these issues from the risk management perspective and be able to identify, first, the low-hanging fruit, but then truly understand the analysis for each enterprise and the safety and the security of the enterprises.
TEMPLE-RASTON: In the end, is the riskiest thing about cybersecurity carbon units, meaning us?
TODT: Absolutely. I mean, so we talk about the pyramid. You know, it’s people, processes, and technology. Oftentimes, we think technology is going to prevent and make us most secure. And you still see company CEOs going and saying give me the technology that’s going to make sure I’m never breached. If that’s the approach you’re taking, and we’ll talk a little bit more later, but if that’s the approach you’re taking, then it’s almost like the Titanic mentality. You will sink. And if you don’t have the right number of lifeboats, you’re going to be in a lot of trouble.
The challenge with cybersecurity is you have to accept that a breach is not necessarily a failure of what you’re doing, it’s about managing and making that breach most contained and ensuring that your operations aren’t disrupted, and you get those operations up and running as soon as possible.
Just one quick thing, when you asked about ransomware, I think it’s important obviously to be up to date with what’s happening. Ransomware, though, is a flavor of the month. I mean, we’re going to have ransomware, we’re going to have another type of attack. At this point, it’s looking at how you monetize these, but it’s also just understanding that you can’t always be chasing the current event; you actually have to have a much more comprehensive approach that really looks at what the cause is, not one that is always addressing the symptoms.
GHOSH: Just a couple more points on WannaCry because it’s such a fascinating attack. And I bet everyone here has heard of WannaCry, right, which says something about its media coverage as well.
TEMPLE-RASTON: How many people heard about WannaCry? OK, so 85 percent.
GHOSH: Right. So a couple of interesting things about WannaCry. It’s a little bit of a throwback. If you remember in the early 2000s, there were worms, like Code Red and Slammer, right? So WannaCry, unlike other ransomware-based attacks which required a human to click on a link or open an attachment, it would just find these vulnerable machines that were on the internet and go spread through them, right, from one machine to the next, but it would hold them hostage, right? It would hold them hostage for a ransom payment.
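The worm dynamic Ghosh describes, each infected machine probing for vulnerable peers rather than waiting for a human to click, can be illustrated with a toy simulation. There is no exploit code here, only the growth dynamic; every parameter (host count, patch rate, scan rate) is made up.

```python
import random

# Toy simulation of worm-style spread: each infected machine probes a
# few random peers per round, and any unpatched peer it reaches becomes
# infected too. No exploit involved, just the growth dynamic; all
# parameters are invented for illustration.

def simulate(n_hosts=10_000, patched_frac=0.3, scans=5, rounds=8, seed=1):
    rng = random.Random(seed)
    patched = set(rng.sample(range(n_hosts), int(n_hosts * patched_frac)))
    # Start from the first unpatched host.
    infected = {next(h for h in range(n_hosts) if h not in patched)}
    for _ in range(rounds):
        new = set()
        for _ in infected:                       # each infected host scans
            for target in rng.sample(range(n_hosts), scans):
                if target not in patched:
                    new.add(target)
        infected |= new
    return len(infected)

print(simulate())  # infected hosts after 8 rounds, out of 10,000
```

With multiplicative growth like this, the population of unpatched machines saturates within a handful of rounds, which is why WannaCry spread worldwide in hours rather than weeks.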
Another interesting thing which I think we’ll touch on is this question of vulnerability equities, right, vulnerabilities the U.S. government discovers, right, and holds close versus notifying industry, like Microsoft, right? Now, the U.S. government’s hand was forced in this particular case because of the theft of a cache of vulnerabilities by a shadowy group called the Shadow Brokers, right, who ultimately released it, and Microsoft released the patch, and it didn’t get patched on a lot of systems. So I think we’re seeing this trend now of old-school, worm-style, destructive attacks with a monetization component.
And now leading to the third main point, which is IoT. We’re going to see these types of worm-based, destructive attacks in random IP-connected devices like video cameras, as we saw with the Mirai botnet.
TEMPLE-RASTON: So IoT, Internet of Things. Go ahead.
GHOSH: Thank you, yeah, Internet of Things, like the video cameras that monitor the building or your house, which can now be used against other entities for distributed denial of service attacks.
And on the EO, I thought it was interesting, they specifically called out distributed denial of service attacks via botnets. This is considered to be a huge problem. And actually, leading up to the election last November, there was huge concern from the White House that DDoS botnets would take down the internet right prior to voting and cause significant disruption. There were other things—
TEMPLE-RASTON: How many people know what a botnet is? OK, could you explain what a botnet is?
GHOSH: So a botnet is really a collection of machines that have been compromised.
TEMPLE-RASTON: They’re like zombies.
GHOSH: Like zombies that get entered into a network that can then be commanded and controlled by a single entity.
TEMPLE-RASTON: And you usually don’t know your computer has been taken over or is part of a botnet, if it’s done well.
GHOSH: Yeah. So I think WannaCry has all these characteristics and makes for a fascinating case study for almost throwback attacks as well as where things are going in the future.
TEMPLE-RASTON: There was something you brought up. And in the project that I’m working on now, I’m talking to a lot of hackers. And you mentioned that there was this monetization issue, but is this really about the money, or is this about something else?
GHOSH: You know, WannaCry, this is such an interesting point, WannaCry made almost no money for the people that released it. And it’s not that they weren’t trying to make money as much as the feeling was that the code was really inept at monetizing. And so a lot of people, security researchers, conjectured that they were really experimenting and it got loose, like they hadn’t quite matured the software. But because it has a spreading component, once it can see the internet it’s going to go. It’s like having a biosafety level-four virus accidentally get released. I think they hadn’t worked out the monetization infrastructure well enough for WannaCry, but it really shows the potential for how bad these things can get.
TEMPLE-RASTON: In other words, they hadn’t worked out the payment system yet.
GHOSH: They hadn’t worked out the payment systems, they hadn’t worked out the right price point.
By the way, ransomware, just like a lot of business, it’s all about figuring what’s the clearing price by which someone will pay to get their data back, to get their system back. Because your clearing price could be as low as $200 to be worth the trouble to get the key, right, so you’re back in business, particularly if you’re a hospital and your systems have gone down. You pay the $15,000 ransom, you’re back up again, right?
TEMPLE-RASTON: Is that what happened?
SEGAL: Well, that was part of the problem with the code, is they couldn’t actually identify who had paid the ransom or not. (Laughter.)
GHOSH: Small problem.
SEGAL: So the way that the code was spread, they didn’t have enough addresses to actually be able to identify who had paid and who hadn’t. So this was kind of part of the breakdown of a ransomware system, is you have to have a trust that once you’ve paid the money you’re actually going to get your system back. They didn’t have that developed yet. And given how widely it spread, that was, I think, part of the problem.
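Segal’s point about why the scheme broke down can be shown in a few lines. The addresses and victim names below are invented; the point is that with one shared payment address, the operator cannot tell which victim a payment came from, whereas one address per victim restores the mapping.

```python
# Sketch of the payment-attribution problem described above. All
# addresses and victim names are invented for illustration.

# Shared-address scheme: many victims pointed at one address, so an
# incoming payment cannot be traced back to a particular victim.
shared = {"addr-X": ["victim-A", "victim-B", "victim-C"]}

def who_paid_shared(address):
    victims = shared.get(address, [])
    return victims[0] if len(victims) == 1 else None  # ambiguous

# Per-victim scheme: a unique address per victim restores the mapping,
# so the operator knows whose key to release.
per_victim = {"addr-1": "victim-A", "addr-2": "victim-B"}

def who_paid(address):
    return per_victim.get(address)

print(who_paid_shared("addr-X"))  # None: payment can't be attributed
print(who_paid("addr-2"))         # victim-B: safe to release the key
```

This is the "trust" Segal mentions: ransomware only works as a business if victims believe paying gets their data back, and that belief depends on the operator being able to match payments to victims.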
TEMPLE-RASTON: So did everybody get back on, whether they paid the ransom or not?
TODT: I think they did, yeah.
SEGAL: I think they did, but I think also partly because people had actually developed ways to—
TODT: To figure out the path and to—
SEGAL: —to figure it out. Well, there was a, you know—
TEMPLE-RASTON: Wasn’t there a backdoor option here?
TODT: —counter-engineer it.
SEGAL: There was a way to decrypt, yeah.
TODT: I just want to make a point. So the monetization question that you asked, it’s important, but I think I’ll go back to this other point that we can’t get too wrapped up in one symptom of what’s happening. Because as we saw with the election and in general, it’s often about impact. And so when you have capabilities and intent, you’re looking for the greatest impact. And so in certain cases, I think we’ve also got to look at data manipulation, data theft, and data destruction, as well as the monetization. It’s not that one is more important than the other, but I think the point is that we can’t get too siloed into focusing on a specific type of impact, because we’re seeing broad ranges of impact that are boggling our minds.
And, well, I won’t jump too far, but on the elections, you know, we talked about information warfare. But it was data manipulation of voters.
And I know we’re going to talk about the EO, but there was a very specific reason actually why the botnets were called out in the EO, which we can talk about later or talk about now, because it’s a good point.
TEMPLE-RASTON: Let me just back up for one second.
TEMPLE-RASTON: I mean, one of the reasons why the Office of Personnel Management hack was so scary was not just because everybody grabbed documents and background information. Isn’t it also a big concern that that data could be changed and you won’t know about it?
TODT: Absolutely. And it was actually that breach which was the sole driver for the creation of the commission that I served as the executive director on last year because there was this concern of the broad range of impact that one type of action could have. And looking at what could be done to personal records, but then also how that could affect government operations in different ways because you also have the direct impact, but then there are these secondary impacts that you can’t always anticipate.
TEMPLE-RASTON: Well, we’ve been showing leg on the election stuff now for a couple of minutes, so let’s turn to the elections.
And I’ll start with you, Anup, and then feel free, Adam and Kiersten, to jump in.
So Bloomberg reported that 39 of the 50 states were hacked, and voter registration databases, I think, were targeted. Can you talk a little bit about that and sort of bring it into this broader conversation we’re having about the new kind of hacking that we’re seeing now?
GHOSH: Yeah. And this was a late disclosure. At the time last summer, DHS, the White House, and other government agencies were tracking the Russians trying to hack our democracy essentially. A lot of it was done through fake news, which is hacking the electorate, the way we think about things. Some of it was tactical, meaning they were targeting very specific voter election systems, right? And we knew at the time—
TEMPLE-RASTON: Contested areas, was that the idea?
GHOSH: Well, I don’t know if we know, have that level of detail. But at the time last summer, we knew Illinois and Arizona were both targeted. And oftentimes, as you mentioned, they were voter registration databases. And you ask, well, so what, it’s not the voting machines themselves, right?
Well, there’s a lot you can do with the voter registration databases. You can manipulate information on specific voters. You can delete records, right? So when you go to vote and you’re not in the database, you can’t vote, right, which creates long lines and huge disruption, which might mean you can sway an election one way or another on a given district.
A couple of weeks ago, we saw someone arrested, Reality Winner, for leaking a classified document to The Intercept, right? And this was an NSA document on a Russian hack of a voting software company, right? Now, we don’t know, you know, it’s not been confirmed whether this was fact or fiction. But let’s say the Russians did hack a voting software company, well, what do you do with those bits then? Well, you can now exploit and actually change votes, right?
So things that we do know are that 39 of the 50 states were hacked, or hacks were attempted.
TEMPLE-RASTON: In some way.
GHOSH: In some way during this whole election process. The U.S. government knew this. The Trump administration transition team knew this. We got through the election. I think we all know now that Wikileaks was a key part of the information operations, right, of finding information, potentially even changing information, and leaking it, right? So this is all, and, Adam, you mentioned earlier, none of this is new as far as the Russians go. I mean, they’ve been practicing this for decades. It’s how they’re doing it now: it’s putting together social networks, right, with hacking and influence operations to reach and impact a desired end goal.
SEGAL: The other thing about the revelations in the Bloomberg stories, if they turn out to be true, is that we also had a narrative about the Obama administration response, which is that, you know, the Obama administration saw the hacking early in the summer, and then they didn’t officially—we didn’t make an official designation until October, right, which was when the DNI and the DHS officially said the Russians were behind it. And we don’t have the sanctions until January.
And President Obama himself argued, well, you know, one of the reasons why we didn’t respond more aggressively is because I warned President Putin personally at this G-20 meeting that if he tried to do anything more than these influence operations, then they will have a real problem. And he said it worked, right? They didn’t try to attack the electoral system, they didn’t try to attack our election infrastructure.
So, if the Bloomberg story turns out to be true, then that narrative falls apart, right, there was no deterrence in that message, whatever was sent. Whatever President Obama said to President Putin, other than cut it out, did not seem to work.
TEMPLE-RASTON: Wasn’t enough, right, right.
TODT: So I’ll add to that now the consequences, right? So when we started to hear about what was going on last summer and then in October, there was this immediate discussion, and we had this discussion on the commission, OK, so does this mean that elections should be part of the critical infrastructure? How are we looking at this? What needs to happen?
And I think what it calls into question and what we have to start being aware of is the environment of interdependencies, and I guess part of the Internet of Things, but this environment of interdependencies that we are now embarking upon. Because I actually—I struggle right now with the definition of critical infrastructure and what it means. Because when people were saying elections need to be part of critical infrastructure, well, what exactly then does that give them? What exactly does being called critical infrastructure give election systems on the part of government?
Government has absolutely fallen short in supporting critical infrastructure in ways that are truly effective. When you look at the fact that the private sector, you know, owns 85 percent of critical infrastructure, when we look at cyber as a domain, cyber is the only domain where we’re asking private companies to defend themselves. And so, in a world right now of Internet of Things, of all these interdependencies, the issue isn’t so much about the label or how we look at it. The issue is how government needs to be understanding those systems.
And we now are just looking at elections as a new thought of critical infrastructure. Rest assured, there are going to be others that we’re not even thinking about today because of how these interdependencies are coming into play. And so we have to understand how government and industry need to be working together.
And public-private partnership, we were just talking, is, you know, it’s an overused term. And when we talk about information sharing, because there was this discussion around that for elections, information sharing is often a destination that we’ve talked about between government and industry. But information sharing is actually a byproduct of trust, and the only way that you build that trust is through pre-event collaboration.
Government does incident response really well. We’ve got crisis management, crisis response down pretty effectively. Our challenge is, how do you actually work together to ensure that you’ve trained, you’ve exercised, you’re looking at these issues, scenario planning so that when the event actually happens, government and industry are already working together? And we don’t have that effectively. And I think for this issue, it’s going to call into question those things.
TEMPLE-RASTON: But isn’t one of the problems with that, though, or what has stood in the way of that actually happening is that companies don’t really want people to know they were hacked?
TODT: So I think that actually there was a time when the FBI coming on your doorstep to help you was not actually a good thing because there was no boundary. So what happened was companies would get breached, the FBI would come in, and then all of a sudden they saw their stuff on the front page. I think those issues, again, are symptomatic. It’s not that they don’t want to know that they’re hacked.
One of the biggest arguments is, why am I going to share this with government because I don’t know what I’m getting in return? And there is a value to government information. Government knows nation-state activity across the board better than any one company, and so there is this value. The challenge right now is that those bounds haven’t been actually defined.
And I’ll just make one other point, because oftentimes when we talked about information sharing, companies would come to us and say we’ve got to work on the classification system, the classification system is really—it’s just holding us hostage. The classification system, I believe, is an excuse not to truly be organized in how government understands information. Because I have, you know, friends who are in industry who then got, you know, sort of the curtains opened and they’re talking to Oz and they’re saying, well, we already know all this stuff. And that’s the challenge, is we actually haven’t created the right structure for how to share information and where the incentives, the liability protections, and all of that lie. And that can happen, but it requires a proactive approach.
GHOSH: And building on that, it’s interesting. You’re right, people used to always talk about, well, why would I admit that I got breached? Bad for shareholder value. There’s a whole parallel industry to the intelligence community that’s developed in the private sector, threat intelligence. You’ll hear of companies working in the dark web. Well, you almost can’t hide anymore. You know, if you have been breached and if the breach is of any consequence, all of these dozens of small little boutique dark web/threat intelligence companies, they know about it, right, and they will tell the FBI. Or your banks will know about it based on transactions that happen. So that seems to be less of an issue today than it was a few years ago just because it’s out there.
And then second, you know, this whole public-private thing, which I agree hasn’t happened because the private sector typically owns and runs the critical infrastructures. And even in the case of the electoral system, the federal government has almost no say-so at all, right? These are all run, not by states, but really by the municipalities, right?
TODT: Which saved us in this election.
GHOSH: Yeah, because they all buy different systems, right? So it’s kind of hard to hack them all.
But I do think there’s less of a need now than there was before for the private sector to go hat-in-hand to the government and say, hey, please share with me your intelligence. We were never getting it in the first place. It was always reverse sharing, right? Hey, tell us what you see going on.
I think the emergence of the threat intelligence sector has really kind of given the private sector the ability to do things on its own up and to the line of I can’t defend power grids, I can’t defend the entire internet. I might be able to defend a silo. This is really where you need true private-sector-to-private-sector collaboration as well as private-to-public-sector collaboration.
TEMPLE-RASTON: And there’s no liability now, right? If you’re Microsoft or you’re some company and you’re breached and a lot of people who use your software are hurt by this, you’re not liable for that, right?
TODT: Well, the Wyndham case was actually the first case a couple of years ago where Wyndham resorts wasn’t taking precaution. And they had a bunch of their customer information stolen, and the courts actually found that they were responsible. But to your point, the bounds on this and the precedent is still very much being formed.
One of the things that came up in the Commission: then-Secretary of Commerce Penny Pritzker proposed a concept called reverse Miranda, which is anything that you say won’t be held against you. And this goes to this idea of how government and industry can work together more effectively. And I think that being able to create liability protections and incentives, but also penalties, in a structured way has to happen, but it’s not easy to do.
SEGAL: But you could also imagine that software liability will change, especially as we move to the Internet of Things. Because Tesla is running as much software as Microsoft is, basically, and if there’s a flaw in a Tesla and somebody hacks it and causes a death, you can imagine that that would create some types of liability. And then the manufacturers of IoT devices are going to turn around and say, well, why are we liable and Microsoft isn’t? So you can imagine that over the next several years that will change.
GHOSH: And the insurance industry will have to adapt. Because when we get fully autonomous vehicles and I’m not behind the wheel and there is an accident, I’m going to say, hey, it wasn’t my fault. Whose fault was it? Was it Tesla’s? Was it something else?
TODT: And then the example of the driverless car goes to the interdependency issue. Because what we talked about last spring with the Commission is this idea of life-affecting devices, like pacemakers and driverless cars, need to have basic standards. And when we saw the automobile industry say yes, we agree, give us some standards around this, then Dyn/Mirai happened and all of a sudden you saw, but wait, the baby monitor can actually connect to that life-affecting device. And so we have to be looking much more comprehensively at IoT standards across all devices. And how do you create and build security into the design of these devices which are proliferating at—
TEMPLE-RASTON: Well, anybody who was watching “Homeland” knew that the pacemaker could be a problem because they hacked into that.
GHOSH: Right. And, by the way, back to monetization: St. Jude Medical, whose pacemaker device was found to be vulnerable—there was a firm that shorted the stock with the knowledge that they had the vulnerabilities in hand, and apparently that’s been found to be legal. So another way of monetizing is finding critical vulnerabilities in devices, informing the manufacturer, seeing nothing happen—well, now as a threat researcher you say, I’m going to go to a major trader, a hedge fund, and short your stock and put out a press release.
TODT: I believe in human nature more than that. (Laughter.)
TEMPLE-RASTON: Well, I mean, it’ll focus minds in the C suite for sure.
TODT: He’s writing the next episode of “Homeland.”
TEMPLE-RASTON: Clearly, OK.
So now I wanted to open this up to questions, for our members to join the conversation with Anup and Kiersten and Adam.
And I just want to remind you that this meeting is on the record. And wait for the microphone, and speak directly into it. And if you could keep your questions short so we could go to as many members as possible, that would be lovely.
Yes, there in the grey. Would you introduce yourself?
Q: Yes. Ginger Turner from Swiss Reinsurance Company.
My question is about the French presidential elections. My perception from the media coverage was that there were some attacks that were successfully thwarted, but I’m curious about the panelists’ opinion. Was that luck, or was that a really successful proactive strategy, particularly from Macron’s team? And is there anything we can learn from that?
GHOSH: There is. Do you want to take the first swing at it, Adam?
SEGAL: I mean, they had the advantage of coming after us, right? So they saw a lot of the things that happened. There was some of what’s called active defense—there seemed to be some planting of fake data in the databases, or some fake emails, to distract the hackers. I don’t know how successful that was compared to the fact that you can’t report on the French election for 44 hours before the election; the dump happened, you know, the Friday beforehand, and the French press couldn’t report on it. And the French public knew about what had happened in the United States, so I think the ground was much less prepared than it had been in the United States.
But we see, you know, the Germans are preparing for it, the Brits are preparing for it. The Canadians just released a report last week saying they’re expecting it for the next election. So I think everyone is thinking about it moving forward. I think you’re going to see a lot more active defense kind of measures, so people both raising their own kind of defensive capabilities and also thinking about, you know, fake information or distracting the attackers there.
GHOSH: It was interesting—a great question, too—because not only did Macron’s team deliberately plant fake emails within the entire stream. And the key, of course, is to have a cheat sheet that says these ones were fake, so when they’re released you know that you were hacked, and then you can go to the media and say, oh, by the way, these ones were hacked and they’re fake.
But also, to your point, credit either the election rules or the French media, who refused to report on what data was dumped until after the election. And I think we could all learn from that in this country. The media is being used, right, for someone else’s purpose. And, of course—monetization—the media is always trying to get more eyes, right? And the more sensational the news, the better, and the more eyes you’ve got. So I think there are lessons learned there.
The active defense stuff was really interesting. Supposedly, some of the information was planted, too, by the Russians, right? So that goes towards data diddling, which is changing information to serve a purpose and not knowing what’s truth and what’s not.
TEMPLE-RASTON: It’s called data diddling?
GHOSH: Well, that’s what we used to call it. (Laughter.)
TEMPLE-RASTON: OK. So I had a little extra question on data diddling, actually. When you talk about voter databases and registration, what would you change in terms of information? Maybe an address, so that someone can’t prove they are who they are? What would change within a voter roll to frustrate you, aside from not being able to find yourself at your polling station?
TODT: You mean, other than—I mean, I don’t understand. So as a voter, how would you be frustrated?
TEMPLE-RASTON: Well, in other words, they went into these voter rolls in 39 of the 50 states, right? And there was a possibility that they changed information. So what would they have changed? An address, so that your driver’s license doesn’t match?
TODT: Well, I mean, it also depends on the process, right? I mean, this is where the localized system helped us. Because if you were to have a regulated system, it’s just like boarding an airplane now: if the name on your license doesn’t match the name on your ticket, you’re not getting on the plane. So I think it depends upon what the process is, but certainly it’s the idea that you can’t vote.
I think the other element I heard was that, in electronic voting, they were able to get into the system and cancel out votes.
TEMPLE-RASTON: I see.
TODT: And so it’s all different places in the system, but the idea is that you can take someone’s vote away without them knowing about it, or you can make it difficult for them to vote. And I think the former would obviously be more effective because, just like data manipulation, it takes a lot longer, then, to figure it out. If I go and I find out I can’t vote, I’m going to say, why can’t I vote? If you take my vote away, but I don’t know it, then just like any type of data manipulation it’s going to take a long time for me to figure that out, if ever.
GHOSH: I mean, if you think about it, most of this country is already decided when it comes to elections; there are, like, 13 states that matter. And then you keep winnowing it down, and there’s a few states that matter and a few districts that matter in those states. And it’s all about voter turnout, right? So typically the game is about suppressing voter turnout if you’re the party who’s going to lose, or trying to get your voters out. If you can frustrate voters when they show up to the polls—if they can’t vote because they’ve been deleted from the registration database—they may not wait two hours to vote. They probably won’t, right? So I think that’s the kind of calculus that’s being planned for 2020: what are the key districts where you’re likely to see hacking to frustrate voter turnout?
TEMPLE-RASTON: Got it.
The gentleman in the back in the white shirt.
Q: David Gruppo.
Just to that point, I vote in Westchester. And I’m a registered Democrat and I’ve voted in the same place for many years. When I went to vote in the primary this year, my party affiliation no longer showed and I was not able to vote in the primary.
Does that strike you as something which should raise your attention? Or there could be a thousand reasons why that would happen and it’s not a big deal?
GHOSH: Yeah, I mean, you can never underestimate the incompetence of local government, right? (Laughter.) Many times, it’s a squirrel that brings down a power system, more often than a cyberattack, so we try not to be too quick to jump to conclusions.
TEMPLE-RASTON: That’s not a hacker term, right? You mean a real live—
GHOSH: That’s not a hacker term. It is a literal squirrel. (Laughter.)
TODT: A real live squirrel. But also remember, we’re talking about international elections—remember that we have tools in the United States that can be used as well. So, I mean, while we’re talking about what’s happening in France, what’s happening in Russia, our own party system is fully aware of those issues also. And that is why creating the right resiliency on all parts becomes really important across the board.
TEMPLE-RASTON: This young lady in the tan and black.
Q: My name is Bhakti Mirchandani. I work at a hedge fund. Thank you for this insightful discussion.
If you were to think about international norms for a response for both these sub-use-of-force situations, like election hacking, which is significant, but not use of force, and also use of force, what would those norms be in terms of response?
SEGAL: So there is a process at the U.N., the Group of Governmental Experts, that’s trying to develop norms, mainly for the use of force or armed attack. In 2015, 20 countries, including the U.S. and China and Russia, agreed to four norms: a norm of state responsibility; a norm of coming to the assistance of another state that’s being attacked; not interfering with or disrupting critical infrastructure during peacetime; and not attacking another country’s computer emergency-response team.
Those are—you know, that’s a pretty low bar, not attacking another country’s infrastructure in peacetime. You’re not supposed to attack countries in peacetime anyway. And the U.S. has consistently argued that the law of armed conflict applies in cyberspace. You don’t always know for sure when a use of force or an armed attack happens in the real world, the kinetic world, either, so that’s going to be true for cyber as well. Some attacks are going to be very obvious: something that causes death and destruction, you could easily say, is an armed attack. Something like the stock market going down for two days may not be, but for five days may be, right? Those are all going to be political decisions.
I think it’s very unlikely we will ever get norms on political influence or interference. You could, under international law, talk about noninterference or sovereignty. But given the different conceptions between the West and the authoritarian states more broadly, I don’t think that is very likely.
When I was—I was in China in December, and on the U.S. side we were all complaining about the Russian hacking. And the Russian—excuse me—the Chinese counterparts had this big smile on their face. They’re, like, great; we would love to talk about political hacking. You guys support Falun Gong. You support internet freedom and other types of things that we basically see as the same thing as interference in our political affairs. So I don’t see how we have those discussions about what interference means.
TEMPLE-RASTON: The young lady in the back with the glasses on her head.
Q: Thank you very much. Leah Pedersen Thomas, VitalPet.
My question is in regards to sort of more positive hackers, if you will, those trying to protect consumer rights and others. And what comes to mind is Anonymous and how they may be changing as more nefarious players come on line. And will there be a place for hackers who are trying to sort of effect positive social change?
TEMPLE-RASTON: Do you all agree that Anonymous is effecting positive change to begin with? Is that—
SEGAL: I don’t think so, no.
TEMPLE-RASTON: Is that no across the board?
GHOSH: Not Anonymous, but certainly other—
TEMPLE-RASTON: No. OK, yeah. So let’s—maybe we should use a different example than Anonymous as a benevolent hacker, and—
TODT: Hacktivism, and it comes in different formats.
TEMPLE-RASTON: Yeah. Yeah.
TODT: I mean, I think you have to be careful: even if the impact is good, if you’re working outside the bounds of the law, it creates chaos. And that’s the challenge with our digital infrastructure—it allows so much freedom without consequences. And so we have to create the laws. I mean, there’s an interesting discussion going on right now, and it’s not necessarily new: How does the law keep up with the technology? We have to ensure that our legal system, our justice system, is evolving to align its laws with what’s happening in the cyber realm.
GHOSH: Yeah. And just a thought. I mean, if you’ve seen Mr. Robot, you know, that’s the endgame of something like an Anonymous. And it’s bad—taking down infrastructure for a social purpose, right?
TEMPLE-RASTON: But they were taking down a company called the Evil Corp.
GHOSH: The Evil Corp. Yeah, it sounds like a good line. But so I think that we should understand technology can be used for good and for bad, right. And just like we saw fake news propagate through Facebook and Twitter to affect people’s way of thinking, there’s no reason why people can’t use other tools or the same tools to spread a different message, right, and to spread awareness to people who are ignorant.
So I like to think, sure, hacking in general isn’t bad. It’s come to mean bad things. But you can certainly hack for good as long as you’re not breaking laws, like breaking into other people’s systems without authorization. But certainly using tools to spread a good message, whatever that message is, I think, is a positive side of technology that we should explore more.
SEGAL: What I think Anup mentioned is getting the incentives right. So the earlier example that Anup gave: you know, I’m a hacker, I notice a vulnerability in a pacemaker, I tell the company about it, they don’t do anything about it—what do I do next? Right. So now there’s a whole growth industry for bug bounties, where I provide financial incentives for you to discover that vulnerability and to report it to me. And we see that now the U.S. government is doing it. The Pentagon is doing it.
And we also have to think about some of the laws, in particular the Computer Fraud and Abuse Act. How are they written? Do they punish those kinds of white-hat hackers and other security researchers? Do they limit them, or can we write those laws more broadly to give them the freedom to do the hacking they sometimes need to do to discover those security flaws?
TODT: I think there’s also a problem. In one of the projects that I’m working on now, we’re talking to hackers. And there’s not a really good way to channel a good hacker these days in the United States. There’s not a normal path for a good hacker. So they tend to sort of go to the dark side because that’s where their skills are appreciated.
And in Israel, one of the things they do is they find hackers very, very young, in middle school and high school, and they channel them into their tech units, so that those hacking sensibilities are channeled in a positive way, as opposed to letting them go to the dark web. And they have far fewer black-hat hackers there than we do here.
GHOSH: And a great industry as well.
TEMPLE-RASTON: And a very good tech industry. You could argue about where some of the stuff goes to, but the tech industry.
Yes, in front here.
Q: Yeah. I’m Raghida Dergham of Al-Hayat and Beirut Institute.
And I have a question for you, Adam. You said that the Iranian hackers are—their capabilities are rising. Are they more governmental or just, you know, sort of freelancers? And why did you say that it’s hard to deter them?
And for you I have a question, please, about U.S. governmental capabilities. It is said in other parts of the world that, you know, we in the U.S. are the best hackers, and we don’t talk about it. So is that correct? Do you want to just put this to rest? It’s just an accusation.
TEMPLE-RASTON: Here and now, put it—
Q: There’s two—one on Iran, one on the U.S.
SEGAL: So what we know about the Iranian hackers is that there are state hackers and then there are proxy hackers, and then they overlap sometimes. If you look at the indictment, seven Iranian hackers were indicted for distributed-denial-of-service attacks on U.S. banks in the summer of 2011 and for hacking a dam actually out in Rye Brook in Westchester.
Those hackers—two of them—worked for Iranian companies that provided services to the Iranian government. So with most states, except for the liberal democracies, there seems to be a huge amount of use of proxies—criminals or hacktivists or political actors—who seem to go back and forth inside of the government’s orbit, depending upon what they need and how they’re used. Not deterrable in the sense that, to go back to the earlier question, for everything under the use of force, nobody seems particularly deterrable in this space.
And so, so far, the Iranians and the North Koreans and everybody else have managed to stay under this line of creating an attack that causes physical destruction or widespread economic dislocation, and none of that seems deterrable. And the North Koreans and the Iranians clearly see this as a useful tool of statecraft, and I can’t see that there are any costs right now for them to continue to use it.
TODT: And then the question you were asking me, it was are the U.S. hackers the best?
Q: The impression is that the government is involved in hacking. They are accused of—
TODT: Oh, that the U.S. government is involved in hacking.
Q: We say the Russians are hacking—not individuals, as Putin said. We think the government, you know, is behind the Russian hacking. There are accusations that there is U.S. governmental involvement in hacking other parts of the world. I just want to know—
TEMPLE-RASTON: So is the U.S. offensive hacking the best in the world?
TODT: So I’d love to claim some great classification. I mean, we did have Saudi Aramco, where we—or, I mean, sorry, not Saudi Aramco—Stuxnet. (Laughs.)
TEMPLE-RASTON: They’re right next to each other.
TODT: There we go. (Laughter.) Off the record—(laughter)—where we did see that the government was engaged in that. And I think—I think—
TEMPLE-RASTON: Although the government has never officially said that it was behind Stuxnet, with the Israelis.
TODT: Right. Yes. But—
SEGAL: Well, and we have the—we have the Snowden documents. The U.S., according to the Snowden documents, was involved in, you know, numerous, numerous offensive intelligence-gathering operations.
TODT: And I think the answer—so this again goes to the world in which we live. As a government, we were always looking at how to acquire data and intelligence, and then how to use that for offensive capabilities. You know, early on we had different capabilities to do that. Now cyber is a tool—to what Anup was saying earlier, it’s a tool. So I would think that anybody would be naïve to think that any government isn’t using cyber in its arsenal of tools to help make itself more secure, whatever that is.
TEMPLE-RASTON: But the distinction that’s drawn—James Clapper drew this distinction—is that the OPM hack was, hey, good on you. If you were able to get in there, that’s espionage, and that’s what we expect, and we should have been better at it.
But when you get to something like Stuxnet, you’re actually destroying something physical. And that’s a distinction that we’re only now going to have to start making. It’s very different grabbing information for espionage purposes versus destroying centrifuges.
TODT: Well, I mean, I think that’s an interesting line, because it goes to what Adam was talking about with international norms: you’re affecting critical infrastructure. So where are we drawing the lines about consequence, impact, effect? And again, we’re trying to draw these lines around technology, and they don’t actually exist. It is very much a tool that is, you know, running in the wild and being used for all sorts of reasons. Which is why, if we try to make this a binary, black-and-white definition all the time, we are going to—again, the Titanic—we’re going to sink every time.
We have to be thinking about this from a much more resilient perspective and frame, and also understand that if you’re doing what you’re supposed to be doing—and, you know, that’s another discussion—then the breach isn’t the failure. The failure is if you actually aren’t prepared for it. That’s when your enterprise is truly failing. And how do you do that? Then, as a government, and as governments we’re responding to, we have to assume that that tool is being used against us as well.
GHOSH: Just on the topic of deterrence—I think it’s a fascinating topic—you can’t have a deterrent effect if there are no consequences for your hacking actions, right? And I think one of the things the U.S. government is trying to work out is: What is the level of escalation in a cyberattack that we can actually go to? Probably the most relevant example was the sanctions from the Obama administration in January against Russia, which did include named—I don’t know if they’re indictments, but named conspirators in the Russian government—and included evictions of Russian embassy personnel and confiscating their facilities. So real consequences, right?
TODT: Well, that happened in the PLA case. There were named—
GHOSH: They were named and indicted.
TEMPLE-RASTON: This was a PLA case for hacking that was brought out of Pittsburgh, actually, by Dave Hickton, the U.S. attorney there. There was a great deal of consternation in the Justice Department and the State Department as to whether or not names should be named. And they finally decided, exactly to your point, to name specific PLA officers, bring sanctions against those specific people, and indict them, so that there would be consequences.
GHOSH: Right. So I think the government holds some of these options and is ready, in context, to apply them. And if the actions have no consequence, there’s no deterrence. And I think the policy of the U.S. government is that we’re not going to restrict our response to the cyber domain; we’re willing and able to use diplomatic, economic, or military action in response to any kind of cyberattack. And beyond keeping all options on the table, you need to actually begin to enforce them to have a deterrent effect.
TEMPLE-RASTON: This gentleman in the blue shirt, please, right there at the second table.
Q: Thank you. Mitch Rosenthal.
I really want to thank the panel for such a brilliant presentation and for so effectively raising my anxiety level. (Laughter.) But my question is, what keeps you up individually at night?
TEMPLE-RASTON: We can just start with the global view.
SEGAL: I mean, I have to say I am most worried about attacks that would data-diddle, although I don’t ever use that phrase—Anup just gave it to me. So I’m not worried about destructive attacks, quite honestly, right now. I’m not particularly worried about radically empowered non-state actors or individual hackers. I am worried about an attack that changes data in a major financial institution, or some other institution, that’s designed to undermine trust in it. That, I think, is going to be very easy to do. And increasingly I could see nation-states having the interest and capability to do it.
TEMPLE-RASTON: And do you think blockchain will stop that from happening?
SEGAL: No. Like, you know, with all these things with technology, certain technologies will solve certain technological problems. But there are lots of things that I’m not going to want to put on the blockchain, right? Blockchain—for those who don’t understand it, and I’m one of them—is basically a public ledger that can be encrypted, where everybody can see what’s on there and how transactions are made. But I’m not going to want everything on the blockchain. So to think that blockchain is going to solve all of our cybersecurity problems—it may solve a certain narrow slice of them, but not all of them.
TODT: And I think data manipulation to me is very scary, because you just don’t know. I mean, already, when a malicious actor has been on a system, it’s usually up to a year before they’re identified.
But I also think what is concerning is this continued cultural ideology, or philosophy, that we don’t have to be aware of technology—I’m happy to post a bunch of things on Facebook, but then I’m going to ask you why you’re invading my privacy. We have not moved our thinking as a culture up to where our expectations of technology are. There has to be a real paradigm shift in how we look at all of this and how we understand technology.
And every individual has a responsibility in technology that they have to take on. You know, there are a lot of ways to be secure. So much of what we see—80 percent of it—is pretty low-hanging fruit, from changing default passwords on down. As a nation, we have to be more educated and we have to do a better job, and the government has to do a better job with industry to educate. Because the worst is when something awful happens that was preventable.
GHOSH: So I love to quote Secretary Mattis and say: “Nothing keeps me up at night. I keep people up at night.” But I can’t. (Laughter.)
You know, I think there is a set of checks and balances involved in free markets that deters China or Russia from bringing down our economic system, because it affects them as much as or more than us. So things like global trade are really a positive check against, you know, cyberattack on a broad scale.
The exception to that, of course, is what you see with terrorism, where you have small, small groups or individuals that have asymmetric advantage. And I think unfortunately the complexity of our systems favors the asymmetric advantage, right, so individual actors who can take advantage of vulnerabilities in systems and bring them down.
Having said all that, I still think we’ve got resilient systems to a large degree. And when we see a major attack is when we actually take action to do things like build more resilient systems. So I don’t worry too much, to be honest. I’ve been around this field for a while, and every year is worse than the last. That is true. But I also think that we learn from these attacks, and we can get government to actually begin to take action and say, guys, let’s get our act together. Let’s develop proper standards. Let’s put in place insurance that can help transfer risk. So I’m generally pretty optimistic about where we’re going.
TEMPLE-RASTON: And I would say we should also be optimistic—because I want to end on a happier note—that we’re actually discussing this and that it’s in the consciousness now. And I know there are lots of people who’ve been rattling the cages about cybersecurity for some time. Do you think it’s in the consciousness enough now that something will be done about it?
GHOSH: Well, look, we had President Trump sign an executive order. That says something, right? So I think, yes, we are all more conscious. That helps. The boards of companies are now keenly aware of what’s going on and of the economic impact on their companies, which causes them to push the CEO to take action and invest. So I think there are a lot of positive things there.
Also, in emerging technologies, we’re seeing a renaissance period of machine-learning and artificial-intelligence technologies, which will lead to significant changes in cybersecurity defenses and make them much more scalable than today’s overreliance on humans allows.
TEMPLE-RASTON: Great. Well, it’s 2:00, so I’m sorry to say that we have to end it here. If you can thank our panel. (Applause.)
Anup, Adam, Kiersten, thank you very much. (Applause.)