The Hacked Elections, Online Influence Operations, and the Threat to Democracy symposium, held on December 6, 2017, featured four panels of policymakers, business executives, and other opinion leaders in discussion about the cybersecurity threat to democracies, particularly to the election systems themselves and the subsequent attempts to shape the public debate through mass disinformation and online commentary. This symposium is made possible by the generous support of PricewaterhouseCoopers.
This is the first session of the Hacked Elections, Online Influence Operations, and the Threat to Democracy symposium.
The panelists provide an overview of the complex problems facing election cybersecurity and offer recommendations to protect the integrity of elections from cyber threats.
SEGAL: Good morning. I’m Adam Segal. I direct the Digital and Cyberspace Policy Program here at the Council, and I just want to say a few words of thanks and a few words of welcome.
First, let me thank Marisa Shannon and Laura Bresnahan and Alex Grigsby for helping put this together and making sure that it runs smoothly. We want to thank PwC, which has provided some funding to help us run this and a number of other programs in the Digital and Cyberspace Policy Program.
Like many of the other think tanks, we are doing more and more work in this space, on information operations and Russian information operations in particular. Just this month we published a “Cyber Brief” by Keir Giles of Chatham House giving some policy recommendations to liberal democracies on how they can counter Russian information operations. So if you haven’t seen that, please take a look. It’s available on the website.
Also, last month we rolled out a cyber operations tracker. That’s the website you might have seen when you walked in. It is a list of publicly revealed state cyber operations: espionage, doxing, and more disruptive and destructive attacks. It goes back to 2005. We have approximately 200 known incidents, and the plan is to add more as they happen and as they become known to us. Please check that out. It’s updated every quarter. If you have an incident that we don’t know about, please let us know and we will go ahead and add it.
Please find me today if you have ideas and suggestions about how CFR can be helpful in this space, what we should be doing. If we’re doing something that we shouldn’t be doing or there are things that we should be doing that we’re not, please come and find me.
And thanks for spending the day with us today. I think it’s going to be a great discussion.
FEIST: Good morning, everybody. I’m Sam Feist. I’m the Washington Bureau Chief for CNN.
And we’re going to have a conversation this morning on hacking into our election systems. The next panel, the 10 a.m. panel, is going to focus on how hackers and other mischief-makers are trying to influence public opinion, and how that might change votes. But this panel is going to focus on how mischief-makers might actually try to change votes by hacking into our election system itself. So we’re going to talk about the security of our election system: voter-registration systems and tabulation systems.
And if you think back and think, well, they might be able to hack here or hack there, but can they really impact an election, we can just look to the election that happened last night in Atlanta. If you’re not familiar with what happened, it is as yet undecided. The Democrat is ahead at this moment by 729 votes in one of America’s largest cities. So any mischief could affect any election. In fact, one of our panelists, who happens to be Connie Lawson, the secretary of state of Indiana, told me about a race this morning in her state that was decided by just one vote. So these things do matter.
So I’d like to start by introducing our panel. I mentioned Connie. Connie Lawson is the secretary of state of Indiana, and is currently the president of the National Association of Secretaries of State.
LAWSON: Good morning.
FEIST: To her right is Matt Blaze. He is an associate professor of computer and information science at the University of Pennsylvania, and he recently helped organize the DEF CON voting machine hacking experiment to test the vulnerability of our election voting systems.
And to my right is Michael Sulmeyer, who is the director of the Cyber Security Project at the Harvard Belfer Center. Michael previously worked as the director of plans and operations for cyber policy in the Office of the Secretary of Defense.
So thank you all for joining us. We’ll spend about a half an hour visiting and having a conversation. And then, at about 9:15, I’d like to open it up and have you all ask questions for our panel, and we’ll continue the conversation until about 9:45.
So I wanted to start with Connie. Connie’s responsible for the voting systems in the state of Indiana, but also works with secretaries of state across the country. So would you describe our voting system, in your opinion, as currently safe from hackers and mischief-makers, or are you particularly concerned? Where do you fall on the continuum?
LAWSON: Well, first of all, you know, obviously—and I know people in the audience have heard this before—there’s no evidence that any votes were tampered with in the 2016 election.
I think that election security has always been a priority of secretaries of state. And I think that the email that every chief election official received in August or September of 2016 changed the way we do business. So we’re making cybersecurity in particular a priority. And we’ve done a number of things, working with the Department of Homeland Security and the FBI, to make sure that we get the information we need.
So the number-one activity since the 2016 election for the National Association of Secretaries of State and the secretaries of state has been to improve the communication between the intelligence agencies in the United States and us as chief election officials, so that we can get the information we need in order to prevent, or react quickly to, a cyberattack.
FEIST: So I’m just going to put you on the spot: Are you comfortable at this point—knowing that no system is perfect, but are you comfortable that we have done as much as we can do? Are you comfortable that if there were an election tomorrow in the state of Indiana, it would be safe?
LAWSON: Well, about the time you say you’re comfortable, that’s when you should be worried. So I’m never going to say I’m comfortable about it, but I’m always going to say that I’m going to be very vigilant. But I do believe that we are doing everything we possibly can in Indiana to make sure that our elections are safe.
I am very fortunate. Not every state has the support from their General Assembly. My General Assembly appropriated $1.4 million so that we could make sure that our system is secure. We migrated our data. We’ve done a number of things to secure our outward-facing websites. We certify our voting machines for use in Indiana. We have what we call the Voting System Technical Oversight Program. So we know where every machine, every type of machine, every serial number, every tabulating machine, we know where it is and when it’s in use in the state of Indiana.
So I feel good about what we’re doing. And we’ve been told by DHS and the Multi-State Information Sharing and Analysis Center that we’re doing the right things.
FEIST: Matt, do you feel good?
BLAZE: Well, you know, I mean, I feel good that Connie is doing the best that can be done, but what I worry is that the best that can be done is almost certainly either not good enough today, or the honeymoon is going to end very, very quickly.
So, a little bit of background from my perspective. I’m a computer scientist, a technologist. In 2007, I led teams that were contracted by the states of California and Ohio to do a top-to-bottom review of their election system technology, including the voting systems and the backend systems from the vendors used in those states, which turn out to be the same vendors used in the other 48 states. And what we discovered in 2007 was that these systems were riddled from top to bottom with exploitable security vulnerabilities in virtually every component. Some of those vulnerabilities were coding errors, bugs in the programs that could be fixed. Some were more architectural, particularly in the so-called DRE systems: direct-recording electronic voting systems, the touchscreen voting machines that record voter selections electronically in their internal memory, and the systems that process those.
Interestingly, we know that those systems can be exploited, in many cases with no more physical access than you would need as a voter or as a poll worker at a precinct. But there’s been no evidence that they’ve actually been exploited in any election. So we have to walk a fine line between saying, look, this technology very desperately needs to be improved, and falsely telling people that our elections are illegitimate. I don’t want to say that our elections are illegitimate, but I don’t know how to prove that they aren’t, because in some cases the technology that we’re using doesn’t really tell us. And that concerns me greatly.
FEIST: I want to bring Michael in. What do you think the biggest vulnerability of our voting system is?
SULMEYER: Well, first, thanks to the Council for putting this on and for having us here.
I was at DOD, so I never feel good about anything. (Laughter.) I never feel comfortable.
The challenge is one, it strikes me at least, of risk reduction, not elimination. So you have to set your standard, your objective in some way that’s reasonable. And so you’re always going to have some level of uncertainty here. The challenge and the opportunity is to reduce that risk as much as possible. It’s nice to hear the General Assembly in Indiana wants to help you do that with some appropriation.
The challenge that I really see is that it doesn’t take much to have an effect on the vote count. You don’t actually need nationwide intrusions, right? The risk we’re trying to reduce is someone gaining unauthorized access. If you do that in a couple of key jurisdictions and get the timing right, you can change a count. You can make things a lot more difficult for the folks who are trying to make sure that our elections are conducted with as much integrity as possible. You can really complicate that effort in just a couple of key ways.
So that’s my perspective on it from my experience. It doesn’t take much, but we’ve got to reduce that risk as much as possible.
FEIST: Yeah, Connie.
LAWSON: Well, I just want to make sure that everybody understands that the last election that we questioned was the 2000 election, when we were virtually using paper and punch cards. And so, if you think about the way we do elections today, you know, I’ve been a county clerk, and I’ve been on the ground, and I’ve run elections. I did that for eight years. And I will tell you that there are security measures that our local election administrators take that make it nearly—well, make it very impractical for someone to get to our—to our voting machines.
First of all, these machines are kept under lock and key, and most of them have a visual scanning of the facility. So we know who comes and goes. They use logins. So we know, again, who comes and goes. We do public tests. And once those public tests are run before an election and, you know, we know that the votes are recording properly and that there are no votes that will be present on election day before someone actually comes to vote, those machines are sealed. And when a bipartisan team arrives on election morning, they cut the seal from the machine and they record the number. And one of the first things the election administrators do at night when they get the results from the—from the precinct level or the vote center level is they look to make sure that the serial number on the lock that was cut off the machine is actually the serial number that was placed on the machine after the public test. And so—and a bipartisan team, again, delivers these results.
So is it possible? Yes. Is it practical? I would say no. There are many physical safeguards around these voting machines and tabulation machines that have been in place for years. I mean, we don’t put them out in the middle of the courthouse and say have at it.
FEIST: So I covered the 2000 election and the Florida recount. Those are 35 days I will never get back. (Laughter.) But as a result of the Florida recount, the federal government—and state governments, but mostly the federal government—spent billions of dollars to help replace many of our election machines. The Florida system used paper punch-card ballots that were, as we know, not always easy to read, and that was one of the issues. But we replaced them, to a large extent, with electronic touchscreen machines that didn’t necessarily have paper records at all. They were completely electronic. In fact, in Indiana, what percentage right now are those machines?
LAWSON: We have 92 counties, and 50-plus of them use DREs. However, the DREs that we use do have an audit trail, a paper audit trail, inside the machine. It’s a mirror image of the ballot. It’s not a voter-verifiable paper trail, but there is a paper trail.
FEIST: So did we make—did the Florida debacle make things worse, Matt?
BLAZE: It made them different. It essentially shifted us from a system that was very vulnerable to small-scale retail mishaps to one in which small-scale retail mishaps have probably become less critical since the Help America Vote Act. But we pay for that by exposing ourselves to catastrophic failure in ways that we previously weren’t. Our elections are far more dependent on the integrity of software. And building software we can trust is something that we simply don’t know how to do.
FEIST: So if we had all the money in the world to design our system today, what sort of equipment, machines, and systems would you choose? If you were in charge of voting in the United States of America, how would you have Americans vote to get the safest possible outcome, so that the day after or the week after the election, the losing candidate or anybody else can’t come and question it and say: This vote’s not right?
SULMEYER: I’d hire Matt.
FEIST: OK. (Laughter.)
BLAZE: That’s a fine idea.
SULMEYER: Thank you. Yeah. (Laughter.) The two things I would say are: you’ve got to have paper backups, some way to get an audit trail on every machine, and you’ve got to have a way to turn off the Wi-Fi on these devices. I’m with you on physical access. I don’t have concerns about anybody rolling into the courthouse and having at it. But wireless access through a network is a problem. On some of the machines we looked at in a report called “Hacking Chads,” we found you couldn’t even turn off the wireless. It was not possible to turn it off. That’s a security problem.
FEIST: Connie, if the legislature gave you $20 million, $200 million, what would you buy?
LAWSON: I have no idea. I need the experts. But I could certainly be doing a lot of research. I would say the most important thing is education for our local election officials. In a number of states our governors have set up cybersecurity councils. We have one in Indiana, and we’re working with our local elected officials. We’re running phishing email campaigns so we can educate them on what to notice before they click. We are working on multifactor access so that their passwords are stronger. So those are the things that we’re doing in the state of Indiana. And I think most secretaries are doing that as well.
But I would just say that the very first election I ran as an election administrator was in 1989 in Hendricks County, Indiana. And we used lever machines, which are pretty—they’re probably—there might be one in a state museum now in Indiana. So I’m dating myself. But I will tell you that it wouldn’t make you feel very well if you saw the way those results were taken in. You know, we would get a written total from the precinct. And you have a tally sheet. And I remember sitting on the floor, with this huge tally sheet, and numbers get transposed and you’re adding all these up. I mean, this—it was a disaster. (Laughter.) It really was. I mean, I think we finally ended up with a result that was fair and correct. But 2:00, 3:00 in the morning you’re still working on these paper tallies.
People are not that patient today. I think the worst thing we could do would be to say that we have to go back to all paper. What we need to do is think about how we can make our technology work the way we need it to work.
FEIST: If you were—if you had billions of dollars, what would you do?
BLAZE: So, you know, it’s funny. I’m in the one branch of computer science where most of my time is spent pointing out how terrible computer science is at building reliable things. (Laughter.) And we really are truly terrible at building reliable software systems. It is literally the first problem of computer science: we don’t know how to build programs that don’t have bugs in them. At some point in the future there may be some breakthrough that makes that less of a problem, but it has not yet happened. Arguably this problem is getting worse rather than better as we build larger, more complex systems.
So what’s the solution? Well, the best solution that anyone has come up with for elections is a concept invented by Professor Ron Rivest at MIT called software independence. That is to say, you know, we’re going to use software. It has all sorts of benefits to have computerized election systems. But we don’t want the integrity of the election to depend on the integrity of the software, because that’s simply a Herculean task. So the technology that exists today that has this property of software independence is a combination of two existing things that we can do today.
One is what’s called precinct-counted optical scan ballots, that is ballots where the voter marks a ballot, or maybe uses a ballot-marking device to create a paper optical scan ballot that’s fed into a reader at the polling place that records the selections and keeps a tally, and then captures the physical ballot and stores it in a locked box. And that technology has the advantage that it maintains an artifact of the voter’s choice, that the voter actually marked.
The second thing you need to do is make sure that the software that’s doing the tallying hasn’t been tampered with and doesn’t have bugs in it. And that can be achieved with a technique called risk-limiting audits, where you take a statistical sample of the polling places, do a manual count of the paper ballots, and ensure that it matches the electronically recorded results. If it matches, great. If it doesn’t match, then you know you’ve got a problem and you have to do more recounting. The combination of doing both of those things properly gives you this software-independence property, which eliminates a wide swath of potential vulnerabilities that are really hard to counter in any other way.
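[Editor’s note: The sample-and-compare idea Blaze describes can be sketched as a toy simulation. This is only the skeleton: a real risk-limiting audit uses sequential statistics to bound the chance of certifying a wrong outcome, and the ballot data and function names below are invented for illustration.]

```python
import random

def sample_audit(paper_ballots, reported_tally, sample_size, seed=0):
    """Hand-count a random sample of paper ballots and check whether the
    sample's winner matches the reported (machine-counted) winner.
    A disagreement is the signal to escalate to a fuller hand count."""
    rng = random.Random(seed)  # fixed seed for a reproducible demo
    sample = rng.sample(paper_ballots, sample_size)
    # Tally the sampled paper ballots by hand.
    hand_count = {}
    for ballot in sample:
        hand_count[ballot] = hand_count.get(ballot, 0) + 1
    reported_winner = max(reported_tally, key=reported_tally.get)
    sample_winner = max(hand_count, key=hand_count.get)
    return sample_winner == reported_winner, hand_count

# Invented paper trail for one jurisdiction: 600 ballots for A, 400 for B,
# with machine totals that match the paper record.
ballots = ["A"] * 600 + ["B"] * 400
ok, counts = sample_audit(ballots, {"A": 600, "B": 400}, sample_size=100)
print("sample agrees with reported winner:", ok)
```

If the sample disagrees with the machines, the audit escalates toward a full recount; if it agrees, the statistics of the sample size (not shown here) determine how much confidence the result earns.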
FEIST: But why—I watched the British elections this summer, OK? A reasonably well-developed nation, the United Kingdom. It held an election for parliament this summer. They use paper ballots. They’re tabulated at each constituency. The people who voted for Candidate X, there’s a pile there. There are poll watchers checking. They count, they recount, they write them all down, and then someone stands at a microphone and reads off the results without ever touching a computer. The only ones who seem to add them up are the television networks back in London, where they literally do the arithmetic, but that’s it. What’s wrong with that? Isn’t that foolproof? Why do we have to get all fancy? Seriously?
BLAZE: So, you know, there’s nothing wrong with that. But the United States has the—
FEIST: Are we just impatient?
BLAZE: Well, we are impatient. We’re Americans, and we’re an impatient people. But the more serious problem is that U.S. elections are the most logistically complex in the world, right? We vote on more contests on a single ballot. We have more different ballots. We have school board elections and the dogcatcher election and referenda and, depending on where you are, bond issues, and so on. England is a parliamentary democracy; in those elections they’re generally voting for a single representative, or maybe one or two issues. Here, I vote on about 20 different things in Philadelphia.
FEIST: All right, Michael. You worked for the Department of Defense. The word we have not said on this panel—Adam mentioned it earlier—is “Russian.” But that’s the backdrop for this, at least right now. Do you believe that the Russians—or any other bad actors, but we’ll use the Russians for this—tried to hack our election, want to hack our election, are actively trying to break through all of Matt’s fancy systems? Or is this really a problem we’re overstating?
SULMEYER: Do I believe that foreign intelligence services would love to gain unauthorized access, or hack, into systems that would reveal information? Absolutely. Would foreign intelligence services love to be able to gain access to systems to try to change tallies? I think, in their dreams, they’d love that ability. It’s hard for me to see such a proposal being discussed in the Kremlin and the security services saying, no, no, let’s let that one go.
FEIST: We don’t hack—we’re not trying to hack their elections, are we?
SULMEYER: I think that—who knows. (Laughter.) But the point is that their wanting to achieve these outcomes is predictable and understandable. Getting from intent to actually realizing an objective, that’s the tricky part. It’s not always that a Dr. Evil plan is hatched and then everything falls perfectly into place. It’s usually: let’s see what happens if we try moving some pieces around the chessboard, right? Send a bunch of phishing emails, see who clicks, see what that reveals. Once you’ve gained unauthorized access to one system, what does that open up? A lot of times you have to call audibles at the line of scrimmage, and it doesn’t always work according to a playbook.
FEIST: The voting machines—we’ve spent most of our time so far talking about the voting machines. But I just want, Connie, if you can walk through, for those of us who don’t count votes at a local and state level, just walk me through when if I go into—if I’m voting in Indianapolis, and I’m a voter, I go into a polling station, I push my vote on a machine. What happens between the time I vote and the time that the secretary of state’s website reports the total? Just walk me through the—who is in control of those numbers and how does the vote—the information about that vote move from my finger all the way up the line?
LAWSON: Well, once the vote is cast, obviously it’s up to the election officials—a bipartisan team of election officials at the precinct level in Indianapolis, to bring the results back to the county level. And then the county totals—
FEIST: And how do they do that?
LAWSON: They will do that depending on the type of machine. On a DRE, it would be a recording device. I believe on the optical scan it is as well; I’m not as familiar with those. But they bring those results back.
FEIST: Is it in a key fob or do they write a number down?
LAWSON: It’s in a—it’s some sort of electronic device that they bring back. And then it’s run through a tabulation machine. There’s a machine that reads the device, the USB port or whatever. And then the precincts are totaled together. And then the counties call the results, fax the results, we call the counties—
FEIST: So they get a total—
LAWSON: We don’t—it’s not connected to the state in any way, each county.
FEIST: So a million, 200,000 votes in this county. And somebody—well, that’s a lot, but anyway—
LAWSON: That’s a lot. (Laughter.)
FEIST: Yeah. Anyway, and someone calls your office on a landline?
LAWSON: Yes. Yes. But the results are not final for 10 days, because, remember, folks are able to cast a provisional ballot if, for example, they got to the polling place and forgot their photo ID. We have a photo ID requirement in Indiana. They have 10 days to take their ID to the clerk’s office at the county level, and the provisional ballot is then counted.
FEIST: But then someone on the telephone back in your office hears the votes and they type them into a computer and the computer does the math, and then it gets published on the website.
LAWSON: That’s right.
FEIST: And that’s how the world knows about it.
LAWSON: That’s how the world knows about it. But, again, the counties have the opportunity to do their audits. They make sure that the results are final. And they don’t actually certify the results to the state for 10 days.
FEIST: OK, so, Matt, if Michael’s friends in these foreign intelligence services are trying to make some mischief, what are the points of failure—we talked about the electronic voting machines. But what are the points of failure in any secretary of state’s system, from vote to report?
BLAZE: So I worry less about the secretary of state’s statewide system than I do about the counties. There are roughly 3,000 counties in the United States. About 2,500 of them have responsibility for running elections, which means that we have somewhere in the neighborhood of 2,500 to 3,000 different local election administrators. Some of them are quite good at protecting their systems. Some of them are less good. There’s pretty wide variance among them. And this has nothing to do with intentions or goodwill; it’s simply a matter of very widely different capabilities.
To the extent that our voting systems have been secured—and, you know, we’ve seen horrible—when we look, we see really horrible, exploitable vulnerabilities in them. But to the extent they’ve even been designed against a threat, it’s a threat of conventional corruption. Somebody trying to, you know, get themselves elected mayor or, you know, sell votes, or what have you. Nation-state adversaries were not even in the threat model that these systems have been designed against.
And so when you think about the capabilities of a national intelligence service, like the GRU or, you know, really of any country, they have capabilities that certainly include everything that a corrupt candidate may want to do, but also, you know, they’re going to have additional capabilities. They’re going to potentially do supply-chain attacks, where the equipment that gets shipped may be tampered with even before it’s received. They may do attacks against infrastructure that’s being used. So they have additional resources and capabilities.
But that’s actually not the most serious problem. The most serious problem is they actually have an easier problem to solve than someone who wants to cause a result to go a particular way. A state adversary may be absolutely satisfied with simply disrupting an election, casting doubt on the legitimacy of the result, causing chaos on election day. And that is significantly easier than causing a determined result. So they both have more capability and a wider range of things that may satisfy their goals.
FEIST: OK, so Michael, elections are run by states. They’re run by counties. They’re run in towns. Is this a policy issue for the federal government, in the same way that, after the 2000 recount mess, Congress got involved and billions of dollars were appropriated for new election machines that are now causing another problem? Is this a federal issue? And, if so, what should the federal government be doing to address this before the next presidential election in three years?
SULMEYER: The federalism questions here are thorny. There’s no doubt about it. I think that for the federal government to say federalism is too difficult, so we’re out—good luck to the states and locals—I don’t think the federal government can take a pass.
What I would like to see is some sort of a playbook, right, that the federal government be able to put together for best practices and counsel. My colleagues at Harvard put together a playbook for campaigns. I see no reason why the federal government couldn’t provide an updated playbook for states and local authorities on these issues as well.
FEIST: So late in the 2016 election, we saw President Obama’s Department of Homeland Security get involved in this in a different way. Do you feel like President Trump’s Department of Homeland Security is as interested in this issue, taking a leadership role, continuing Obama’s efforts? How do you see them playing right now?
LAWSON: Well, they’re playing a huge role. The Department of Homeland Security is working with the secretaries of state and other chief election officials on a number of items.
First of all, every chief state election official is going through the process of getting a security clearance so that we can get up to secret, not top-secret but secret, information. For example, I’ve just received my interim clearance. The second tier of that clearance will be our staff, the staff that needs that information. That’s number one.
Number two, we created a Government Coordinating Council so that we can determine what the critical-infrastructure designation means for states. The Government Coordinating Council is a large group: NASED, the National Association of State Election Directors; the secretaries of state; the local election officials; the Election Assistance Commission; DHS; and NIST. All of those agencies are involved.
And we’re talking about all sorts of things, communication for example. There are seven pilot states right now: MS-ISAC, the Multi-State Information Sharing and Analysis Center out of Albany, New York, is giving these states monitors so that they can be monitoring internet activity on our election systems. Not all election systems are on the state system, so it becomes a little more complicated than you might think. We’re doing the seven pilot states, and hopefully by the primary of 2018 every state will have a monitor on their internet activity so that we can be informed of that.
FEIST: Michael, do you have a sense that the Trump administration is making this a priority? Are they doing what they need to do? Or, you know, how would you grade their efforts so far, knowing that we’re still three years away from a presidential election?
SULMEYER: It’s hard to tell on an administration level. I mean, just hearing some of the specific DHS people talk about how they want to be helpful, I think those folks who are in the specific offices that would work on this, they certainly see it as a priority.
The question about how fast they can push through clearances, I’d rather they be able to do a drug deal with the rest of the intelligence community to just declassify certain information and call you rather than work through an entire government process to figure out how to give clearances to everybody. But life’s imperfect.
FEIST: What do you think? How would you grade the federal efforts? And what should the federal government be doing now, while we have time?
BLAZE: Well, you know, first of all, we’re three years away from a presidential election.
FEIST: One year away from a—
BLAZE: We’re 11—right. We’re 11 months away from the midterm elections. So there isn’t, in fact, a lot of time for the next nationally significant election. There are extremely capable people in DHS and NIST. You know, obviously I can’t speak to the administration’s posture on this, but certainly there are very, very capable people who need to be empowered to assist.
FEIST: Do you think they’re empowered?
BLAZE: I have no opinion.
FEIST: OK. On that note, I want to open it up to questions from everybody here. So raise your hand and then wait for a microphone. The microphone will come your way. And then also say your name and your affiliation, please.
Q: Hi. Russell Wald with the Hoover Institution.
I have a policy question that’s probably better for a later panel. But since we have everyone up here, I’m curious. Given the ubiquitous nature of the vulnerabilities, deterrence by defense does not seem really feasible.
So this question is probably more likely for you, Michael. What is a good policy that says to adversaries U.S. elections are sacrosanct and the risk is too high for another nation-state becoming involved?
SULMEYER: The challenge is, if you’re going to leave bags of money on the lawn overnight, and then try to deter and talk tough about, hey, don’t come take that money, and then you’re stunned the next morning when the money’s gone, deterrence is not quite the model there.
I think there actually is a lot that we can do across the board to just bring the money inside, and forget locking the door for a minute. No defense is perfect. I completely agree, right. But before you talk about deterring and imposing costs, I think you do have to look at are we really just talking about not wanting to put more resources into architecture and very unsexy things that would actually make it much harder to hack?
FEIST: In the back. Yes, ma’am.
Q: Great. Thank you. I’m Elmira Bayrasli. I’m the CEO of Foreign Policy Interrupted.
We haven’t talked about what happened—we’ve been talking about the actual voting, but we haven’t talked about what happens before the voting, and particularly Facebook and social media and how people are influenced by—
FEIST: So I’m going to just push pause only because that’s the next panel. So we’ve been—this panel is really focused on voting, voting systems, and such. And the information war is—
Q: But I do have a question for Connie, because I think that this is something that, not only on a Facebook level or social-media level, but what are states doing? Are they looking at this? Is this something that states are getting involved in?
LAWSON: In the information—
Q: In looking—in looking at the influence of how elections are being influenced.
LAWSON: No. I don’t think the states are involved in that. Obviously, we’ve known for years that foreign nations have tried to influence people’s opinions here in the United States regarding candidates and how they should vote. And so, I mean, obviously we do voter outreach. We encourage accessibility of our voter registration. I think Indiana may be the only state in the country that has an app. It’s called WAYEO, Who Are Your Elected Officials? And you can find out, from school board to president, how to contact your elected officials.
So, you know, we’re doing everything that we possibly can for people to get the correct information, but I don’t have control over Facebook or Twitter or someplace like that that puts out the wrong information.
FEIST: Yes, sir, in the third row. Just wait for the microphone. Thank you.
Q: Mike Mosettig.
In addition to security, the other election issue going on at the moment is suppression. To what degree does technology pose risks in terms of wiping people off registration rolls and some of these other things that seem to be going on at the moment? In other words, before people even get to the voting booth.
FEIST: So this is actually—we haven’t talked about—we talked about the voting systems. We haven’t talked about the voter-registration rolls. So let’s talk about that for just a moment.
Either. Go ahead. And how safe are Connie’s systems? I’m sorry, you happen to be here, so you’re the example. But how safe are Connie’s systems, so that when someone walks into the voting booth their name is on the roll and they’re actually allowed to vote, and a mischief-maker hasn’t erased my name before I get there?
BLAZE: So, I mean, you know, this—I have no idea what’s going on in Indiana. You seem great—(laughter)—and I’m sure your systems are terrific.
FEIST: Pick Colorado.
BLAZE: But, you know, this is definitely a point of vulnerability, particularly for a nation state interested in disruption. In many states and many jurisdictions, the poll book at the polling place is an electronic device. Those devices are often—often have security weaknesses in them. If your name isn’t there, you’re, at best, going to be casting a provisional ballot.
FEIST: How often are, for either of you guys, how often is there a paper backup? If I walk in, even though the poll worker, you know, use an iPad to check my name, does he or she have a book under the table in case everything goes haywire? Likely or not likely?
BLAZE: There are 2,500 different answers to that question in different counties.
LAWSON: They should, yes. Yeah. Let me just say that last fall, before the election, we were notified that two IP addresses had been responsible for getting into the Illinois voter registration system and into a small county in Arizona, which had then allowed that IP address to have access to the Arizona Statewide Voter Registration System. So we checked our voter registration system from January 1 until that date.
And so, you know, Indiana has 92 counties. We have 6.7—I see Professor Mike McRobbie is here—we have 6.7 million residents in the state of Indiana, 4.8 million registered voters. We checked 15,500,000 logins into our system. And the reason we had to check that many is because that’s how busy the counties were. They were looking at petition signatures. They were, you know, registering voters, candidates were filing their declarations, all these activities, absentee ballots, everything was going on. So that’s how busy these systems are. And that’s why states are looking at things like multifactor access into the Statewide Voter Registration System.
We are implementing a timeline. So, for an example, if it’s after midnight, maybe it’s just the supervisors of the elections that have access to the Statewide Voter Registration. All those things we’re looking at.
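The kind of log screening Secretary Lawson describes (checking millions of logins against a pair of known-bad IP addresses, and restricting after-hours access) can be sketched in a few lines. This is a hypothetical illustration, not Indiana’s actual system: the log format, the placeholder IP addresses, and the `screen_logins` helper are all assumptions.

```python
# Hypothetical sketch: screening voter-registration login records against
# known-suspicious IP addresses and flagging activity during restricted
# overnight hours. Field names, addresses, and thresholds are invented.
from datetime import datetime

SUSPICIOUS_IPS = {"203.0.113.7", "198.51.100.22"}  # placeholder addresses

def screen_logins(log_entries, restricted_start=0, restricted_end=5):
    """Return logins from suspicious IPs or made during restricted hours.

    Each log entry is assumed to be a (timestamp, ip, username) tuple.
    """
    flagged = []
    for ts, ip, user in log_entries:
        if ip in SUSPICIOUS_IPS:
            flagged.append((ts, ip, user, "suspicious ip"))
        elif restricted_start <= ts.hour < restricted_end:
            flagged.append((ts, ip, user, "after hours"))
    return flagged

logins = [
    (datetime(2016, 6, 1, 14, 30), "192.0.2.10", "county_clerk"),   # normal
    (datetime(2016, 6, 2, 2, 15), "192.0.2.11", "county_clerk"),    # 2:15 a.m.
    (datetime(2016, 6, 3, 9, 0), "203.0.113.7", "unknown"),         # bad IP
]
for ts, ip, user, reason in screen_logins(logins):
    print(ts, ip, user, reason)
```

A production system would of course pull these records from the registration database’s audit log and pair this screening with the multifactor access controls Lawson mentions, rather than rely on IP matching alone.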
FEIST: In Indiana, though, if I were to go vote, before I push that button in Indianapolis, if I walked up and their iPads quit working, is there a paper book that—I keep going back to paper, paper sounds so sophisticated right now.
LAWSON: Yes. There is a—they would have a poll list, yes.
BLAZE: So, you know, I’d just like to add one thing to that. You know, data breaches on a large scale are literally daily events, right? They don’t even get reported unless they’re on the scale of Equifax or the Office of Personnel Management.
If we have not yet seen a large-scale data breach of voter registration databases, it’s because no one has seriously tried. You know, the individual states and counties, whatever their best efforts are, are going to be no better than the Office of Personnel Management or Equifax or any of the long list of equivalently complex systems that have been, you know, catastrophically breached.
And, you know, we may be in a honeymoon where it hasn’t happened yet, but it’s only a matter of time.
FEIST: On that comforting note, let’s take another question.
Yes, ma’am, right here in the second row. Just wait for the microphone, please, and state your name and affiliation. And here’s your microphone.
Q: Oh, here we are. Hi. Can you tell me how many states have that commendable independent system that you talked about or how many are working on it? And conversely, which states, in your experience, are the most vulnerable?
BLAZE: So there are a few states that are using exclusively precinct-counted optical scan plus risk-limiting audits. Virginia just decertified all their DRE machines. They have risk-limiting audits, but I learned recently that those actually happen after the certification period rather than before, so there are some adjustments that need to be made. Colorado has made significant headway there. So there are a few states that are starting to pick up on this, but they’re the exception rather than the rule.
LAWSON: But I will say that NASS, the National Association of Secretaries of State, has a winter meeting, and we’ll be talking about risk-limiting audits. And we’ve got the Belfer Center coming to talk to us about tabletop exercises and incident response and cyber. I mean, all of those things are on the table. It’s not like anybody is ignoring those. So I want you to know that those have been a definite priority of chief election officials prior to 2016, but it’s been a heightened priority since that time.
FEIST: Michael, when you go to an event like that, the Belfer Center, for example, what is it that—what’s the most important thing you’re telling the secretaries of state? What are you saying, either that you have to do or you’re trying to scare the daylights out of them? I mean, what’s the message?
SULMEYER: Well, fear is always a great motivator for pretty much anything. But I think in this case, there’s enough awareness, you know, among the secretaries of state you don’t need to really do that. You’ve got to give news you can use. I think everybody now is geared up to the reality of what’s at stake. So the most helpful thing that I’ve seen my colleagues bring to the table is, again, something like a playbook, something like an implementable, actionable best practice that the secretaries and other colleagues can use.
FEIST: Yes, sir, second row. Wait for the microphone, name and affiliation.
Q: Good morning. Adam Ghetti with Ionic Security.
I’ve spent the better part of the last six years of my life doing nothing but data security and data integrity for some very highly targeted clients, including the federal government. One thing we’ve been discussing all morning is the security of the voting systems and the voting process. But the intent should be that the constituencies of our democracy trust the results. Ultimately, that’s the goal, right?
So, while there’s a lot that can be done and is being done to ensure the security of the system and the security of the process, what I haven’t heard discussed this morning is this: Is there a way to have a common, private, voter-verifiable and reconstructable audit overlay on top of the results, one that can ensure the trust and integrity of the outcome, such that a lack of integrity anywhere else doesn’t necessarily produce the outcome a nation-state adversary might be seeking, à la voter disruption or changing of a candidate? The audit is independently verifiable.
And I know, Matt, really, particularly in your case, I didn’t hear you mention anything about secure multiparty computational overlays or oblivious pseudorandom function overlays to do some of these things. So I’m interested as to why.
FEIST: Just do it in English. (Laughter.)
BLAZE: Yeah. So, you know, the basic problem is that the systems that do that are extraordinarily complex, and they essentially make our elections more dependent on the integrity of underlying software systems, particularly as part of the vote-casting process. So you may actually have the effect, when you look at the overall usability, of decreasing confidence in our elections rather than increasing it.
The other problem is it’s a really heavily over-constrained problem. We have—you know, we want elections to have transparency and we also want them to have a secret ballot. We want it to be impossible for somebody to learn how someone else voted. We also want it to be impossible to prove how you voted, right, because, you know, we don’t want people to be able to be coerced into revealing that.
That’s a pretty difficult set of things to achieve. Instead, you know, I think we have to rely on making systems simple and publicly auditable, you know, with processes that include, you know, the chain of custody of the ballot, you know, a public ritual where we do the risk-limiting audits and so forth. And in practice, that’s likely to do much better than any fancy cryptography. And I say that as a fancy cryptographer.
FEIST: Matt, just—you’ve used the term risk-limiting audit a few times. So just for those of us who don’t live and breathe this stuff, just explain that a little bit.
BLAZE: So the basic idea is that you—once you’ve captured the paper ballots and electronically counted them, you want to make sure that the software that counted those ballots hasn’t been tampered with or doesn’t have bugs in it that’s reporting an incorrect result.
So you sample the precincts and the various races, do a manual recount of a statistically significant sample in each race, and verify that what you hand-count matches what you’ve electronically reported.
FEIST: Do it every time.
BLAZE: And you do that every time.
FEIST: And you do it every time before the votes are certified by the secretary of state.
BLAZE: That’s right. And if you discover a discrepancy, then you have to do more hand counting.
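The audit loop Blaze outlines can be sketched as follows. This is a toy illustration under simplifying assumptions: real risk-limiting audits (e.g., the BRAVO ballot-polling method) derive sample sizes from a chosen statistical risk limit, whereas the fixed sample size and the `audit` function here are invented for clarity.

```python
# Toy sketch of the audit logic described above: hand-count a random
# sample of precincts, compare against the machine totals, and escalate
# to a full hand count if any discrepancy appears.
import random

def audit(machine_counts, hand_count, sample_size, seed=0):
    """machine_counts: {precinct: {candidate: votes}} as reported.
    hand_count: function(precinct) -> {candidate: votes} from the paper.
    Returns (passed, precincts_checked)."""
    rng = random.Random(seed)  # public seed so the sample is reproducible
    sample = rng.sample(sorted(machine_counts), sample_size)
    for precinct in sample:
        if hand_count(precinct) != machine_counts[precinct]:
            # Discrepancy found: escalate to hand-counting every precinct.
            everything = sorted(machine_counts)
            passed = all(hand_count(p) == machine_counts[p] for p in everything)
            return passed, everything
    return True, sample

# Hypothetical example: ten precincts where the paper agrees with the machines.
reported = {f"P{i}": {"A": 100 + i, "B": 90} for i in range(10)}
ok, checked = audit(reported, lambda p: reported[p], sample_size=3)
print(ok, checked)
```

In a real audit the random seed is generated publicly, for instance by rolling dice at an open ceremony, so observers can verify that the sample was not chosen to avoid tampered precincts.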
FEIST: Is this happening in Indiana? And if so, how does it happen and what triggers it?
LAWSON: We don’t do risk-limiting audits right now, but we have our VSTOP, our Voting System Technical Oversight Program, working on that right now. We’ve had conversations with Colorado. And I just heard on a call this week that New Mexico is working on that as well. So we are going to be doing that. That’s something I definitely support, I think we should.
I will say that after the 2016 elections, we’ve had a congressional district and a state senate race, both were in a recount. And we recounted an entire congressional district. And the results were the results.
FEIST: Exactly the same.
LAWSON: They were the same.
FEIST: Exactly the same. People in Atlanta may be calling you for advice if it’s still 729 votes when we finish this session.
Yes, ma’am, in the very back, go ahead.
Q: Hi. Astri Kimball from Google.
When you talk about the federal policy responses, do you think the United States is doing enough to invest in training the workforce of the future to design the systems that are going to be the most secure? And what can the U.S. government do to get the workforce to do this?
LAWSON: I can’t answer.
FEIST: And, Michael, you want to take a stab at that? You’re Mr. Policy Guy.
SULMEYER: Thanks, Astri. Great question.
No, the government is not doing—I don’t think the government is doing enough. It’s not that, though, there’s ignorance about it; I think the biggest problem is that there’s no singular set of skills that will solve everything or equip everyone to do everything technical.
There’s a lot of this kind of information that you can learn just factually, dare I even say free YouTube videos to just learn about how different systems work.
FEIST: You said that for her, YouTube?
SULMEYER: Yeah, YouTube, yes, exactly. Yes. So I think, you know, there are some types of skills that, yes, require a large type of federal investments to steer people towards, but that actually doesn’t need to be the way to really solve the STEM crisis as a whole.
FEIST: Question? Yes, sir, front row.
Q: Alan Raul, Sidley Austin.
First, a comment on the thorny question of federalism as a possible constraint on the federal government’s responsibility here. I would note that Article 4 of the Constitution obligates the federal government to guarantee a republican form of government to every state. So if the elections are compromised, I think that would be called into question.
But my question concerns—the discussion has centered around whether there’s any evidence of impact on U.S. elections in terms of hacking the voting or the results. What do we see internationally? I mean, there are a lot of other elections held and presumably many of them are subject to electronic procedures as well. And the nation state adversaries might have an interest in chaos, uncertainty, and maybe even in installing particular candidates in other countries. Do we see any evidence around the world that’s relevant to us?
BLAZE: So probably the largest country that uses electronic voting systems is India. They have a custom-designed voting machine. Questions have been raised about the security and integrity of the design they’re using. It is a paperless DRE system. But, you know, to a large extent, the U.S. has been on the leading and bleeding edge of rolling out computerized election technology post Help America Vote Act. And so, you know, I think we’re seeing—you know, we have to look inward as well as outward to see what’s going on.
And I will also say that the situation very much reminds me of internet security in the 1990s when, you know, technologists were basically warning that, you know, these systems that we have on the internet, in general, are going to be—you know, are insecure and going to be attacked. And for a while, you know, people were saying, oh, you’re just, you know, Chicken Little saying the sky is falling. And then, you know, sure enough, the sky fell and, you know, we really haven’t been the same since.
The situation with electronic voting systems is remarkably reminiscent of the situation with general internet security in the mid-1990s.
FEIST: The example you mentioned that there were issues in Illinois and Arizona in the last election, did we ever actually learn who the hackers were, who was trying to penetrate those systems? Have we actually—has that actually been discovered and announced?
LAWSON: I have information on that, but I don’t think I can say.
FEIST: Oh, you don’t have your clearance yet. It’s OK.
LAWSON: I don’t have my clearance, yeah.
FEIST: Just kidding. Sorry, whoever is watching in Russia. (Laughter.)
LAWSON: Yeah. No, I can’t say.
SULMEYER: It’s a good question by Alan. And I think, A, you had me at Article 4 of the Constitution; that’s good. And then, in terms of where else, you know, you can look at where things are vulnerable, which is India, and then you can look at what is an attractive target for people who want to be in this business, and that is Eastern Europe. I mean, the Russians have an interest. And so it’s not surprising that they would be poking around.
I think if there are some aspiring grad students who are watching, a good master’s thesis, you know, could be trying to do some comparative studies in the Baltics, looking at Ukraine, some recent elections, to go get some travel support money and investigate.
FEIST: So we’re not done yet, but I have yet to be convinced that any of these systems are superior to paper, so far. I mean, this is—as we have been talking for the last 45 minutes, paper sounds better and better and better. I mean, the idea that many or all of the systems in India are DRE systems in the world’s largest democracy, anyway, I’m troubled by it. But that’s neither here nor there.
Yes, sir, in the middle.
Q: Thank you. Fred Roggero from Resilient Solutions, a drone safety and security company.
I’d just like to ask the panel questions, either, about, if there are similarities from the financial markets and cyber precautions we take there or the coming autonomous transportation markets that we’re going to be looking at? Are there any lessons learned from those that could be applied to the voting cyber issues?
SULMEYER: Well, for my money, the financial sector, I think, has invested the most for the longest amount of time because they realize they had money to lose, right? So they took it upon themselves to defend themselves. And I think the federal government has actually had the best type of relationship with an information-sharing-analysis organization that is managed by the financial sector. So there’s definitely some good lessons to be learned there.
The question, though, is, as you start getting across different industries, what’s been the government’s role in requiring different kinds of transparency and reporting about intrusions, right? There is some defense authorization language requiring defense contractors to report to the military when there have been certain kinds of intrusions. We can start to think about how transparency and reporting can lead to better practices going forward.
BLAZE: You know, I’d also point out that in the financial industry there’s a really straightforward feedback mechanism that tells you how much you should be spending on security because, you know, we know how much we stand to lose. And you can do pretty straightforward risk calculations that tell you what your budget for a, you know, given exposure is.
In the case of, you know, the integrity of elections, that feedback system doesn’t really exist. So, you know, unfortunately, we spend far more on election campaigns than we spend running elections themselves. And, you know, we have—most election operations are within counties. The budget for voting machines and running elections competes with the budget for fixing roads and building fire stations. And so, you know, probably the most important lesson we can take from this is we need to think about how much we value the integrity of election systems in understanding, you know, how much work to put into this.
FEIST: Connie, you mentioned that Indiana has invested some money recently, which is, I presume, a good thing. Do you sense that the other 49 secretaries of state are in a similar situation to you? Or are more of them underfunded or not funded either at the state or the county level? Is this really where the rubber meets the road?
LAWSON: I would say I’m very fortunate as a secretary of state of Indiana to have the support of the general assembly. I don’t know that it’s common practice across the states to get the appropriation that I was able to get to modernize the system. But I think, as the attention continues to be drawn to these issues, that the states will step up and fund. But, obviously, there are some folks who would like to see the federal government step up as far as the funding goes as well.
FEIST: Article IV.
Q: Dee Smith, Strategic Insight Group.
I guess this is really a question for Matt. To what extent would we know if these things have occurred? I mean, there are cases of, you know, system intrusions hiding out for 500 days in the—in the corporate sector. And that’s question number one.
And question number two is, if we put in all the various things we’ve been talking about, are you convinced that that would secure against this sort of thing?
BLAZE: So the answer to the first question is it depends. Unfortunately, in a lot of these systems the audit trails are just as vulnerable as the other aspects of the system, so there may not be good forensic evidence of a successful intrusion. In other cases there may be signs. We saw in 2016, certainly, there were indications of attempts.
And, you know, I would say that we can’t—with the current design, we cannot be universally confident that it hasn’t happened. And it’s probably only a matter of time before it will.
The combination of risk-limiting audits and, you know, an optical-scan paper artifact of the voter’s record gives us a pretty good assurance, you know, within a statistical certainty, that the count of votes cast is accurate. It doesn’t help us with the other piece of that, which is the disruption piece, voter-registration systems, and so on. Those we have to do the same hard thing that we do with any other online system: training, put resources into it, keep systems up to date, monitor them, and so on. And, you know, we just have to love them enough to pay enough attention.
FEIST: Yes, in the back. The microphone’s right beside you.
Q: Thank you. Tim White, Spectrum Group.
Is there a critical mass of public concern about the issues that you have been explaining this morning? Or, in fact, does the public, generally, either not care, or believe that the tradition of electoral systems in this country have always kind of been nibbled around the edges and, well, this is just another way to do it? Because without that critical mass of public awareness—not press awareness, and not an awareness on those involved professionally in the process. But do people really care that much?
FEIST: What do you think, Michael? No?
SULMEYER: I don’t—you know, I wish there was a broader mass. It’s stunning, I know, to most of us who were at Harvard, but I guess the rest of the world isn’t like Cambridge, Massachusetts. There, everyone’s focused on it. (Laughter.)
But I’m concerned also about a leadership deficit on this topic. No one wins political points by talking about it. This is not a partisan thing, but you need leadership to generate that mass. It can be bottom-up, but it could also be top-down. And to talk about how an election is rigged before the election, right, is difficult when you’re trying to actually improve people’s confidence and get to ground truth on this.
LAWSON: I think they need to—you need to come to the Indiana Secretary of State’s Office and answer the telephone, and you would understand that people are concerned about it. We get calls every day.
FEIST: What do they say?
LAWSON: You know, what’s going on? I just read—I just read in the paper about, you know, the Russians hacking the state of Illinois. Is Indiana good? Is there voter registration fraud in Indiana? You know, just whatever is in the news. We get calls. We get a number of calls. I get a report every week on the types of calls that come into the office. And this is an off election year, and I will say I have had just as many calls regarding the security of our elections this year, this off election year, as I have ever had. I mean, I’ve had more. I said just as many; I’ve had more. So I think the public does care.
FEIST: Front row.
Q: Audrey Kurth Cronin from American University.
Following up on that question, is there not a sense that there is a politicization of this issue, where some people feel that concern about elections reflects more upon a particular type of election, or our latest presidential election, or a particular election locally? How can we truly remove this issue from the partisanship and the polarization right now that we have domestically?
BLAZE: Yeah, I mean one—I think we’re in a rare moment where it’s more bipartisan than it has been in the past. You know, in every election there’s a most recent loser, and the risk is that it looks like the people complaining about it are simply upset about their candidate losing. You know—but, you know, after the 2000 election, we saw bipartisan interest. It led to the passage of the Help America Vote Act, which arguably, you know, had some significant problems with it, led to some bad technology, but it was at least a bipartisan effort. I think we’re—I’m optimistic, which is rare for me, but I think we are at another one of those moments, or approaching another one of those moments.
LAWSON: You have somebody over here that’s been raising his hand for a long time. (Laughs.)
FEIST: Yes, sir. Thank you.
Q: It’s actually quite an unimportant question with the time. Edwin Williamson from Sullivan & Cromwell.
I just wanted to go back to the—to the comment that was made a little earlier about the great vulnerability of data breaches. Are these just information theft issues? In other words, is the risk just a risk of loss of privacy? Or does the—in the data breach, do you also get an ability to manipulate or disrupt?
BLAZE: So the goal of a nation-state adversary, particularly, who attempts to breach a backend system, particularly one with voter-registration information, would be disruption. And so they’re likely to want to, first of all, ensure their future and continued access to the system. They may try to delete legitimate records. They may want to add spurious records to make it appear that there has been widespread registration fraud, disenfranchise selected voters in some way that advantages them, or simply, you know, cause havoc and delete everything, or cause systems not to be ready on election day. So, you know, particularly when we look at this from the perspective of a rival intelligence service attempting an information operation against us, we have to look at a very broad spectrum beyond merely leaking data.
LAWSON: I would just like to add that in June, at a U.S. Senate Intelligence Committee hearing, one of the undersecretaries of DHS stated that 21 states had been targeted. And that was a surprise to all 50 states, because we had not been told that we’d been targeted. And the language that we use is so important, because what does “targeted” mean? So then just this last week, I believe, Chris Krebs from DHS said that you need to remember that just because someone targeted or tried to get into state data doesn’t mean that there was an actual breach, and that when we used the word “targeting” it really meant they were scanning, they were trying to get in.
And so it made us all very concerned. Indiana was not even a state that was targeted, but it took three or four months after that to find out which 21 states had been targeted or scanned. And we know of no additional states, besides Illinois and Arizona, that had a breach.
FEIST: Make sure I didn’t miss anyone over here. OK. Yes, sir.
Q: David Martinez, State Department.
I agree with the moderator that I fail to be convinced thus far that the safest and most secure way of guaranteeing the integrity of the results would be a return to paper ballots. But I also worked briefly in state government for my home state of New Mexico, where I was able to observe polling in practice and saw that that also presents some vulnerabilities. But I was wondering if, Connie, you might be able to expand a bit on what some of the costs—either political, logistical, or financial—of a return to paper ballots would be. I think you made a very important point about how, in a 24-hour news cycle, we simply don’t have the patience to wait for that counting. So that may be a political cost. What are some of the other costs of a return to such a model?
LAWSON: Well, I can’t imagine. Obviously, I think it’s a balancing act. You know, we need the technology, but we also need to verify what we’re doing. And I couldn’t answer the exact question regarding, you know, the financial impact, but it would be—it would be a large one, I would think. I know the hours that it took me to—and my staff to count that first election that I ever administered, in Hendricks County, Indiana, and that was a small special election. So I can’t imagine what it would be in a county like Marion County, Indiana, for an example, our largest county—in Indianapolis, Indiana—can’t imagine what it would be. They count their absentee ballots centrally, and they have over 300 teams of people, bipartisan teams, who count just the absentee ballots. So can you imagine what it would take in addition to that to count the actual paper ballots?
FEIST: At the end of the day, paper ballots may make it more complicated for the Russians, but they don’t necessarily lead to a more accurate count, or do they?
BLAZE: So when you say paper ballots, I mean, I—
FEIST: Going back to the Stone Age, as—
BLAZE: Well, so you’re talking about 100 percent hand-counting.
FEIST: The British system. The British system.
BLAZE: One hundred percent hand-counted ballots. So, I mean, I think everybody in the, you know, election security world is pretty uniform in advocating paper ballots. The question that we’re talking about is whether or not they’re completely hand-counted. You know—
FEIST: As opposed to the optical scan with a paper backup.
BLAZE: As opposed to—right. So optical scan has the benefit that you get some assurance that your ballot is being fed into the system and a record of it is being made as soon as you submit it, and that’s an increase in integrity. The problem is we are also now dependent on software. And so that’s why you need the risk-limiting audit backup behind that.
FEIST: Right. So we’re just about out of time. We are at the Council on Foreign Relations, where we always end on time. I want to try and end with a simple question and as close to a one-word answer as possible. I’ll even give you your choices. Looking ahead to 2018 and 2020, are you optimistic or pessimistic that the system will be materially safer than it was in ’16? I’ll start with Matt, and then I’ll come down.
BLAZE: I’m convinced that the threat actors will be emboldened.
FEIST: Mmm. (Laughter.)
LAWSON: I’m optimistic.
FEIST: Well, all right. Perfect. (Laughter.)
On that note, thank you very much. Thank you for coming today. (Applause.) The next panel, if you’re joining, starts at 10:00. So thanks very much.
This is the second session of the Hacked Elections, Online Influence Operations, and the Threat to Democracy symposium.
The panelists will explore how the United States and the tech community can respond to foreign actors' use of online platforms to propagate disinformation and amplify specific viewpoints.
KORNBLUH: All right. Can we have everyone sit down, please? Can we have everyone sit down? Thanks. Thank you.
Hi. Welcome to the second session of today’s symposium. This is titled “Combating Online Information Operations.” I’m Karen Kornbluh, senior fellow for digital policy at the Council.
And we’re very lucky to have these experts with us to discuss this issue. We have Renee DiResta, Thomas Rid, and Clint Watts. I anticipate a very lively and fascinating conversation.
I wanted to start with Thomas. When the Supreme Court in the U.S. decided Citizens United back in 2010, it predicated the whole idea that corporations should be able to spend money in elections on the notion that the internet—which was then seen as this great engine of transparency and democracy; the Arab Spring was going on—was going to bring full transparency to American elections, and so that was going to be the magic bullet. And, you know, at the time I was in the Obama administration. We were really taken with the idea of internet freedom. But it seems that since then the openness of the internet, which we had hoped would solve a lot of political problems, which would undermine authoritarian governments, is almost being used to undermine democracy by some authoritarian governments. And I wonder if you could give us a little bit of history about information operations, what it is that we’ve been missing, and what we need to be paying more attention to.
RID: Yeah. So thank you. I’m happy to try to provide some history. I’m writing a book on the history of disinformation right now, so please stop me if I start to skip into too much detail there. (Laughter.)
But disinformation—or active measures, to use the old Soviet term of art which emerged in the early ’60s—is, of course, a very old phenomenon. And if we go look at the Cold War, we literally have hundreds, more likely thousands, of examples of small individual active measures and disinformation operations.
I interviewed a few people who actually worked in active measures for their entire career. As you may be able to hear in my funny accent, I’m German. So I recently interviewed a former Stasi disinformation operator, which was an extraordinary experience. And from one of them I got this great line: they thought the best mix of fact and forgery, of truth and lie, is 80/20—80 percent true, 20 percent false—because that makes it really hard for journalists or for experts like us to tell what’s actually true, what’s factual, and what’s not factual.
So let me give an example of a particularly vicious operation from the year 1960 that was revealed in a congressional hearing in the mid-1980s. The context here is decolonization: many African countries, newly independent, wondering whether they should join the West or the Soviet bloc. And in that context, suddenly a 16-page pamphlet appeared in 15, I believe, different African countries, in French as well as in English. The pamphlet contained pictures and text, and it was titled “to our dear friends.” It was, on the face of it, written by an African-American organization in the United States to Africans in Africa, explaining to them the true ugly face of American culture at home. And it was full of racial discrimination—you know, lynchings in the South, police violence against African-Americans. Now, I checked and went through the press reports at the time, and almost every single detail in those 16 pages is completely accurate, down to very gruesome details that I’m not going to repeat here. But this is an example of an active measure that was a real headache for the State Department, very difficult to counter because it was based on true facts, but at the same time under the false cover of a nonexistent organization. So here’s just one of literally hundreds of examples that I think highlights some methods of operation that we still see today.
KORNBLUH: So, Clint, you’ve been talking about how the U.S. has to respond for quite some time, and we see that in some other Western democracies there is a response to some of the information operations. Can you talk a little bit about what seems to be working, what might be some interesting models?
WATTS: I don’t know that anything’s working yet. I mean, there is some—there’s the defense and then there’s the sort of countering portion.
And so the Europeans get it because they’ve been in this game much longer than we have in the United States. There was a two-part failure, you know, in the United States with Russia meddling. One, we didn’t understand that hacks were being used for influence. We were looking at it as investigations. And the second part was this was already going on in Eastern Europe, Ukraine, places like that, Brexit. You know, when we saw them, we didn’t think it would happen in the United States. We were arrogant about this—we thought it would never come to our shores. But they are more in the trenches on this. They’ve been dealing with it for a long time.
And so the number-one thing that they’ve done over probably a 50-year period is called education. We don’t invest in it in the same way here, but they very much put forward what their stance is on information, how to deal with it, what they believe.
And the other thing they’ve done is they’ve started to go ahead and acknowledge when these untruths are being leveraged towards them. And in certain places—Czech Republic’s got some of this together; Latvia is one that’s gone way out front. I was at the launch in Helsinki of the Hybrid Centre that they put together. They are organizing.
Now, for them it’s a challenge because they have different audiences. If you want to understand Russian active measures, it’s about language, not necessarily about culture, because that’s how you communicate in social media. So, if you want to track an influence campaign, you just need to look at the languages they’re using and the way they’re narrating on that.
But what’s interesting with all of those countries as opposed to our own is, you know, the basic rule you’re taught in boxing, which is you don’t punch back until your feet are on the ground. And so they understand what they want in their country and what they’re defending, what their policies are, and then can move to counter the influence narrative. We have failed at counter-influence for a decade now, whether against terrorists or Russian disinformation, because we don’t really know what we believe in and we don’t know what we stand for. You cannot counter back with a counter-influence campaign, whether it’s online or on the ground, unless you know what your nation’s policies are, what your belief systems are, and what you’re going to push back with.
If you look at the Cold War, whether it’s a European country or here at home, we were pro-democracy, we had nationalism, we had things that we, you know, were trying to advance around the world. Right now I am not sure that the Russian message is different than our own here at home. And so you can’t do counter influence or counter active measures until there’s some consensus at home about what we believe in here, what we’ll defend, and what we’ll promote overseas. You know, the narrative that we saw rise around the election was anti-EU, anti-NATO, let’s work together to kill ISIS, be a nationalist not a globalist, you first the world second. How do we counter that? It sounds pretty familiar, based on what I see.
KORNBLUH: Just to follow up—
WATTS: So I’m just saying, in terms of—whether we—you cannot move forward. The way the Europeans are moving forward is, even in their own countries, they have a baseline from which they are standing in their counter-influence campaigns, and they have some consensus around it. They know who’s in charge. I don’t think we have that here. We got rid of the U.S. Information Agency. So, both structurally and in terms of message, they’re just much more grounded. They can punch back.
KORNBLUH: So just to pick up on that, I mean, one thing I’ve heard people talk about is just that it’s just so much more—it’s just much easier to be negative.
KORNBLUH: You know, if you’re—if you’re about tearing down, if you’re a nihilist, it’s much easier to get your message out than if you’re in favor of something.
WATTS: That’s right.
KORNBLUH: But what you’re saying is that there’s been some success with people who at least have a better ability to articulate what democracy’s about.
WATTS: Right. It’s not just about democracy. It might be using the nationalist message in certain European countries, say it’s about us first and not, you know, our adversary. But they have a clear way of communicating to their publics, both from a leadership perspective, and through their media and public affairs, where they communicate out. I think Finland, Sweden, the Scandinavian countries, you know, are great examples. They communicate out to their public very clearly this is what we stand for and this is what we believe in.
KORNBLUH: And so it’s a positive message. It’s not just an anti-Russian, for example.
WATTS: That’s right. And it may be nationalist, but it’s also about what their values are. And our biggest challenge right now—I’m coming up on the four-year mark since the first time I talked about this to government audiences. It was late spring/summer of 2014. The last government group that I talked to was three or four months ago, and I get the same deer-in-the-headlights look whenever I talk about this stuff. You know, I—not because they’re doing anything wrong. There are agencies in the U.S. government that want to do things. But the way our system works is policy sets requirements, the requirements set funding. This is how we move, you know, our organizations. And I’m not sure anyone knows what their role is in countering influence online or who would have the ball.
I made specific recommendations. They’re pretty easy, actually. You know, FBI should look at investigations of hacks now for how might this be used for influence later, and that’s inoculation strategy. DHS and State Department—DHS at home, State Department abroad—should refute falsehoods, you know, almost immediately. We did this in Iraq, actually, against terrorists, and we’re pretty good at it. And in the intel community we have to decide what our strategy is around information and influence. But no one really knows, or at least I don’t know, who’s in charge. And it’s been a year since this happened now, and I haven’t seen a lot of gears moving, you know, in any direction at this point.
KORNBLUH: Well, let’s come back to that.
But, Renee, I want you to take us to the private sector and talk about the platforms. They’ve been doing a lot. Some of them have been doing more than others, I think, but putting in more people to review accounts, to review posts, to take away the monetary incentive for fake news. Talk to us a little bit about where the incentives of some of these platforms are, and to the extent to which they have the incentive to clean it up versus there’s a tension between their economic model and cleaning up disinformation.
DIRESTA: Sure. So I want to first kind of piggyback on this idea that no one’s in charge, because that’s the problem in the private sector, too. These platforms are competitive with each other, and they all monetize attention—their business models are based on attention. They’re selling ads. They want to keep you on their platform. Each one wants you on their own platform because they want to be the one to serve you the ads, because that’s how they earn revenue.
So there’s a fundamental business case underlying why these types of things happen. One of the fundamental challenges here is that doing things to make you happy on the platform is such a core part of the business, and that’s why it’s so personalized. You see the things that are likely to make you happy, that are likely to keep you on the platform. And so when that intersects with an influence operation, it’s very carefully tailored. Influence operations have been around for decades, as my co-panelists have said, but the vectors of dissemination have changed. The ability to personalize that content has changed. The ability to target individuals with exactly what is going to work for them has changed, based on a corpus of data that the platforms have accrued about each one of us over years and years of use and feedback loops—what did you click on; that tells me something about you. If it doesn’t tell me something about you directly, I have a correlation to someone who is like you, so I can target you through what is known as a lookalike audience or a custom audience. Anybody running an ad or growing an audience on a platform like Facebook is reaching people who are predisposed to be interested in the content. That’s why it’s such an effective means of delivery. So that’s the kind of base framework.
So the problem is—you know, 10 years ago now, the concept of the filter bubble became popular: the idea that the platforms were showing people what they wanted to see, and that was kind of creating these information siloes. When you look at what has to be done to break people out of that, or to say these people are more likely to be predisposed to disinformation content, the platforms are not coming back and telling people who viewed this content that they were targeted. So right now a lot of the conversations we’ve been having are: What are the responsibilities of the platforms? Can we ask them to act against their own economic interests in the interest of society? And that was a theme that was underlying the hearings.
The way an information operation is conducted on social networks, though, is it’s not—it’s not unique to one network. So you might start, if you wanted to seed a story, by putting it on—by writing an article, creating what’s known as kind of a content farm or a blog or—you know, anyone can write anything on the internet. This is—this was supposed to be a great—a great advantage because we all have the opportunity to make our voices heard and to get information out there. But I can write something on my blog, and I can post it to Reddit, and if I post it to a Subreddit of interested people, maybe they upvote it. You know, and I can do this with tons and tons of content, and I can see what gets lift. I can see what resonated with the audience that I’m trying to reach, because I can see the ranking of what’s moving up the page. It’s being voted on by the readers. They’re endorsing it. So then I can take the content that plays really well and I can move it over to Facebook, and on Facebook I can use an ad campaign to grow an audience. But once I have some audience, then at that point I achieve what’s called organic lift. And that’s the idea that, rather than having to pay to serve content to somebody each time, my hundreds of thousands of people who have begun to follow my page or who have joined my group are going to push that content out for me.
So Facebook has a much larger audience than Reddit. So what I’ve just done is I’ve tested the content on Reddit. Perhaps I’ve tested it on 4chan. Perhaps I’ve tested it on Imgur. There’s a number of these kind of platforms where I can see the reaction of the community I want to go for. Then I can move it to Facebook, where I can have people begin to do the sharing work for me, which actually brings down the cost to run one of these campaigns because at this point I have hundreds of thousands of people disseminating my propaganda for free.
Also, I can take it to Twitter. And what I’m going to use Twitter for—because Twitter has a much smaller audience than Facebook also—is its high concentration of media users. There’s a ton of journalists on Twitter. There’s a ton of influencers on Twitter, with millions and millions of followers. Donald Trump, excellent example: 45 million, I think. At that point, I can kind of cross the Rubicon. If I can make something trend on Twitter, or if I can make a high-value influencer retweet my content and retweet my article, I can at that point pretty much guarantee that there will be some media coverage of it.
And the media coverage might debunk it, but it doesn’t matter, because even the act of debunking it continues to keep it in the public consciousness. The media can cover it uncritically, which is—you know, we’ve seen happen. We call them hoaxes, but it’s a very quaint term. Really, we should be using the term disinformation campaigns, you know. Or, if the media doesn’t cover it, I can start a conspiracy theory about why the media didn’t cover that trending topic. So I’m going to win either way if I can get a sufficient amount of attention on Twitter.
And so this is the way that somebody interested in conducting a campaign will do it, in a cross-platform strategy. And there is no one really responsible for shutting it down, because, while the platforms, I am told, have some kind of backchannel information sharing, we didn’t see anything really remarkably effective in 2016. And we have continued to see some interesting hoaxes take place, you know, with regard to the Alabama election right now, ongoing.
KORNBLUH: So, Thomas, talk to us about this concept of organic and how bots play in. You know, what’s the role of the bots in what Renee was describing? And what’s the—what’s the nature of the problem?
RID: Bots are—bots are certainly an important problem. But before we talk about some of the more technical aspects of amplification operations on social media, I think we should take a small step back and speak about the role of the press and the role of journalists for a short moment.
Because, again, historically, there’s this great line from Rolf Wagenbreth. He was the head of Stasi disinformation for the entire time of the—more than 30 years. It was brilliant. It was a brilliant—Stasi was better at this than KGB because the main target was West Germany, so they spoke the language. They were close to their targets. They literally could sometimes listen to them. You know, they could make German jokes and West Germans would laugh about them—you know, as much as Germans joke. (Laughter.) And so Wagenbreth had this line, “What would be the active measures operator without the journalist?” So the journalist is an integral part of disinformation.
And we saw that at play in 2016 in the U.S. election interference in a new way. Let’s just tease out how it was new. Active measures—I mentioned this particularly bad one from 1960—back in the day were artisanal. You needed to know what you’re doing. They were—they required craftsmanship from intelligence operators. Today, or rather in 2016, the active measure was very much industrial scale. They hacked a lot of data, put the data into the public domain through WikiLeaks and other fronts. And then it was the journalists of the victim society, of the victim country—in this case, the United States—that actually created the value in terms of the damage done, because they went in, looked for the gems and the nuggets, and reported them out, and ignored the source.
Now, every journalist—or everybody, really—who thinks, well, now we certainly understand the risks, we wouldn’t make the same mistake again: I think we all have to think again. Two weeks ago a little thing happened in Germany which is remarkable. Two weeks ago Der Spiegel ran a story about Germany’s U.N. ambassador, the former national security advisor, Christoph Heusgen. And Der Spiegel reported that Heusgen had sent an email to the U.N. secretary-general asking in a somewhat improper way to create a job for his wife. OK, he probably shouldn’t have done that. But Der Spiegel quotes from that email that Heusgen sent to the U.N. secretary-general’s chief of staff. And Der Spiegel doesn’t say where they got the email from.
Now, the next day anonymous German sources tell another German newspaper: whoa, wait a minute, we know that APT28, and they explicitly identify that as Russian military intelligence, has hacked U.N. systems. They found the email. Gave it to a Spiegel journalist. And he ran the story, for the second time. He had done that already a couple months prior, knowing that he probably advances the interests of a Russian intelligence agency. And I think we underestimate the competitive—the rough, competitive nature of journalism in a crisis that is actually created by these social media companies. So we have the perfect storm for active measures.
KORNBLUH: Yeah, Clint, would you pick up on that? And you know, sometimes it’s the competitive forces. Sometimes it’s ignorance, right? And sometimes they feel they have no choice. Some things become, you know, trending. It’s—the bots are pushing it. The president has talked about it. What can be done? And if the government—if there’s a limit to what our government can do, civil society in other countries is taking measures to push back, aren’t they?
WATTS: Right. So I mean, he’s exactly right. Competition is one of the motives that makes it super easy to get active measures to work. The other one is fear. If you can scare a population—which the Russians, and the Soviets before them, were very smart about doing with calamitous messages—you hit them with fear, and then you load up a political message right behind it, and they’re more likely to fall for it as well. And you see that with Benghazi conspiracies that would be pushed around, some of the things that we observed in social media space. And people would grab them. You know, very few, oftentimes, but it only takes a couple. And those with more followers—those that are the key mavens in their social media networks—can spread it much more quickly.
I think what—there are a few things that we need to think about. The internet and anonymity. Everyone comes to the internet or social media with the best of intentions. And those with the most resources, time, and worst intentions ultimately take control of it. This is—I mean, you can look at criminals and hackers. What happened to Anonymous, by the way, and LulzSec? Aren’t they going around the world making us all transparent and free? Anybody wonder what happened to those guys? You know, the big and the powerful ultimately come to learn how these things work. And if you aren’t under the rule of law, if you don’t have to worry about civil liberties, if you don’t have to worry about a free press checking you, you’re going to use this system. And it’s happening, you know, around the world today. I think Myanmar is a great case study of how this has just been duplicated within a year. All political parties will do this over the next two to three years if they don’t feel constrained. And I think we’re seeing this playing out in elections even today.
So things we can do. One is authenticity of authorship. Is it a real person that’s behind a social media account? There are ways we can protect their identity, there are ways we can protect anonymity, but there are public safety factors. We always say the First Amendment doesn’t protect the right for you to yell “fire” in a movie theater, right? We saw disinformation networks pumping conspiracies around: hey, JFK has been evacuated. Maybe it’s a terrorist attack. Maybe someone was shot. Maybe it was this. Where’s the truth in that? It never comes back. People believe the first thing they read. And it’s very hard to refute those things. So there is a public safety component to this that goes well beyond just the political component of it.
The other thing is how do you deal with the news issue? And the social media companies initially jumped out to try to do factchecking. That was always going to be a giant waste of time. I can make fake news way faster than you can check it. If you want to stop an artillery barrage, you silence the guns. And to do that, you have to go after the outlets that are mostly producing this sort of information. So we had talked about a rating system, essentially nutrition labels for information, which would be kind of like sweeps for television. And it seems like, I think, Google, and maybe Facebook, with some of the media companies, are now on board. You know, they’re going through—at least trying to come up with a system to figure out what’s—who’s doing 80/20. Maybe if you’re a mainstream outlet and you’re doing 80/20 it will hurt you, the rating would, and that would be OK too.
The idea is to improve everyone’s journalism and reward those that are doing good journalism over time. And this will prevent those fake news outlets that we talk about, which kept popping up, from popping up so quickly and gaining so much traction. We were tracking into ’14, ’15, and ’16 the growth of outlets that would suddenly pop up in Eastern Europe and then wanted to talk all day long about how the Federal Reserve was terrible and should be destroyed and gotten rid of—or in the middle of the night, given where they were writing from. So how do you stop that? You know, you’ve got to put some sort of metric or challenge on it. But ultimately it comes down to public education around understanding information sources, and we’ve got to put it back on the consumer.
That’s why I like the nutrition labels idea. Make the consumer decide. Don’t block the content from them. Don’t squash the outlet. If they want to write garbage and someone wants to read garbage 90 percent of the time, then fine. It’s like your crazy uncle who sends you the weird emails all the time, and you go: Uncle, go to this factcheck site—this is a false story. So let’s push it back to them, and let’s empower them. We have had a public that has come into social media that was never reading newspapers. Do you understand, like, how this happens? Like, people have jumped over. And they’ve gone from assessing news from their friends to assessing 1,000 inputs a day from social media.
This is a huge mental leap. And we are going to fail. Everybody falls for fake news once in a while. The more real the medium, the more you will fall for it. So we’re seeing now fake audio, fake video coming out. You know, this will make this even more dynamic. So we’ve got to inform our public and help them make better decisions on their own and empower themselves, so they’re not pointing to social media companies, they’re not pointing to politicians, they’re not pointing to journalists. They’ve got to be responsible for their own information consumption. And that’s really what the Europeans have done, you know, over the last 50 years. They’ve been much better about educating their public on it.
And we’re seeing a major shift, you know, even when you look at France and Germany. Part of the reason why—there are lots of structural reasons—is that they also consume far less news on social media than they do from traditional news sources and even from friends and family, if you look at the actual numbers. But that will change over the next 10 to 20 years. I mean, you’re seeing the younger generation moving to this. So I think it’s super important that we work on the public taking responsibility for themselves, but also help them understand the dangers.
You know, we had this with Consumer Reports and bad products in the ’70s and ’80s. If you buy the Chinese import that is 75 percent cheaper than the good it’s competing against, it might burn your house down. You know, that could happen. But that’s on you. You know, that was your choice, to purchase that. So informing the public and helping the public make better decisions, I think, is something that’s good all around for a country.
KORNBLUH: So on the nutrition labels or the factchecking, some of the ways in which the platforms have asked journalism or, you know, others on the outside to find their problems and help them correct that. To some extent, I keep thinking of your expression about artisanal. That feels sort of artisanal, whereas the bad stuff is coming at a much faster, more industrial rate. And, Renee, I just wonder, you know, what ways can the algorithms be used to fight back? People keep talking about this. How can the algorithms be used to—not to substitute for public education, obviously we need to do that, but to bat back some of the more dangerous things, given the First Amendment protections?
DIRESTA: You know, there are some interesting—it’s challenging, because algorithms are written by people, and so there are biases inherent in the algorithms. One thing that comes to mind when you ask the question is Facebook’s recommendation engine. The recommendation engine is, as I said a little bit earlier, designed to serve you things that you want to see so that you stay on Facebook, so that it can continue to drive engagement. Here’s a very specific example. If you are prone to conspiracy thinking, actually, the greatest predictor of belief in a conspiracy is belief in a different conspiracy. It’s well-documented in the psychological literature.
So if you like a page on chemtrails, or you like an anti-vaccine page, Facebook’s recommendation engine actually takes that as an input and begins to serve you content related to other conspiracies. And one of the things that we saw in late 2015, early 2016, was Facebook’s recommendation engine recommending Pizzagate—the conspiracy that Hillary Clinton ran a vast underground sex ring out of a D.C. pizza place—to anti-vaxxers and chemtrail believers and, you know, these sorts of things. And so it’s taking people who have belief in sort of pseudoscience and health-related conspiracies and then pushing them down that rabbit hole into antigovernment conspiracies, or other types of, you know, bizarre ones—the moon landing was fake, 9/11 was a hoax—you know, the kind of truther community. So there’s this weird intersection. And it’s actually because the recommendation engine is serving that content to people.
So this is an interesting problem, because from a Facebook business standpoint it’s giving the people what they want to see. But this is where we ask the question—and one of the conversations happening a lot in the Valley right now is, what’s the ethical design there? There’s a concept called choice architecture. If you show hungry people the doughnuts first versus the salad, they’re going to eat the doughnuts. If you put the salad out there first, they’re more likely to make the choice that’s potentially better for them from a health standpoint. So we think about what the unintended consequences of the algorithms are. How are we thinking about what we’ve created, and might we make more ethical decisions that don’t necessarily negatively impact profit but do things that are better for people? So this is a kind of undercurrent in the Valley right now.
I think, you know, it’s not censorship to not suggest some of this content. If someone wants to go to Facebook and type in Pizzagate and join Pizzagate groups, that is Facebook’s decision to decide what remains on its platform under First Amendment protections or, you know, information sharing, information availability. But when you make the decision to serve something up, that’s a proactive action by a platform. And this is where things, you know, kind of get into a little bit of an area where we could potentially see the platforms make some design decisions that could have potentially quite a powerful impact.
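The recommendation dynamic DiResta describes—liking one conspiracy page causing the engine to surface adjacent ones—can be illustrated with a toy item-to-item (“people who liked X also liked Y”) recommender. Everything below is hypothetical sketch data; Facebook’s actual system is proprietary and vastly more complex:

```python
from collections import Counter
from itertools import combinations

# Toy like-history: each user is the set of pages they follow.
# All page names and users are hypothetical sketch data.
users = [
    {"chemtrails", "anti-vax"},
    {"chemtrails", "pizzagate"},
    {"anti-vax", "pizzagate"},
    {"anti-vax", "moon-hoax"},
    {"gardening", "cooking"},
]

# Item-to-item co-occurrence: pages liked by the same user reinforce
# each other, regardless of what they are about.
co_likes = Counter()
for liked in users:
    for a, b in combinations(sorted(liked), 2):
        co_likes[(a, b)] += 1
        co_likes[(b, a)] += 1

def recommend(my_pages, k=3):
    """Rank pages most co-liked with anything the user already follows."""
    scores = Counter()
    for mine in my_pages:
        for (a, b), n in co_likes.items():
            if a == mine and b not in my_pages:
                scores[b] += n
    return [page for page, _ in scores.most_common(k)]

# Liking one conspiracy page pulls in the adjacent ones, never gardening.
print(recommend({"chemtrails"}))
```

The point of the sketch is that nothing in the scoring knows or cares what the pages are about; co-occurrence alone is enough to chain one conspiracy community to the next.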
RID: Can I tack on a comment there?
KORNBLUH: Yeah, sure.
RID: So it’s a comment on Twitter, and on a design decision. Some people who follow the abuse of Twitter by bots have wondered: why has Twitter not done this? Just an example, to make it very concrete. Some of you here in the room may remember when Twitter had egg profile pictures by default. You know, there was this joke about eggs, and usually eggs didn’t provide interesting content. So you could opt out of eggs for a while. When you signed up for a new Twitter account, you could tick a box saying: I don’t want people in my feed that still have an egg picture as their profile picture. That was possible for a while.
Now, why is it not possible to opt out of bots? It’s possible to opt out of eggs. Why is it not possible to opt out of bot traffic? Twitter claimed in these hearings several times that they have sophisticated machine-learning mechanisms in place that can automatically recognize bots. So why don’t they give you the opportunity to tick a box and have no more bot traffic? I’d say it’s probably because doing that would cut down their entire active user base by a significant margin.
DIRESTA: The notion of opt in versus opt out is quite profound. We’ve seen it outside of the digital world in organ donation, right? Do you opt people in by default and make them decide not to participate, versus making them check the box? So this is an interesting thing with bots. And I will say, with Twitter’s blue-checkmark accounts—when I got my blue checkmark, which is a verification marker, I was scrolling through the new settings and it had something that said, turn off low-quality accounts. And I thought, oh my goodness, this has been available the entire time. (Laughs.) So they have a sense of what is a low-quality account. Blue checkmarks used to be only for famous people, and they’ve given celebrities the opportunity to not see those accounts—(laughs)—for years. Rather than creating that pleasant experience for everyone, it took years to get to the idea that maybe everybody would want to opt out of bot content.
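The opt-out Rid proposes is, mechanically, very little code once a platform already scores accounts for automation—as Twitter testified its machine-learning systems could. A minimal sketch, with entirely hypothetical scores and field names:

```python
# Sketch of a user-facing "hide bots" preference: given per-account
# "bot-ness" scores (which Twitter claimed its ML systems could produce),
# honoring the preference is a simple filter. All data is hypothetical.

feed = [
    {"author": "human_1", "bot_score": 0.05, "text": "lunch photo"},
    {"author": "amplifier_7", "bot_score": 0.97, "text": "URGENT share this"},
    {"author": "human_2", "bot_score": 0.12, "text": "panel recap"},
]

def filter_feed(feed, hide_bots=True, threshold=0.8):
    """Drop items whose author's bot score meets or exceeds the threshold."""
    if not hide_bots:
        return feed
    return [item for item in feed if item["bot_score"] < threshold]

print([item["author"] for item in filter_feed(feed)])  # → ['human_1', 'human_2']
```

As the panel notes, the hard part is not the filter but the business incentive to expose it.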
KORNBLUH: Well, let’s open it up to members for questions. I want to remind everyone that this is on the record, and ask you to wait for the microphone, speak directly into it, stand, state your name and affiliation.
And I think we have a question right here.
Q: Thank you very much. Jill Dougherty from the Wilson Center.
I wanted to ask—and I don’t really care who answers it—but this controversy of having RT and Sputnik register as foreign agents. You know, the rationale behind that, obviously, is a law that was passed in 1938 to protect Americans from propaganda by the Nazis. And I’m just wondering whether that type of law really has any relevance today? Because how can you protect people against something that every minute something is coming into their box from one account or another? Is that law obsolete? And what’s your opinion on forcing RT and Sputnik to register as foreign agents? Thank you.
WATTS: Do you want to go first?
RID: Go for it, yeah.
WATTS: I mean, it’s great that they did that. It won’t affect anything, you know, that I see on social media. People have sent me RT quite a bit—in 2015 I was receiving Russian propaganda from friends who were then arguing with me that I didn’t know what I was talking about. So I was like, OK, I’m glad in Missouri you don’t know what RT is. Do you know what RT is? Yeah, it’s RT. Well, like, OK. (Laughter.) I mean, people don’t assess their sources now, right, because you trust your friends and family who send you things more than you trust someone else.
So part of RT’s methodology, which was very brilliant, was, hey, we can’t beam in satellite television into every home. But we can put stuff on YouTube. And then we can have our producers and reporters share it with likeminded people. So by the time it moves along, you know, you don’t know where it came from. And this is part of the problem, regardless of RT or Sputnik News. They will just say, oh, it’s all propaganda. It’s your propaganda. NBC, CNN, Fox, it doesn’t—oh, that’s propaganda. It’s all propaganda. Which is, oh by the way, very much the Russian world of information in Russia. It’s your PR, their PR.
So we’ve lost that sort of bearing about reporting versus opinion and fact versus fiction. That has sort of gone sideways. And I don’t think declaring a source as propaganda now—I think it’s way too late—really helps the public. They won’t even know; they’ll have read no story that said that RT or Sputnik News had to register. And even when they receive it, as long as it appeals to their preferences, they’re going to consume it. And so it’s good that we do that, just so that there’s awareness around, OK, this is a state-sponsored news outlet. And there are many other state-sponsored news outlets from around the world. You know, we see this with all authoritarian regimes.
But how the public consumes, as long as it makes them happy they’re going to keep filling their belly and their noggins with whatever you keep feeding them on social media. And so any outlet, whether it’s U.S. or overseas, knows that’s the formula really for their content dissemination.
KORNBLUH: I think we have one back there.
Q: Hi. I’m Craig Charney of Charney Research.
Unlike most of the people here, who I think come from the foreign policy or tech communities, we do survey research for campaigns and marketing, as well as foreign policy issues. We worked in public diplomacy a decade ago. Now we’re working on these issues.
Two questions: One just came out of Clint’s comments. You know, the reason why RT looks so good, and so professional, and persuasive, is because it’s not designed in Russia. Their content is designed by Ketchum in New York, one of our best PR agencies. So one question is—
KORNBLUH: I’m sorry, I’m going to have to limit you to one question.
Q: OK. Well, I’ll stick with one then, since I started it, would it make sense to oblige American companies and organizations who are professionally assisting foreign influence operations to declare themselves foreign agents?
WATTS: Yes. I mean, that’s a simple answer for me. We haven’t put boundaries around it. The reason Russian active measures worked and the Soviets’ didn’t comes in three parts. One is analogue versus digital. You can just do it a lot faster in the digital space. And I shouldn’t say it didn’t work—in the analogue space, they had great successes too, but it took much longer. The other part is that the Russians have figured out, for Americans, that too much information is worse than no information. So they’ve taken the envelope and sort of opened it up, and then they’ve saturated. They’ve gone from we’ll try to control all information to I’ll bomb you with so much information you don’t know what’s true or false, which is very brilliant.
You know, the other part of why it works is because their economic—there’s enough economic openness that you can actually run a ground lever along with the virtual. So this is what Americans completely miss in all of our—we love our social media, so we keep talking about social media. The reason it has worked is because they take physical things, real-world things, facts, and then they use that and either manipulate the truths or other falsehoods to push the conspiracy. There are physical actors. Just like you mentioned, they have physical partners that are also helping them.
And if we’re going to be upset about this sort of influence, then we’ll have to look at, how do we characterize agencies like that if it is starting to break up our democracy? That’s really what is starting to happen now. We’re seeing diversions at such a level that I think we’re much closer to real breaks in the United States than people really understand at this point.
But if you have someone doing that kind of stuff, then the question will be, what if U.S. companies are doing it on behalf of the United States overseas or the U.S. is enlisting it? So it’s a two-way street. So as a policy question, it’s going to get super, super complicated, I think.
KORNBLUH: Thomas, did you want to add to that?
RID: I would just add a cautionary note. One of the things that makes this country so great and sort of still extremely attractive for the rest of the world, let’s just spell this out for a moment, is the First Amendment and the strength of the First Amendment.
So as soon as we start messing with the notion that we can declare certain forms of speech because they come from foreigners in a way that could be hostile, that are not OK anymore, you’re sort of crossing a line somewhere. I just would like to sort of, you know, call attention to that.
WATTS: Can I add to this?
KORNBLUH: Yeah. And I think one of the lines that has been drawn, though, is on foreign interference in elections, you know, because that’s different than foreign speech.
RID: Fair enough, yeah.
KORNBLUH: But I do—but you’re right to draw that.
WATTS: That’s exactly what I wanted to zero in on, is we’re talking about an attack on the United States, it was an information attack, and so in that context then you have to look at repercussions that are about that attack.
What will ultimately come out of this counterinfluence discussion is that the U.S. isn’t going to be able to do much of anything to counterinfluence. And so you’re going to have to pull a different strategic lever against an adversary. The U.S. should never repeat what was done to it against another country. I would be very upset if we hacked into thousands of people’s emails, of any country, and dumped their personal information out on the internet. I don’t want to see false journalist stories—I’ve seen some of that nonsense talked about in the news this past week: planting of news stories, you know, discrediting outlets. I’ll be very upset if our country does it.
There are some simple things we could do against Russia if we wanted to go in a counterattack, but it’s not to do their playbook back against them. It undermines our values, it hurts us as a country, it violates free speech.
And so with that, I think my answer of yes was we just suffered a major information attack that affected our elections, and we have got people in the United States that don’t believe their vote counted still. We just heard that in the previous panel. So we’ve got to come up with some sort of response.
KORNBLUH: I’m sorry, we’re going to have to move on.
We have a question right here, the lady in blue.
Q: Lilia Ramirez with Smiths Group.
It appears to me that we’re going to have to be more deliberate in our education system so that the youth are more critical thinkers. So what would you recommend we do to improve the public school system? Because I don’t believe it’s helping the situation.
WATTS: Well, I can speak to this a little bit. I mean, are we going to do public education anymore? I’m not really sure in this country, right? We’re going in some weird directions on public education. I went to a public school both, you know, in high school and for college, I went to the Military Academy. But, you know, one of the classes that we teach in the intelligence community ironically is called evaluating information sources. It’s a set curriculum, and it’s really good. And it was super helpful for me, you know, when I had it.
There are ways you can actually boil that down, you know, for a high school curriculum that I think would be super valuable. And European countries have done this in a lot of ways. I believe it’s Sweden that has done this sort of thing—helping their people think about ways to evaluate information sources without going into political biases and getting crazy with it. It would be hard to implement in the United States because of our state-level, you know, delivery of education services.
KORNBLUH: Renee, I think Italy just did it. Are the platforms doing—
WATTS: Who did?
DIRESTA: Italy. Italy just—
WATTS: Italy, yeah.
DIRESTA: —put out a curriculum specifically for this. I don’t know the specifics, but it was announced a couple of weeks ago.
KORNBLUH: I don’t know if you all remember. I remember the whole public education around subliminal advertising, which turned out studies later said was not really such a big threat, but we all were educated about it and scared by advertising for a long time.
Q: Alan Raul, Sidley Austin.
The discussion of education, and Mr. Rid’s comment on what makes America so attractive to the rest of the world—the First Amendment, free speech—really raises the issue. Maybe instead of focusing on educating the public about evaluating information, we should reemphasize teaching the values that make America great and are fundamental principles. We’ve been distracted by conspiracy theories, exaggerated news stories and so on, but what we don’t hear is promotion in the U.S. of the First Amendment and due process and, you know, first principles, the Constitution. We’ve moved away from that and from teaching civic education. Maybe that’s what we need to reemphasize in order to bring the country back to, you know, kind of a reasonable appreciation of information.
WATTS: I think Europeans are, they are doing it, the European countries are. I mean, I just think it’s absolutely unreal that in 1980—you know, I watched the Olympics, you know, against the Soviet Union, and then in Charlottesville they’re chanting Russia is our friend. And, you know, it’s just the weirdest, like, 30 years’ transition. (Laughter.)
RID: One of the most fundamental questions—and this is almost a political, philosophical discussion to be had—is about deletion. Twitter epitomizes this problem. We probably all have the same intuition that Russian bots, as well as presidents, should not be able to delete tweets, because it’s on the public record. We also have the same intuition that 16-year-olds who tweet something stupid should be able to delete tweets. How do you reconcile the two? That’s a fascinating question, and I think we should pay more attention to it.
DIRESTA: Well, I can say, as a researcher who looks at Twitter data, that one of the things we were arguing about is the terms of service of the Twitter API, which say that if a tweet is deleted you’re supposed to no longer use it. Which is why, ahead of the hearings, Twitter, you know, compiled and then released its list of accounts. But by the time it was made available to the public, and I believe to the Senate, they had already deleted all of the content.
So one of the things that we face as researchers is that the platforms have a vested interest in not sharing that information. And so we’re trying to sort out things like this. I’m saying, why do Russian bots have privacy rights? Because the justification for a right to be forgotten, and for the 16-year-old being able to delete her tweet, is sort of a personal-privacy thing. I’m saying these are fake accounts, these are manipulated accounts. It’s ludicrous to think that we are giving privacy considerations to fake people. But that is the state of the conversation as it stands today.
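The compliance constraint DiResta describes—deleted tweets must be dropped from research copies—amounts to replaying delete events against an archive. A minimal sketch with hypothetical records (the real Twitter API delivered deletions as separate compliance messages):

```python
# Sketch of the researcher's obligation under the Twitter API terms of
# service: once a tweet is deleted, stored copies must be dropped too.
# Records and IDs below are hypothetical.

archive = {
    "1001": {"user": "acct_a", "text": "original post"},
    "1002": {"user": "acct_b", "text": "amplified copy"},
    "1003": {"user": "acct_c", "text": "unrelated post"},
}

def apply_deletions(archive, deleted_ids):
    """Purge every archived tweet named in a delete event."""
    for tweet_id in deleted_ids:
        archive.pop(tweet_id, None)  # ignore IDs we never stored
    return archive

# If the account owner mass-deletes before the dataset is released,
# the compliant research copy shrinks accordingly.
apply_deletions(archive, ["1001", "1002"])
print(sorted(archive))  # → ['1003']
```

Run against an account that mass-deletes before a dataset is released, the research copy simply empties out—which is exactly the problem she raises.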
KORNBLUH: Yeah, why don’t we go here.
Q: David Ensor of the G.W. Project for Media and National Security.
Panelists, Clint in particular, but all panelists, I guess my question right now on the topic of this panel is, can we trust Facebook and Google and others to get the problem that clearly emerged in the last election under some kind of control? Or, I mean, at what stage does there need to be regulation of our social media companies in order to prevent their platforms from being used to change the results of elections?
KORNBLUH: And I guess I want to add something that I feel like we haven’t talked about. We don’t want to be battling the last war—in terms of both where the threat is coming from, which may not just be Russia, and, secondly, the different kinds of tools that’ll be used. Renee, do you want to start? What can social media platforms do and what should the government be doing?
DIRESTA: Sure. So I think with regulation you have a couple of different avenues. You have market-promoted regulation, which is where users get very, very angry and inspire the companies to change their behaviors to keep users happy. And that’s something that the media often helps push. Or you have self-regulation where the companies decide kind of as a consortium, as an industry, that this is something that’s worth their time. And then the third is government, which takes much longer. And I don’t think we’re going to see that happen by 2018, which is, you know, of course, a source of major concern for people who pay attention to this problem.
I think that, you know, we saw this with ISIS a few years back—so Russia is not the first time that the tech platforms have had a disinformation and propaganda problem. It took several years to get the tech companies to come together on the idea of creating the Global Internet Forum to Counter Terrorism. I imagine you know a little bit more about it than I do and perhaps can speak to it.
But I think that was about three years from the identification of the problem, and the request that something be done, to this organization being stood up to do something. So in many ways, I think we’re going to be dependent on media and researchers putting things out—much like what we’re seeing with some of the disinformation around the Roy Moore campaign in Alabama: hey, you need to look at this, we need to get the story out there. We need to have Twitter responding to researchers rather than attempting to diminish and discredit the work that independents are doing right now.
Maybe you want to add to that.
RID: So bots and abuse is a threat to Facebook’s business model because Facebook is ultimately about authentic, human accounts. And as a result, Facebook is trying to tackle the problem, and they’re throwing money and people and resources at the problem. And I think they’ve made some right moves. Getting a lot of bad press for it, but they’ve made the right moves.
Twitter—the opposite applies to Twitter. For Twitter, bots are not a threat; they’re actually helping Twitter’s business model, because they make it appear larger. So from Twitter, we can expect the opposite. In fact, I wouldn’t be surprised if some Twitter engineers have literally left Twitter and moved to Facebook to fix the problem at another company. So I think Twitter deserves a lot more attention and a lot more criticism right now than it is getting.
I will just highlight a thing that is technical, but I’ll put it in plain English, and I’ll use an analogy. Imagine the Economist or whatever, The New York Times decides, well, we should give our readers the ability to unpublish letters to the editor from our website. They could do that, right? Fair enough. That’s what Twitter is doing.
But Twitter is doing something else. Twitter is also saying we should give our readers the ability to unpublish letters to the editor not just from the website of The New York Times, but also from the Library of Congress. And that is just not OK. If people have chosen to put something on the public record for an effect—not necessarily the 16-year-olds—then they shouldn’t be able to remove it from a sometimes nonpublic repository, because the effect is that they make history, and in fact the news, editable.
KORNBLUH: And you’re seeing that from foreign actors.
RID: Let’s make this a little edgier. How many of the retweets or likes that the @RealDonaldTrump account receives when he tweets about Russia are actually bots versus human beings? The answer—and it’s really an uncomfortable answer—is we don’t know, and maybe Twitter doesn’t even know and couldn’t even find out, as a result of these policies.
KORNBLUH: Because of the deletions.
WATTS: I’m not—
KORNBLUH: So what are—so that’s a policy idea. What other policy ideas?
You’ve talked about public education, nutrition labels.
WATTS: Yeah, I mean, I don’t really see—in terms of regulation for policy, let’s focus just on elections and politics, whatever the standard is for advertising on any other medium should be the same in social media. I don’t know why we treat it differently. That will help at least inform the public so they can make better choices, again, about what they’re consuming, they know what an ad is and where it’s coming from or how it’s being repurposed.
The other part is, we’ve seen political groups repeatedly use Russian disinformation in social media over and over again. They know that we’re emotional, we want to win, so, you know, that’s a big part of it.
I am not going to hold my breath for the social media companies to figure it out. And I’m not going to beat them up, either. They’re there to provide a service. And they’re a business. And when bad things start to happen, I expect them to move forward and try and make corrections. They’ve all been slow.
I’m, you know, overwhelmed that terrorist videos are now being taken down. This was an issue that emerged in 2005, and it only took us 12 short years to really get on top of it. So I have very low confidence that the social media companies will save us.
With that, I would say both Google and Facebook have moved deliberately over the years to improve threat detection along with technical detection. So, I mean, there have been times where I’ve gone to social media companies and said, hey, here are thousands of disinformation accounts, and they go, yeah, we don’t care, AI, machine learning, we’ll sort it all out, they’ll just figure it out. We’ve got a big machine that’s so great, I’m much smarter than you. You know, look at me on my skateboard in the office or whatever. (Laughter.)
And so that sort of arrogance has gone away over the last 10 years or so and it has become, OK, I need to understand these threats, like terrorism or disinformation or whatever it might be, and they’ve got to pair that with the technologists. And I know that’s happened at Google and Facebook and they continue to expand that out.
At the same point, they can’t cover every issue in the world. So, like, who’s the person covering Myanmar at Facebook right now? I mean, this just emerged. So they’ve got to come up with a system where they can go out to people that study these issues and understand the problems and quickly put machine learning, AI and the technologists along with it.
And that used to be what my job was at the Combating Terrorism Center 10 years ago. You know, for terrorism we were pairing industry and research and academics with the government to come up with solutions. So they’ve got to do that a little bit better.
But ultimately, this problem comes down to leadership. So, you know, our country has to decide, and it doesn’t have to be elected leaders. It can be civil society. You know, what do we want? What do we want our world to be like? Just imagine this in 2020. Everyone adopts, every political campaign adopts the Russian playbook and uses it on social media. I’m not talking about a foreign influence operation. I’m talking about every country in the world saying, you know what I want to happen in America? The following: boom, bots, ads, doing it on scale. Now add domestic political parties onto it, every candidate running one of those. Guess who’s going to lose?
If you think you’re going to be able to run for an election as a person who’s, like, a schoolteacher or you’ve got a $25,000-a-year job and you’re going to run for elected office in the United States against a bot machine and political campaigns and, you know, parties, political parties? You’re insane. This will quickly become just those that have the resources, those that have the time to manipulate and shift the information the way they want.
I know we talk about Russia a lot, and I talk about it a lot. But what I’m more worried about is: is this the world you want to live in, where it’s just a cacophony of noise coming from social media? Because I think a lot of Americans will just walk away—they’ll be apathetic and just say, I don’t even want to participate in this.
KORNBLUH: There’s a woman in the back.
Q: Thank you. Alina Polyakova, Brookings Institution.
Clint, your idea on labeling, so this has been discussed a lot in all these various working groups. But my question to you is, this is kind of the Big Mac theory, right? That if you tell somebody something is bad for them, it’s actually going to change their behavior. There’s no evidence for this that’s compelling when it comes to actual nutritional labeling. So people are not eating less Big Macs basically because they know there’s 2,000 calories in them.
WATTS: Actually, they are eating less Big Macs.
Q: No, well—
WATTS: I mean, people are making—the decision is on the consumer. That’s what I want.
Q: But this is what I want to—
WATTS: I don’t care what they eat.
Q: Can I challenge you on this? Because the point I’m making is that the consumer doesn’t seem to respond to labeling. And second of all, in the nutritional world, there are federal agencies that regulate not just nutritional content, but also products that can be hazardous to human life. So wouldn’t we need a similar agency to actually implement and punish—
WATTS: No government agency should do it.
Q: OK, so there’s no government agency, then why would labeling actually work when consumers don’t respond to labeling when it comes to other products?
WATTS: But they do. Let’s go to Amazon. You’re mixing a lot of different, you know, analogies together. Let’s go to Amazon in terms of rating systems, right? Does anyone buy the one-star-rated product with two reviews? Generally, no, right? So it’s a system that comes up over time. Will someone buy the product with one star and two reviews? Absolutely because it’s $5, right? So someone is going to buy it.
I’m not trying to win over the whole world, but I want people to have responsibility for the decisions they make about the information that they consume. And so we have told you, this outlet puts out 70 percent false information, 20 percent manipulated truth, 10 percent truth. This is where that outlet is based at. Did you know that this is a state-sponsored outlet or it’s an outlet that’s based in Bulgaria that suddenly popped up six months ago? Are you aware of that? That’s up to you if you still want to read it because you think they’re informed. Give them that information, it will chip away.
I have no doubt about it. If you rate that—look at Rotten Tomatoes. Rotten Tomatoes is another rating site where this has happened. Now actors are complaining that if they get bad Rotten Tomatoes reviews before anything comes out, you know they’re not getting a chance. It’s reversed almost on itself.
KORNBLUH: So I want to unpack what you’re saying, though, a little bit because you’re—
WATTS: It’s not just about putting a nutrition label on it. It’s about telling people here’s what the source has been putting out, reporting versus opinion, fact versus falsehoods, and this is a little bit about that outlet, now you make your own decision. That’s what I want.
KORNBLUH: So you’re talking about a couple of different things. One is more transparency—so you actually know some context about the source. And obviously, that’s important because, otherwise, people wouldn’t be trying to be somebody they’re not. So that’s part of it.
WATTS: Well, let’s go back to this. So when you buy a newspaper, OK, in the analogue days, do you know something about the newspaper when you pick it up? Yes, you have it physically in your hand. The reason you get duped by news that’s shared with you is because, who does it come from? Your family and your friends, people you trust. So you take the trust of your family and your friends on your social media feed over the actual outlet that’s out there. You’re not really assessing the outlet, you’re assessing the story.
I want them to assess the outlet, not just what they’re getting from their family and friends. I want them to go, OK, I know my family and friends have strong opinions and they look at good news sources, but I’m also going to assess what the information source is that it’s coming from. That’s what I’m seeking.
KORNBLUH: And then the other piece, the nutrition labels, I mean, it seems like there is some efforts by the platforms to work with these fact checkers, like Snopes and others, and, thereby, get some assessment about whether or not an outlet tends to produce disinformation or fake news, and then that can be fed back in. Slow process relying on outsiders, but still—
DIRESTA: Yeah, so—
WATTS: Can I ask that to Renee?
This is the new effort, right, that they’re trying to do?
DIRESTA: But that’s actually an effort that seems to have been largely unsuccessful—there were some recent stories about that. So there’s a couple of components.
So one thing is the platforms like to keep costs down, and their business is to automate things. Any time you have a human component involved in a system—which in Facebook’s case of flagging fake news relied in part on people reporting things—what you really get is these brigading and mass-reporting wars. So you have people deciding, I don’t like this thing; you actually have pages calling their members to action, saying you’ve got to go report this story, it says something unfavorable about us. So it turns into this mass nightmare of one army of opinions versus another army of opinions.
And I want to say, on Amazon—this hasn’t been written about quite as widely, but on Amazon the battle for reviews is actually kind of the new SEO, because Amazon shapes consumption. Amazon’s search bar—
KORNBLUH: Why don’t you spell out what SEO is.
DIRESTA: Oh, search engine optimization. Sorry. It’s the little tricks you could do to make your website rank on the first page of Google’s search results. Now we see it all over Amazon, because if I type in blender, I’m likely to pick something from the first page of Amazon’s search results. So doing everything I can to get my blender to that first page of results is potentially millions of dollars of revenue, or not.
So Amazon has a very serious review manipulation problem that they are somewhat aware of, but also relying on things like algorithms to try to figure—to identify instances of brigading where people will say, OK, you know, I’m going to send an email out to my mailing list asking everyone to go leave a five-star review on my blender. This is a big problem.
So this is where we get to the—if there is a crowdsourced element of it, it is being manipulated. And this is—it’s very interesting because we all thought that crowdsourcing was going to be this magical way where people would participate and we would—we would take the wisdom of the crowd and we would turn that into really surfacing the best content, the best products, the things that you really needed to see. And it’s—this is the problem with algorithmic manipulation, is it’s manufacturing consensus, it’s gathering critical masses of people together in a manipulative way that fundamentally shapes, creates a false notion of how popular something is, whether that’s a product or a story or a person.
A lot of these bot accounts, these fake accounts, have hundreds of thousands of followers. They look very legitimate, and so people don’t dig in. So I think—
KORNBLUH: I’m going to have to—
DIRESTA: I know, sorry. So what I was going to say was the platforms really do have to, at this point, take it on themselves to say we are going to have an opinion, we are going to hire internal people, and we cannot relegate this task to crowdsourcing and the assumption that—
WATTS: By the way, I—
DIRESTA: —people are going to—
WATTS: —don’t want it to be crowdsourced, just to be clear. Information, consumer driven.
KORNBLUH: I’m being told that one of the Council’s sacrosanct rules is that we have to end on time.
DIRESTA: End on time.
KORNBLUH: So I want to make sure that we honor that. But, you know, I think we’ve fleshed out a lot of the challenges here, and hopefully we’ll continue the conversation about how to move forward because we obviously have to come up with some solutions.
Thank you all very, very much. (Applause.)
This is the third session of the Hacked Elections, Online Influence Operations, and the Threat to Democracy symposium.
This panel examines what foreign policy responses are at the United States’ disposal to respond to Russia’s interference in the 2016 U.S. election, and how it could learn from countries that have faced a similar threat.
FARKAS: OK. Good morning. Good late morning, everyone. This is—I’m going to venture to be bold and say this is going to be the most exciting panel of the day—(laughter)—because we’re all about solving the problems that you guys have been hearing about this morning.
And I’m very happy to have an international panel. Unfortunately, it’s shrunk by one individual because Professor Angela Stent is ill, so she couldn’t join us. I will try to channel her as best I can, with my own personal twist. But we, again, have an international panel that I’m very excited about because they have thought long and hard about how to counter disinformation, especially in the context of dealing with Russia.
So, to my right, to my immediate right, I have Ambassador René Nyberg. He is a very distinguished former Finnish ambassador to Russia, which is of course relevant here, and former chief executive of East Consulting. He’s also served as Finland’s ambassador to Germany—for four years, actually, from 2004 to 2008. He also was head of Finland’s delegation to the OSCE, which obviously also deals a lot with these types of issues, misinformation and ethnic conflict, peace operations, et cetera. He was—and then he was, of course, also Finland’s ambassador to Austria. He has a long, distinguished career. I won’t list all of the other things that he’s done and all of the other accolades he’s received, but he is well-poised to help us think about how we in the United States and other countries can deal with the Russian challenge in particular.
To his right, we have Dr. Ashley—Professor Ashley Deeks. She has a long career now working in the legal area with regard to cybersecurity and other issues. She’s a professor of law and a senior fellow at the University of Virginia’s Law School at their Center for National Security Law, and that’s—that is her area of emphasis. She’s focused on international law, national security, intelligence, and laws of war. She also served as an academic fellow at the Columbia Law School for two years. She’s served in the government—so she has the academic and the government expertise—in the Department of State, in the Office of (the) Legal Adviser for a year, 2007 to 2008. And, of course, she’s also a CFR alum. She was an international affairs fellow in—let’s see, in—
DEEKS: That was 2007-2008.
FARKAS: 2007—OK, so she did the State Department work there. She’s extensively published, again, in the area of international legal security.
So what I would like to do is essentially ask the panelists to give us a little bit of prescription. We heard a lot this morning about the threats that we face. You both know very well from firsthand experience and from research what the threats are. Can you—I would like to hear from each of you what your prescriptions are, what you think we should do, what the United States should do as a matter of foreign policy. And then I will try to add to the discussion by talking a little bit about how I think the Russian government might respond to your prescriptions.
I will start by saying my perspective on the Russian foreign policy—I’ll lay it out a little bit just so that there’s a framework through which you can—or a lens through which you might listen to their prescriptions. From my perspective, the Russian government is not likely to respond to a soft request for negotiation on the legal front. They are not likely to respond to anything except for a real strong, firm policy. And so I think that it’s important to note that, from the Russian perspective, if they think that what they’re doing is bringing them success, they will continue doing what they’re doing until essentially we force them—and I don’t mean by force, but we essentially have to force them to recognize that the price is too high for their existing information operations against the United States and our allies, and that they need to change course somehow.
And, obviously, there are a number of ways we can do that. We’ll hear about at least two of them from the panel here. And then, obviously, we welcome your input with regard to other ideas you might have. But I think I wanted to just lay that out as a foundation, that from the—from the perspective of dealing with Russia it’s really important to raise the price for them, whether it’s economic, political, or otherwise. To raise the price is, I think, what we’re aiming to achieve.
So, if I could, Ambassador, start with you, please give us your best advice.
NYBERG: Well, thank you very much.
I’m a little bit reluctant to come with advice concerning the United States and American policy. But I can talk politics, and I can talk—I can talk Russia. I can—I can reflect on what is going on and what they’re doing, how they’re doing that.
But let me start with a—with a political comment, which is—which doesn’t have anything to do with cyber. And that is that, if you think about the presidential elections in the United States a year ago, there were three people who were stunned by the outcome. The first one was Donald Trump. He didn’t expect to win. The second one who was stunned was Hillary Clinton, who expected to win. And the third one was Vladimir Putin, who was absolutely sure that the devil he knows would win—that is, Hillary. So I think this is the—I mean, this is really basic, and then cyber comes after that. And so it’s not all—it’s not a technical issue. There’s a lot of politics there.
Now, I’m reluctant to speak about the Americans, I mean, what you should do. But if I’m looking at and listening to the last—the panel before us, I very much agree that it’s a question of resilience. It is a question of education. It is a question of understanding what all of this is about.
Now, I had dinner the other night with a New York Times correspondent, and I surprised him by telling him that a couple of days back quite a famous German prize, the Marion Dönhoff Prize—she was a famous German journalist who died about 10 years ago—was awarded to The New York Times. I’ve never, ever heard that a German foundation, which usually awards prizes to individual journalists, awards it to a newspaper like The New York Times. And this actually reflects the interest in countering everything which is false, that you have serious newspapers, you have serious news sources.
And it is a fact that the Europeans do read more newspapers. Public education in this sense is geared to making people more immune—not totally, but more immune—to false and fake news. And actually, there was an EU study about the level of superstition in different countries, and it turned out that my country, Finland, had the lowest level of people who are superstitious. (Laughter.)
FARKAS: OK. That’s it?
NYBERG: Yeah. (Laughter.)
NYBERG: For a beginning.
FARKAS: OK, all right. That’s great.
Well, I think—but I think that’s wonderful, and I think resistance and resilience is the most important thing, and we’ll get back to that a little bit in the Q&A. Thank you very much, Ambassador.
DEEKS: Great. So thanks very much.
I’m going to take a different tack. As a lawyer, I’m going to be sort of technical here and offer a number of buckets of tools, I think, that the—that the United States and its allies have to take little bites at this problem. I think the panel—the last panel was relatively pessimistic about the success on this. I’m not going to be much more wildly optimistic than they were, but I will just put on the table what I think the tools are that the U.S. government and its allies, especially its NATO allies, have.
So the first bucket would be, are there legal tools that we can employ? And I think the answer there is yes, there might be two different kinds of legal tools. One is international law and the other is domestic law.
So let me just say a little bit about international law. There have been lots of discussions going on for the past five or six years at the U.N. in something called the Group of Government(al) Experts that’s asking not just cyber questions related to elections or election interference, but broader cyber questions about how we can think about what’s appropriate behavior as between countries in cyberspace. And they were making some good progress up until 2017. And by that I mean there was at least a general consensus that the basic principles of international law embodied in things like the U.N. Charter were relevant to cyberspace. So, that is, if something happened in cyberspace that produced the equivalent of an armed attack, well, then the victim state had a right of self-defense. That’s sort of a basic principle of international law dealing with security.
What happened in 2017 was that this basic consensus that the well-understood norms of international law applied to cyber broke down. The Cubans were the most vocal cause of the breakdown, but I think it’s well understood that the Russians and the Chinese also helped subvert this consensus for entirely political reasons, so that there is not now even a kind of formal international consensus that some of these basic norms apply.
OK. So what, then, is left for international law to do? One thing might be to try to engage the Russians very directly in a kind of bilateral discussion about certain kinds of behavior that takes place in cyberspace. And I think our moderator, Evelyn, has suggested that is going to be quite challenging. Maybe we could talk a little bit more about whether that could ever happen. But there is a model for that, and that model is between the U.S. and the Chinese government related to economic espionage, most of which was occurring in cyberspace. So it was a real problem, it was something we were skeptical that the Chinese would agree to, and yet we got into a position where there was a bilateral MOU with an adversary, basically, agreeing that a particular norm should attach to our behavior, especially as it applied to cyberspace. So that might be one potential model.
There’s another model, which is can we as likeminded states—that is, sitting down with our NATO allies, for instance—try to articulate in some level of detail what norms we think are acceptable? There was some discussion on the last panel about values. What do we stand for? What are our values? We could come together as NATO countries and identify what our values are as reflected in behavior in cyberspace. So that would be another possibility.
There is a domestic law angle, too, and we’ve already seen some of this happen. And what I’m talking about here is indictments of people overseas, including foreign officials, for activities they’ve engaged in in cyberspace, including hacking. There have been indictments of people from the People’s Liberation Army. That might have been the thing that motivated the Chinese to enter into an MOU with us. There have been indictments of Iranian officials. And there may well be indictments of—there have been news stories that at least six Russian officials who were involved in the election hacking are being investigated and potentially going to be indicted by the United States. Is it likely that these officials are going to show up at JFK Airport so that we can arrest them and prosecute them? Not likely. On the other hand, it definitely cabins travel for people. It has proven very—it complicates their lives, I think, in important ways. And so I would say watch that space for whether the Justice Department decides to indict named Russian officials, six, probably more, for activities like this.
A couple of other buckets just to mention quickly. One is sanctions. We have seen sanctions imposed by the Obama administration. It’s something Congress has been very interested in, and it has enacted laws related to sanctions imposed on individuals associated with the election interference. That is another way to make people feel pain in their pocketbooks, through their business opportunities, and so on.
A fourth category, intelligence sharing. We and our NATO allies—Germany, Italy, the Brits, lots of these countries, Montenegro—have experienced election interference. So there are, I think, important ways in which we can share with our allies and they can share with us what are we seeing, what tools are being used, what are sort of the new iterations of tools we’ve seen before, and come together and think a little bit more coherently about how we can respond, both in cyber and in information operations. I understand that NATO has created a Center of Excellence related to counterpropaganda. So that is another kind of positive place where we can put our heads together and figure out how to manage this.
And then the fifth bucket is countermeasures. So what I’ve described is all sort of fine and good and well-recognized, and at least public. Are there things that we can and should be doing less publicly? So, to take a non-Russia example, North Korea engaged in a hack against Sony. Reports suggest that the various media companies in North Korea were forced offline for some period of time. I have no internal knowledge about what happened, but you might speculate that the United States made it difficult for those media companies to get online for some period of time. So there’s this other bucket of activities, a toolkit that the U.S. government could be and probably is thinking about using.
FARKAS: Well, thank you very much to both of you. I think there’s a lot in there for us to process.
I think I want to start with the United States and United States resilience. And it does relate to the legal realm, because in some—in some sense we need to be legally resilient as well as being resilient as a culture to false information and to cyber operations—and ready to also take action, but I’ll get to that in a second. But in terms of building resilience, one could argue, well, it’s easy in the Swedish context because it’s a small country, you all more or less can find some common ground on—
NYBERG: You mean the Finnish case? (Laughter.)
FARKAS: Sorry, what did I say? I’m so sorry.
NYBERG: It’s almost the same. (Laughter.)
FARKAS: That was a terrible mistake. I’m so sorry. On the—on the Finnish case.
But when we’re talking about the United States today, we have a very divided political arena. And, unfortunately, I would argue—although I haven’t seen any data on this—I have seen reporting indicating that our civic literacy, if you will, is lower than it certainly should be, and perhaps lower than it has been in the recent past. So you’re dealing with a landscape that is perhaps not similar enough to Finland’s landscape. Can you comment a little bit about what lessons you might draw from your country’s experience that could be relevant here? And what are the steps we need to take, even if some of them are more long term? Like, obviously, improving civic literacy would take action along the educational front, which would take some time if you’re starting with primary education.
NYBERG: I liked very much the comments of Clint Watts at the second panel, where he made a couple of important principal points. The first is that the answer, already during the Soviet period, was that we do not have a propaganda ministry. We don’t use the same playbook back. We don’t act this way.
And it’s also important—and this is very much our case—that this should not be anti-Russian. It’s a larger problem. And we should also remember, as Thomas Rid said—this is, of course, pre-social media—that even hacking or breaking codes, stealing, reading other people’s mail, is not a new phenomenon. It’s done on a different scale and with different methods today.
But I’m a Russia hand, and I’m actually reading a book now—transatlantic flights are good for big books—I’m reading Stephen Kotkin’s “Stalin: Waiting for Hitler.” I have 700 pages to go. (Laughter.) And he has a wonderful example, which is actually worth quoting, and that was the famine which ravaged Ukraine and even more Kazakhstan in the 1930s. In 1932, the Soviet media was not allowed to use the word “famine,” “golod” (ph). But they did publish things about famine, saying that there is famine in Poland. The starvation in Poland—it’s not a crisis, it’s a catastrophe. In Czechoslovakia, the villages are dying. In China, hunger despite a good harvest. And in the United States, bread lines and poverty. So there is nothing new about this.
But the point here is that Soviet propaganda had very low credibility. Its effects in the West were, at the end, no effect at all. But Soviet propaganda also failed to convince its own population in the late period of the Soviet Union. The situation with Russian propaganda is different. They’re much smarter.
And this is, of course, the challenge. I’m not an expert on cyber, on the technical part, but an interesting thing is that there are hybrid operations where information operations come in. We had a case which was quite exceptional, quite sensational, in the fall of 2015. All of a sudden the Russians let third-country nationals, without documents, cross over to Norway. Five thousand people crossed over. After that, 1,000 crossed over to Finland. The only inference you can draw about this is that they just could not resist playing with the scare of the migration crisis in Europe.
FARKAS: I think your foreign minister at the time called it the same term that we had our military officials calling it, the weaponization of refugees, yeah.
NYBERG: Correct. It was absolutely that. But it was also accompanied by a media campaign. It died down. And maybe the only really effective argument we had with the Russians was asking them: Is this the view you have about your border, that you allow criminal smuggler (“Schlepper”) organizations to operate on the Russian border? And this border is the best border. So we’re facing all kinds of very different things. And this is the reason why the Finnish government set up something which was mentioned in the earlier panel, the European Center of Excellence for Countering Hybrid Threats. The government is also working with Harvard University on this. So it is educating our people, informing the people, and also trying to analyze what is going on, but not playing the playbook back.
FARKAS: That is an interesting example, which we don’t have time to get into in too much detail, but I do think it’s interesting to note at the same time that this was occurring—that these refugees were basically being told to go to the border and cross into these Scandinavian countries—we had the big refugee flows, which were already a problem for the Europeans, coming from Syria, and from Afghanistan, Africa, et cetera. So the Russian—the FSB was essentially, as you said, opportunistic. And I think that’s important to note, again, because they will be opportunistic.
However, in this case, it didn’t actually work. And in part it didn’t work—many of you probably don’t even know about it. And in essence, that was also a failure for them, because had there been a greater hue and cry in your country, and then of course regionally and internationally, the Russians may well have succeeded in obtaining their objectives, which had nothing to do with the refugees, per se. It had more to do with having us fight one another, European countries fighting one another, fighting Russia, fighting across the Atlantic.
NYBERG: And passing the message that we can harm you.
FARKAS: Right. And in essence—in essence some—
NYBERG: It’s kind of—kind of flagging that this is—watch out, we can harm you. But there’s one point which is very important, and this is the borders. There’s no country that can control a border without a partner. Think about Russia and the Chinese border. And I won’t speak about the Mexican border.
FARKAS: Thank you. (Laughter.)
NYBERG: Think about it—there’s no way you can control a border without a partner. And violating this and playing with this is not something which is forgotten. So they didn’t achieve their aim. On the contrary, it left a very bad taste.
FARKAS: Yeah. Yeah. So, Ashley, if you could comment on the first question, and then also, obviously, the legal—resilience from a legal perspective. And also, earlier in the—in the green room you mentioned sort of future threats we might face, like tampering with information. If you—if you could—I hope I’m not asking you to go too far forward, looking at what we might face in the future and how we can respond, from a legal perspective.
DEEKS: Sure. So on the resilience point, I mean, I guess one way to think about a resort to NATO and sort of the likeminded democracies there is as a form of resilience, right? It’s a form of employing existing tools. And we understand each other very well. We know what each other’s capacities are. We have all sorts of fora that currently exist. And we’re all experiencing similar, though not identical, challenges. And so that might be seen as one form of resilience. You might also even cast the use of existing tools, such as the ability to impose sanctions or the ability to indict for hacking, as a form of saying, look, this might seem new, but we know how to deal with some of these challenges. And so we’re going to use what we have right now, unless and until we can figure out a better way to do it.
We did talk a little bit. I was—I was in part stimulated by a question from the last panel, which is we’re thinking about the most recent past threat, challenge, the disruption to the elections, the information operations, the abuse of Twitter and so on. But what’s next? What else should we be worried about? And one thing that I’ve been thinking about some in my own scholarship is the misuse of things like fake videos, fake audio. Machine learning is making this increasingly realistic, easier to do. And I think it’s easy to imagine how a country like Russia, that wanted to stimulate a military action by the United States against another country, could use fake video to do it.
So, for example, it creates a fake video of Kim Jong-un saying, OK, time to move the missiles to the launch pad, and feeds that into a feed that our chairman of the Joint Chiefs receives. And the chairman says, oh my God. Here it goes. And you sort of send a country down to war against a country that is not actually about to attack you, because you have used these increasingly sophisticated tools, such as fake video to do it.
NYBERG: It’s been surprisingly successful, this source-checking thing. Think about the organization called Bellingcat. People who are not providing information, but checking facts, and how they’ve been pinpointing, for example, the downing of the Malaysian aircraft over Ukraine. That is one of the most damaging examples of what has happened in Ukraine, which is a war of attrition, as we know. So they’ve been doing things where these kinds of videos can be very quickly identified as fakes, using geo-positioning and all that—very sophisticated tools. And this is an NGO. It’s not a government job. I know there are a couple of Finnish nationals who are working there. But Bellingcat is really something. I mean, it’s one of those things where you realize that society can act on its own. Civil society doesn’t just take it.
FARKAS: Yeah, I was actually going to make exactly that point. I mean, I think there is a role here for civil society, for active and interested citizens who are doing what the Bellingcat folks are doing, which is essentially contributing to the investigations into the shootdown of the Malaysian airliner over Ukraine in 2014. And, interestingly, that’s another case where international law is being used, because the investigation has implicated Russian actors—intelligence officers, who will probably be facing some sort of legal retaliation, at least if you believe the Dutch government, which had the largest number of citizens who lost their lives in that attack.
If I could—we’re about to run out of time, but I do think it’s important to think about defense and the retaliation you mentioned, you know, actions that we could take to retaliate. Clearly resilience is an important part of the equation. The only thing I worry about is when we’re talking about a country like Russia, unlike China, which has still demonstrated that it believes in the international order; China is not acting counter to it, not trying to remake the rules of the international order or to challenge the existing frameworks.
In the case of Russia, there’s a real danger that we can’t turn them back from their very disruptive activity and bring them into line with legal norms until, again, the cost gets sufficiently high. So until we threaten them, which brings with it some kind of danger for us. And what I mean by a threat is, you know, we threaten to take out their military cyber networks, or something like that. Or we take them offline, you know, similar to the example you gave, Ashley, of North Korea. So the danger with Russia is that we may have to bring the situation home to them in a way that makes it dangerous and increases the risk of escalation.
So, with that rather negative and alarming comment, I would like to open it for discussion. There are a couple of ground rules. First of all, everything has been on the record—and I should have reminded you guys as well. But for the members, everything is on the record. We would love your questions. We ask you to raise your hand, and then when the microphone comes to you, speak into it and give your name and affiliation.
Q: Larry Garber, Digital Mobilizations.
I’d like to pursue the international law piece, because that’s intrigued me. Do you think it’s possible, with the Russians, to develop norms, particularly with respect to election intrusion, that would be comfortable both for us and for them? And particularly, isn’t that, in a sense, what Putin is looking for—norms that would prevent us from undermining his domestic politics? I mean, whatever we may think of it, he and his colleagues think of what we do as, you know, seeking to undermine his elections and his presidency. So is there really a common ground that we can get to in terms of international norms, or would we be just playing into the Russian game? And, in that regard, is the alternative that you proposed, you know, sort of likeminded countries getting together, a more productive approach?
FARKAS: I’m going to channel Angela Stent, and take that prerogative, and then, if I could, turn it over to Ashley first for the legal and then if the ambassador has anything to add.
So I think the problem with Putin is—I agree with you—that he sees us as a threat to his government and to other governments, like the Assad regime. He sees us as meddling internally, all of the things that we do to fund Voice of America, Radio Free Europe, you know, across the board, he sees as internal, you know, meddling. And so therefore, he can justify what he did to our elections. I don’t agree with that perspective, but I understand it.
The problem is, while that might make you think that there’s room for negotiation, it’s very difficult with Putin because we’re at a point where he really doesn’t trust us, and I don’t know—and, again, I would love your thoughts on who he would trust, or which institution, or which constellation of actors he would trust. Because when it comes to the United States, if we’re in the lead that, to Putin, is not going to—it’s not going to deter him, and it’s not going to put him at ease, if you will. Ashley.
DEEKS: So it’s a good question. Obviously when you’re sitting at a table negotiating with somebody, you have to think not just about what you want from them, but what you’re willing to give up as well. And I don’t think this is true recently, but historically the U.S. did interfere with other countries’ elections. There’s a long history of that. So that would drive some of the skepticism that Evelyn just mentioned. I am not a Sovietologist or a Russia expert, but I would think that there could potentially be common ground on a very narrow and concrete norm that said, for example, we will not interfere with the operation of election machines to switch the votes—the actual hardware. We will not tamper with those things.
Would we be willing to agree to a norm that says we will not try to influence any other state’s election ever? No. Right? I mean, because that is a very broad norm that would include not funding NGOs that were trying to assist in democratic elections and so on. I think it would have to be a very narrow norm and it would have to be a specific norm. But if the norm were: We will not literally hack foreign election machines, I think we could get on board with it. And I don’t know, but it’s possible that the Russians might be willing to get on board.
If not, then how does the likeminded conversation go? Well, you could develop a slightly broader set of norms, I think, that people would agree with, that might include, for example, not interfering with narrowly defined critical infrastructure during peacetime. But then the question is, well, OK, that’s nice. You and NATO countries have all agreed on this norm. How does that impact the Russians? Will that affect their behavior at all?
And I guess what you have to—I think maybe you have to take a long-term view here, which is if we think that that is a proper norm to develop generally, then we should try to start developing it with our allies. And maybe at some point we have 100 countries signed onto it. And at some point on the margins that could affect Russian behavior. It could also affect how others react to Russians when they engage in that. But it is a much softer move.
Finally, I’d just say, in the bilateral context, there will be suspicions on both sides because questions of attribution are very hard in cyberspace, and questions about enforcement are hard. So that is another—those are two other things driving, I think, a difficulty in concluding a narrow bilateral agreement.
FARKAS: Yeah, sorry, that’s what I was interjecting to say—enforcement, of course, would be a problem. Yeah.
Mr. Ambassador.
NYBERG: Let me take this to another level and say that this all boils down to Ukraine. The relations between the United States and Russia, and the relations between the West—the European Union—and Russia, are at their lowest point since the Cold War. So it all boils down to the war in Ukraine. It’s not a crisis. It’s a war of attrition going on. It’s Crimea. It is the catastrophe of Donbass and all of that. And before you see any progress on that, I cannot imagine anything meaningful being discussed. Look at arms control. We all know that it’s the INF and missile defense, and all of that. Nothing moves, and will not move, before we have a new situation.
Then the question is, is this regime in Russia—is Mr. Putin, who will be reelected in March 2018, is he able to change his policy on Ukraine? This is—this is the biggest question I have, because it is definitely his biggest mistake that he went after Ukraine the way he went. And one of the effects is that he created a Ukrainian national identity. And what you have is a real country. Ukraine’s a real country. It’s in great difficult days, but it’s a real country. So I would—I would say that cool your heels. (Laughter.) There’s not much you can do before the real issue is solved. And the only way to solve it, is to start really negotiating.
FARKAS: Yeah. And I think that goes to the point that I was trying to make about it’s very difficult with this current government, given the fact that they really have an adversarial perspective on the United States and the West. And certainly, Ukraine is at the crux of our disagreement with Russia. I would say our arms control issues are separate, but if you can’t negotiate with them about Ukraine, and if we have this—if we’re at loggerheads with the Russian government, it’s hard to imagine negotiating cyber agreements.
However, I will say that the possibility exists, if you can get significant other actors involved. So, for example, China, India—China, in particular, is interesting, because for the Russians who are more strategically minded, who can see beyond Putin, they recognize that the challenge for them will be managing their relationship with China. And so if they can eventually come together with the United States and Europe in deterring, you know, very bad cyber acts from the Chinese, that would be in their long-term strategic interest. We just have to get through these remaining Putin years—(laughs)—it seems.
OK, any other questions? From the front here, Ashley. I mean, Audrey. Sorry. (Laughs.)
Q: Hi. I’m Audrey Kurth Cronin from American University.
This panel is speaking very much in state-centric, national security kind of language, which is what we do when we talk about cyber versus social media or internet. And yet, this is all part of the same problem. So how do you deal state to state when you’ve got a whole range of actors in gray areas—everything from Chinese so-called volunteers, to Russian private companies that are behaving as trolls, to individuals in social media, to this whole range of actors that the state either doesn’t completely control or doesn’t completely admit to controlling?
FARKAS: That’s a great question. And if I could sort of add onto that question for Ashley first, and then the ambassador, because what we see in the Twitter world is not only do you have bad Twitter actors—non-state and state actors—but then quite possibly states countering them. And so the issue of now whether it’s bad to have a fake bot gets polluted by the fact that a good state could have a good bot. So, Ashley, if you could address that from the legal perspective, and then the ambassador.
DEEKS: So I guess two thoughts. One is, obviously, the closer that these private actors are to the government, the more we’re thinking about them as just proxies for the government. You noted there are potentially some groups of private individuals who are coincidentally acting in the same way that their country might wish them to act if they weren’t actually a proxy.
FARKAS: Or terrorists. I think she had in mind the terrorists.
DEEKS: Yeah. So I guess the way the U.S. government has been thinking about it, as evidenced both by its statements and its indictments, is that it’s happy to indict both, right? It’s happy to indict state officials, and it’s happy to indict private actors who are hacking. And I’ve seen some indictments where it seems like both are captured within the same indictment. So we have domestic criminal tools to punish things that cross the line into criminal acts.
So then I guess there’s this other complicated question about the United States itself as a source of bots, including bots that then go and operate in other countries. So if we say to another country we want you to stop bots emanating from your territory and affecting things here, they’d say, well, we’d like you to do the same thing in response. What would that require? Well, my understanding is that would basically require the National Security Agency to be inside all of our networks, monitoring what’s leaving our networks, which is incredibly problematic, and civil libertarians hate that idea.
So I think that’s maybe in part why we, as an interagency, have sort of gotten stalled out on thinking about how to manage that problem.
NYBERG: These privates are probably either proxies, loonies, or terrorists. Those are the three categories that come to my mind, at least. (Laughter.) You have to fight them differently. And one of the things is you have to fight anonymity. I very much agree with what was said in the panel before us about the (eggheads ?) and others.
So I don’t have any specific things. The idea of censorship is, of course, a touchy issue. And a government and a state has to protect itself, and it has the right to defend itself. These are the legal issues which are burdensome: how do you do that?
For example, in Germany, there’s an open debate today about hack-backing. And Ashley referred to the Sony case as an example. But in Germany, the question is, who has the right to hack back? Most Western governments and their agencies have the physical and technical means of doing that. But you also have privates who are quite powerful. So do you want a private actor—a famous big player like Facebook or Google—to start its own private war by hacking back?
But in Germany, this is now a question of, on what premises, who can allow, who can give the right to hack back? The thinking is very much state thinking—that it has to be the security services or the armed forces, which is almost the same thing as business.
FARKAS: Thank you.
Let’s go to the back. There’s a hand in the center there.
Q: Hi. Elmira Bayrasli from Foreign Policy Interrupted.
Just to dovetail on that last question—the ambassador just mentioned, does Google have the right to hack back—and moving away from, you know, Russia and hacking, the reality is that Google and Facebook are essentially taking foreign policy decisions. If you take a look at what Facebook has done, whether it’s in Israel or Myanmar, or with Free Basics in India or in Africa, I mean, they are taking foreign policy decisions, whether they know it or not. And I personally think that they’re not aware of it. When Google went into China in 2006, that had foreign policy implications.
Ashley, I’d love for you to talk a little bit about, how is the U.S. government engaging with Facebook and Google, which are clearly recognizable actors, and how do we actually make these businesses realize that they are—they have foreign policies and that they need to be engaging on the policy spectrum?
DEEKS: So it’s a fantastic question and one I don’t have a good answer to. I’ve not been in government now for about five or six years, so I don’t have sort of great visibility into it. There was a lot of reporting at the tail end of the Obama administration about lots of trips from the National Security Council out to Silicon Valley to have conversations. I assume some of those conversations touched on this, although I think they were more focused on how do we suppress ISIS’ use of Twitter than about the foreign policy implications.
But it’s a great question. A couple of years ago, I heard an individual who worked at Facebook describing the types of legal assistance requests they received from foreign governments. He was using India, I think, as an example. And he said, well, we get these requests and we try to figure out if we should comply with them. And one of the things we think about is, is India’s system consistent with the International Covenant on Civil and Political Rights? And I thought, what? That’s the assessment you’re making? That is an assessment we would expect U.S. government officials in, you know, the Bureau of Democracy, Human Rights, and Labor or the Legal Adviser’s Office to be making; and yet, these companies are basically forced to make those kinds of analyses. And that’s a sort of international law analysis, not just a foreign policy analysis.
But I think the question really surfaces an important point. Maybe 30 years ago, most of the international exchanges about foreign policy were government to government. Now, because of the size of these companies, the kinds of decisions they’re making on the foreign policy stage carry very significant implications, not just for the company and the receiving country, but also for all of us, for U.S. foreign policy as well.
NYBERG: The only thing I can say is that foreign policy is mostly the privilege of governments, but it’s not their monopoly anymore, in the sense that you can’t do anything about it, especially in our free societies and civic societies. I mean, you’ll break rules if you start using force. But otherwise, it’s very difficult to do.
I defer to what Ashley said. I can’t—I can’t imagine that the government could, I mean, could interfere with nongovernmental organizations, which can be very big and very powerful in this sense.
FARKAS: I would venture to say, just based on my conversations with executives from—I won’t name the companies—but anyway, with companies like Google, that some of their executives have for a long time had a sense that they are operating in foreign policy waters, but it was beneficial to them not to discuss this publicly. Now they’re forced to discuss it more publicly, and they cannot act on a sort of case-by-case basis anymore.
And that’s the problem, because, as we know, even the U.S. government has trouble being consistent across the board when it comes to national security and the other threats we have to deal with. So you can only imagine what a corporation is facing when just a few people inside it understand how the U.S. government would approach these issues.
So I think that is an excellent question that you should let David Sanger and Senator Burr struggle with, because it really is at the crux of, how far does the government go now to regulate or partner with these big companies, which, in effect, we have allowed to accrue this amount of power?
We have time for a few more questions, so let’s take from the front row here.
Q: Dee Smith, Strategic Insight Group.
I would like to ask you what you think the strategic goals of Russia are in this whole area of information operations. Are they attempting to get specific policy things happening, to get specific people elected? Or are they simply attempting to sow social decohesion within countries and sow international decohesion in organizations like NATO in Western Europe?
FARKAS: OK. I will channel Angela Stent, but I will follow you, Mr. Ambassador. Or I’ll channel myself, too. (Laughter.)
NYBERG: Thank you very much. I think Russia is a besieged fortress today. They’re isolated, and they’re isolating themselves. Just think about the decision taken yesterday that Russia will not be able to participate in the Winter Olympics. I’m not questioning it or talking about why it has happened. But you can imagine how it is received by the population. It is something by which people feel insulted.
So we’re talking about a country which is not as powerful as the Soviet Union used to be, but it’s a big one. And it is the only country which is a nuclear superpower like the United States. So this is a country which feels itself under attack, which is only partly correct.
They have a long tradition here—a very, very strong mathematical tradition. They’ve been breaking codes since czarist times. They were one of the best during World War I. They read British cables in real time during World War II. This is nothing new. They have one of the largest organizations working on this.
But they’re directing their anger today towards the European Union, which they dislike as an organization; they dislike the idea that there is something like the EU. There’s the famous Henry Kissinger question about what’s the telephone number of the European Union. And they dislike, of course, the United States, which is historically very strange, because that was the model for Stalin—I’m referring to Stephen Kotkin’s wonderful, magisterial biography of Stalin—that was the model for Stalin for reindustrializing the Soviet Union.
So I don’t think the motives are very strategic. There’s a lot of defensive thinking and feeling—feeling insulted and not admitting that you’re the underdog, although, you know, you’re the underdog.
FARKAS: So I will add to that as well, though I’ll note that Stephen Kotkin just called and he said he’s going to offer you some of the royalties for sales from CFR members.
There are three things essentially that this Kremlin wants: One, to maintain its hold on power. Two, to remind the world that Russia’s great, to demonstrate that Russia’s great again. That’s related to objective number one because Putin got into power on an economic platform; he’s staying in power based on making Russia great again, so nationalism. The third is pushing back against the responsibility to protect, or the international community’s right to intervene in states to change their regime. That’s also related, obviously, to his desire, and his cronies’ desire, to hold onto power.
How does that affect the United States? What does that have to do with the United States? It’s because the United States is the only power that can threaten those three objectives that Vladimir Putin and the Kremlin have.
So what would he like? He would like a weak United States that’s unwilling and unable to push back against Russia, to counter Russia’s revanchist, anti-status-quo international agenda.
Do you want to add anything to that, Ashley? OK.
On the aisle there.
Q: Hi. Joanne Young.
I would like to understand better how a country like North Korea, which is so backwards and so, you know, isolated—
FARKAS: I’m sorry, Joanne, your affiliation?
Q: —yeah, I’m sorry, Kirstein & Young—how that country can field the kind of hackers that are able to infiltrate a company like Sony. And when you talk about hack back, why is it that we can’t go back and literally seduce these hackers and deal with the threat that countries posing a nuclear threat represent, through, I guess, hack-backing? Because it would seem to me that—I guess I’m asking two questions. How do they field that kind of sophistication in such an isolated regime? And why can’t we hack back and undermine that effort?
FARKAS: OK, I can answer a little bit of the North Korea question to get us started and give you guys a chance to collect your thoughts, because I did work on North Korea from the vantage point of previous jobs I held in Congress, working on the nonproliferation agenda and looking closely at North Korea.
And for them, it’s a matter of national priorities. They’ve put their resources where they want them to be, which is in developing cyber capabilities. Obviously, we know about their nuclear program as well. They have been able to partner with other countries, and potentially also with individuals from other countries, to help them.
In some part, the United States even has a role to play because we’ve provided some cyber education to North Koreans, obviously not intended to build their cyber offensive and military capability, but nevertheless, as part of our engagement with them. And civic organizations have done the same. So for North Korea, it’s essentially a priority. And so they’ve put their limited resources towards those priorities.
Ashley, maybe if you could start because there’s a lot in there for you.
DEEKS: So you asked a question about hacking back and why we can’t seduce those who have hacked in. Just to be clear about what hacking back is: generally the idea is, let’s say a private company has had some of its intellectual property stolen and extracted from its system by a hacker and taken to another system. Hacking back would be tracing the hack that removed the intellectual property and retrieving it from a foreign computer, right? So it’s generally conceived of, at least at this point, as a sort of modest retrieval of property that’s been taken from you, not necessarily a kind of propaganda or an attempt to influence the loyalties of the person who engaged in the hack.
Can we hack back? Well, I guess we’d ask who the “we” is. Conceptually, the U.S. government often has the capacity to identify the hacker, to attribute the hack and trace it back. The hacking back we were talking about a little earlier was a question about whether the private company itself that has been hacked has the legal authority to go and retrieve its items.
So why don’t we? To the extent it’s something the U.S. government sees as its own problem—let’s take Sony: maybe it saw that as an attack on critical infrastructure that produced real damage on the ground, and we had good attribution, significant enough that the U.S. government, say, wanted to retaliate.
I think the government has been quite cautious about doing things destructive in the cyber sphere, both for legal reasons and for norm-setting reasons. Right? I think there is very much a—we haven’t seen major, major cyberattacks on each other, even against countries that are hostile in terms of producing physical damage. And I think the U.S. government probably thinks it has a strong interest in not crossing that Rubicon.
NYBERG: Just a general observation about resilience, which goes beyond North Korea but comes back to Russia: the real divide here is between countries which are ruled by law and countries which are ruled by man. And those countries which are ruled by man usually consider rule by law to be a sign of weakness, because it takes a lot of time before decisions are made, and all kinds of unnecessary people participate in making decisions. (Laughter.)
But actually, that is the strength. And this is the real problem. Russia has a long history. It has Westernized and modernized over the last three centuries. And every time there’s a new ruler, there’s been a new push to modernize, et cetera. But it has never attained the level of rule of law. It was close to that in the very early ’90s, and it had one period after Alexander II in the 19th century when independent courts were created and all of that.
So I think this is one of the—democracy is not the word. You can define democracy in many ways, and you have all kinds of democracies. But rule of law, which is the basic thing—this is the deep divide which we have in the world. And it’s not only between the West and Russia; it includes a couple of other countries, too.
FARKAS: We have time for two more questions. Do we have two more questions? There—oh, and what we’ll do is we’ll bundle them. Oh, just one more. I’m terribly sorry, just one more. You can try to accost the speakers later.
Q: I’m Dave Cooper, I’m with a small business called Ivory.
My question is, going back to the Russian hacking of our election this year, to come full circle: it seems this is very public now and acknowledged by our government. Did we miss an opportunity in norm-setting with the response we made? Did we do enough, and was it, in your view, a proportionate response? Or could we have done a lot more to help deter in the future?
FARKAS: OK. Let’s start with the ambassador and then Ashley and then I’ll close.
NYBERG: Well, I said in the beginning that there were three people who were surprised by the result of the elections. I have nothing to add to that. The hacking was bad, but I don’t think it swayed the elections.
FARKAS: OK. Ashley, rather quickly. Because Marisa Shannon, who’s really in charge, is telling me our time is up.
DEEKS: Right. So President Obama was careful not to say that he thought it was an international law violation. He said that he thought it violated existing norms, but he didn’t say that it violated international law. And that actually seems right, at least under the norms as they currently exist.
So maybe your question is more, should we have said more to try to announce that this should be a norm? I think then you have to think carefully about what kinds of things we do in other countries to try to influence elections in ways that we think are legitimate, but that might be seen through the eyes of others as not so dissimilar from what we saw from Russia.
So I think he was trying, I think, to draw a line between what was unlawful versus what was problematic normatively.
FARKAS: And I think this is a perfect question for you to pose, David, for the session with Senator Burr. I think David Sanger is here listening, or he was anyway earlier, so he may have noted it himself.
There is—I want to just make one final comment before we close the panel and before I thank the panelists.
I think the real issue when it comes to tackling these problems is the need for international leadership. Whether it’s, you know, having the right response ahead of time or after the fact, or setting ourselves up so that we have no further cyber information operations exercised against the United States, we need a gang, if you will. And to get a gang to gang up against the bad guys, you need international leadership. You need to demonstrate leadership, go out there and get all the good countries together, and then figure out ways you can exercise leverage on the ones that are grey, let’s say, like China, or bad, like North Korea, bad, like Russia—I guess China, I don’t know, it depends on what the issue is, but never mind—just get everyone together. That takes international leadership.
So I will leave it on that note. Maybe Senator Burr will tell us how we can get that.
Thank you so much to the panelists. You stepped up to the plate, filled the Angela Stent void.
And thank you to Marisa for organizing this.
And thank you all for participating. (Applause.)
This is the keynote session of the Hacked Elections, Online Influence Operations, and the Threat to Democracy symposium.
SANGER: Well, Senator, thank you for joining us.
To all of you who are now well-fed and back in the room, I’m David Sanger from The New York Times.
And I’d like to welcome Senator Richard Burr to the keynote of this symposium. It’s been a fascinating morning: a discussion of Russia and the election interference, but also broader cyber issues. And we’re very glad that you’ve come here to the Council to talk to us.
BURR: Delighted to be here, David. And good to see you. I question the meal part. It didn’t look like it was substantial enough, but—(laughter)—that hopefully will keep you from sleeping through this part of it.
SANGER: It’s the Council. We’re on a tight budget, that’s right.
So, for those of you who don’t know, Senator Burr is in his third term, senator from North Carolina. I haven’t sorted out whether you’re in the Duke-rooting side of the state or the University of North Carolina-rooting side of the state.
BURR: Whichever one wins. (Laughter.)
SANGER: OK. It’s probably a good sign for what we’re going to be discussing for the rest of the day.
He is, of course, chairman of the Senate Intelligence Committee. And for his sins in becoming chairman, he is now managing the most complex, the most politically charged, and I would say probably the most important Senate investigation run in at least a generation, maybe two. And it’s not every day that a senator gets to run an investigation where there are disclosures happening every day on the front pages of the newspapers, there is a parallel criminal investigation underway, and the president from his own party gets to declare on Twitter every few days that the entire investigation is part of a witch hunt. So that gives you a sense.
BURR: Yeah, nor can I ever remember an investigation that every news article that’s written had no named sources in it—(laughter)—which is a—which is a fascinating—to me, it’s a fascinating thing that should take journalism schools and turn them on their head a little bit.
SANGER: I think that is almost certainly true. But I also think it’s true that, while certainly there have been some errors along the way, we’ve actually seen some significant leads in the investigation broken by news organizations, whether they were using anonymous sources or not, which has added, you know, complexity to this entire thing.
BURR: And that was—David, it wasn’t a critical statement. It was a statement that will help you better understand, as I talk about the complexity of an investigation like this. Certain things happen. And they force you in a certain direction.
And to some degree, my concern long-term is that if we get too accustomed to unnamed sources, that will become the predominance of what’s out there. And I only ask you to look at what we’re going through right now with members of Congress on sexual harassment, many of whom probably have every reason to be concerned and should rethink their profession and rethink decisions they’ve made. I firmly believe that some people will be captured in this who aren’t guilty of anything, because of the way the stampede starts. And it doesn’t necessarily stop until innocent people are stepped on. So I use that as a relative example.
SANGER: Oh, it’s an interesting one.
I’ll tell you how the next hour will lay out. The Senator and I are going to talk for about 30 minutes. And then we’re going to open it up to questions from all of you. We’re on the record, so no—there may be anonymous sources in this investigation, but not today.
So let me start, Senator, with more of a historical look at this. Your committee’s main responsibility is oversight of the intelligence community. You could argue that the failure to see this coming was among the biggest intelligence failures that the United States has suffered in recent times. There’s an argument about whether we had strategic warning this was coming or whether there was tactical warning, but certainly there were large elements of this—especially the social media part—that we never heard about. In fact, as I was preparing for this, I looked at the national threat assessments that you get each year.
In 2007, cyber wasn’t mentioned at all. Then it started to be mentioned, and it’s been the number-one threat for the past four or five years. The weaponization of information has been in many of them, interestingly enough. And, of course, that’s part of what you’re doing. So start by telling us, if you can: Do you view this as a significant intelligence failure? And how does that figure into what you’re planning to do with your report?
BURR: Well, let me say this: I don’t see this as an intelligence failure. I think the intelligence that we had on an ongoing basis should have required us to ask more questions and to look a little deeper. Did we have an imagination that was wide enough to say cyber is probably the number-one threat, and therefore, since technology is the tool that they’ve chosen and default to, should we look at technology as the easiest way for them to penetrate and influence the American electorate? We should have looked no further than the news outlets that are controlled by the Kremlin—RT and Sputnik—and extrapolated from that that we needed to look at those avenues that reach the masses in America that are not traditional media outlets. We didn’t do that. And, as evidenced, the social media platforms were caught flat-footed as well.
I’m not as concerned with looking back on it and trying to understand where we should have seen the red flag in responding. I am very conscious of the fact that we need to look forward and, if we can, look around the corner. Try to figure out how not just Russia but others will exploit the ability to create chaos in our society, to potentially influence one’s views as it relates to elections. Might even be a tool of nation-states that want to buy interest in U.S. companies and choose initially to use social media platforms to drive the price down before they make the acquisition. So this expands to a much greater breadth than I think any of us ever envisioned. This is not limited just to elections, it’s not limited to Russia. It is now a tool that’s available for any country potentially to use as long as they have innovation and they have capital, and we continue to export that daily.
SANGER: Just one more on this question of whether or not there was intelligence failure or not. So you sit in a lot of public hearings, but you sit in far more closed hearings. If you think about the years leading up to the 2016 election, in many of those hearings, especially the closed hearings, you spend an enormous amount of time on our cyber capabilities, both offensive and defensive. Was there any prolonged discussion of either the question of whether the Russians would take techniques that they had used, successfully in some cases, in Ukraine, in the Baltics, and bring them here? And was there discussion about the social media risk? Do you remember those coming up very much prior to the election?
BURR: David, two distinctly different questions. I believe that the intelligence community as a whole saw in real time Russia’s efforts at disinformation in the United States. The mere recognition of that has to be followed by some type of policy action, and I think both the intelligence community and the Hill were anxious for nine months of the last administration to see the leadership guidance that was needed to exercise different policy towards individuals that carried this out.
In the case of ’16, we were focused significantly on Russia and Russia’s intentions as they related to our election cycle, but I could have made the same case in ’16 that this was about another nation-state’s, or multiple nation-states’, efforts to hack our critical infrastructure, to steal personal data, to target the whole of USG. And in all of those cases, Russia’s intent for the election and other nation-states’ targeting of all of those areas of public and private entities, we had no response. And I think the investigation will be limited to the previous administration’s response or lack thereof as it related to Russia’s intent in ’16 toward our election process. But from a committee standpoint, we are very focused on what we need to do going forward on the whole issue of cyber and where technology drives this.
SANGER: So on the deterrent side of that, you could argue the Obama administration underreacted. They said they had very clear reasons for not wanting to react prior to the election. We could debate whether those were good or bad, but this wasn’t the first time the Russians had come into the United States. There were the hacks on the State Department, the White House, the Joint Chiefs of Staff, and BlackEnergy, which was inside our electric grid. And yet in most of those cases you didn’t see the administration, the intelligence community, the State Department, or the White House coming out in public, naming the Russians or naming other actors, with the exception of Sony and North Korea, and imposing a cost for that.
So do you think the administration, do you think the Congress has basically done too little to create a price here for cyber actors in general, and might the Russians have been less willing to go into the election hacks had they paid a price?
BURR: Well, let me say this, that I think the committee’s role is to assess what we did or didn’t do, and to make from that recommendations as to hopefully how we change our policies going forward. So I don’t think you’ll see in our final report us arguing one side or the other or whether we should have or shouldn’t have.
I think to believe that Russia has changed since the Cold War is just an absolute myth. How can this happen today? It’s almost like Vladimir Putin is a KGB officer. (Laughter.)
I think we believe that as we change and we accept different practices, that everybody else does. And as it relates to Russia, they still believe that if it’s bad for the United States, it has to be good for us. So, when you look at the magnitude of it, and you say for them to go onto social media, and to take two opposing groups and to set up a rally for both sides of the same issue on the same day at the same location, so that the media shot that night will be the video of two groups confronting themselves and create the perception of chaos in the United States, someone will rationalize that and say, well, how did that impact the election? They don’t care. It’s projecting chaos inside the United States that provides them the vacuum to do other things.
So, as a committee, I’m less concerned with what chaos they’re up to. I want to limit through policy the tools that they can use.
And let me just say, because I said this in the public hearing, of which we’ve had—11?—11 this year, and I think everybody was worried I would have none. If I could have gotten away with none, make no mistake, I would have done none. (Laughter.) But I’ve had 11 because there was value in what we did for the American people, and that’s my threshold. If the American people can learn something that can be shared publicly, then I make every attempt to do that. But from a standpoint of companies, companies have to take some individual responsibility for the protection of the systems we allow them to have in this country. And in the absence of that individual responsibility by each company, then you only leave it up to us.
Now, look at the makeup of Congress. Do you really want to leave technology decisions up to the group that I serve with? (Laughter.) I mean, at 7:00 at night a lot of my colleagues are in their pajamas watching “Hollywood Squares.” (Laughter.) So—
SANGER: We hope. (Laughs.)
BURR: So they’re not necessarily the ones that you want out counseling you on the next iPhone that you buy. But by the same token, the same education you’re going through, we have to take policymakers through. And I can only tell you this: I can’t come up with a solution on cyber unless I have help from tech companies, because they’re the ones that are innovating in the space.
And, David, let me just take a moment of personal privilege. If you think you’ve seen technology move at a pace that you never dreamed, you just haven’t looked at the next 10 years. My dad, bless his heart, died three years ago. He was 90 years old. He’d lived a fruitful life. And I remember the day before he died he was still lucid, and I said, “Dad, is there anything you regret?” Now, he was born in ’21. He served in the Second World War. He listened to the news on the radio. He watched the news on the TV. He actually had a cellphone, and he used it before he died. And he looked up at me and he said, “Yeah, I’m going to die and never figure out how a fax machine works.” (Laughter.) Now, to him, because of when he grew up, he was more concerned with how it worked than with what it did. And with a cellphone he could envision that somehow, some way, you could talk in one side and hear the voice on the other. Maybe it’s the can-and-string theory. But he never could figure out how you put a piece of paper in and on the other side you pulled a piece of paper out.
And I would only tell you that we’re hung up in the United States because we always ask how it works. When David lays his phone down here, if you’re over 50 you look at it and say: How does it work? And if you’re under 50, you look at it and say: What else will it do? Now, understand, from a policymaker’s standpoint, I have to bring those two generations together, because the over 50 controls capital and the under 50 controls innovation. One without the other is no good.
The unique thing about where we are—and this is why I’m driving the committee so hard on technology and cyber and all of these issues—is that over the next 10 years what you’ve seen emerge over the last 30 years will be dwarfed in comparison to what you see over the next 10 years. By 2020, you won’t fill out an application for patent protection of a technology because the year and a half it takes the patent office to approve it, your technology’s going to be obsolete.
My point is, we don’t have the time then to be going through the discussions that we have today about what are we going to do to Russia, what are we going to do to China, what are we going to do to this, because their level of innovation will be on the same pace, if not greater, than ours. If anything, our architecture of government slows down the deployment of technology in this country, where they have no architectural impediments to rolling out technology, whether that’s in the military complex that they have or whether it’s in the private sector complex. They compete just like that.
We’ve got a tremendous number of self-imposed impediments that are going to stand in the way of our fully deploying technology. That will put us at a distinct disadvantage, I think. Or, if we begin to handle some of the policy issues, it can put us in the driver’s seat for the next 50 years. This all plays into intelligence as well.
SANGER: Absolutely. And that’s, in some ways, going to be some of the most interesting parts of the report, as you make your way through what we were prepared for and what we weren’t. A few political questions for you, Senator, and I was hoping you might be able to take this one on. You described before a Russia that, as you said, hadn’t really emerged from the Cold War.
And I hear that from many of your colleagues, Democrats and Republicans now. In fact, we hear it from everybody in the political spectrum, except the president himself. He’ll talk about North Korea as a threat. He’ll talk about Iran as a threat. He’ll talk about terrorism as a threat. He’ll talk about cyber as a threat, as I discussed with him at some length in the foreign policy interviews leading up to—in the campaign that Maggie Haberman and I did with him. But not Russia. Why not?
BURR: Well, I could probably make the case that there are other countries, North Korea—
SANGER: I say, he talked about North Korea and Iran frequently, yeah.
BURR: Those seem to be the two hotbeds in the world right now from a standpoint of the less-than-perfect relationships that we have. The president said when he ran, and I think he practiced it in business, that he has the ability to go to a table and the person across the table not have a clue as to where he’s coming from. And I think in many cases that’s his art of negotiating. I think he’s continued that in his role as president. It is uncomfortable sometimes for members of Congress. It is uncomfortable sometimes for the American people. It is his style. I don’t think it’s going to change. At the end of the day, he will be judged, just like every president is, by what’s said in the books that are written after he’s gone.
It has also been amazing to me as I search out, like you do, David, individuals that have been in certain fields for a long time, that negotiated deals with North Korea in the past, and sat down with them and said: What do you think about the president’s North Korea strategy right now? And I was amazed that many whom I respect said: I can’t disagree with it. So I think there is—it’s almost a sport of public criticism of how he does things. But when you get down to the content of what he does, what I’m finding is that people who I would perceive as subject experts don’t have much critical to say about it.
SANGER: So you’re saying the substance is better than the noise around it?
BURR: And the noise around it—I’ve said this to reporters on the Hill because they come to me with every tweet. I don’t tweet. They still haven’t learned I don’t read them, whether it’s my wife’s or the president’s. (Laughter.) But my comment to them is—
SANGER: I’m trying to figure out which of those two could get you in more trouble. (Laughter.)
BURR: That’s an open question.
SANGER: Yeah. (Laughs.)
BURR: But my point to them is this. As long as you cover it, he’s going to continue to do it, because in his world, when you control every segment of the news, why would you ever give that up? And sometimes it’s a little more outlandish than the last time. But it dominates the news cycle for the next 60 minutes, and we are a world where, relative to news, we’re broken down into 60-minute segments, and every show needs a highlight in the next 60 minutes.
Do understand, there’s a generation coming behind us that is not on a 60-minute cycle. They are on an instantaneous cycle. Their news goes to them instantaneously on their devices. They are not programmed to watch the 6:30 news at night. They’re not programmed to watch the news every hour on cable. They get it. They don’t fact-check very much, but they get it. And they make decisions based upon what they get.
Shouldn’t be a shock to us that they buy the same way. Think about this statistic real quick: by 2026, 50 percent of the 16-year-olds who turned 16 after 2020 will never have a driver’s license and never own a car. We’re seeing generational change in habits that makes the description I gave you of my father look normal.
You—David, my point is this. You have to look at all of this in the same bucket to try to figure out what are the right policies in the future.
SANGER: Well, building on this thought that the president has not said very much about Russia, the central mystery that it strikes me you’re trying to grapple with in the investigation is this: What might the president be so worried about that he denied contacts with the Russians? Remember, it was only in February we were told there were no contacts between the campaign or the transition and the Russians, and then he denounced the attempts to go investigate those contacts, to the point that we read that he asked you to wrap this up very quickly.
Have you uncovered anything that would explain why it is that he was so intent on denying those contacts and so determined to have the investigations end quickly?
BURR: Let me say this: when we complete our investigation, we intend to have as thorough a report shared with the American people as we possibly can. And it will, I think, answer many of those questions. And I’m not going to cut the investigation short and make news on that today.
But I think it’s important to understand that if we were talking a year and a half ago, there’s nobody in this room that believed the organization of the Trump campaign was capable of collusion. Think about that statement. Up till about the first of October, there was nobody in America, including the president, that believed he was going to win.
So when you put things in those contexts—and that’s what we have to do when we go back and look at an investigation, because we’re trying to put ourselves in the same mindset that policymakers, election officials, and foreign governments were in at the time. You can’t do the investigation just looking through the rear-view mirror and taking today’s perspective. And we’re beginning to sort through all of that, some of which will probably be alarming and concerning. Some of it won’t play out to have the same impact that I think some stories have suggested. But I think what you can be assured of is that our review of that information will be as thorough as we can possibly make it.
SANGER: Let me ask you a little bit about the broader Russian hacking. So it wasn’t just the Democratic targets, as you alluded to before, but a lot of government and non-government targets we’ve seen. And yet, the Russians do not appear to have had much trouble penetrating American targets, and they’re the lead suspects in the Shadow Brokers and Vault 7 leaks, which you’ve read about pretty broadly in the open press. These were the leaks that appear to have come out of the NSA and the CIA. So there seems to be a growing concern that even our intelligence agencies can’t protect their most valuable secrets. Why do you think that is? What is it that has made the intelligence agencies themselves as vulnerable as every other group that you have talked about here?
BURR: Well, let me bring up two things. When Amazon spends $2 billion a year to defend its cloud platform, which I believe is an accurate number, and the U.S. government doesn’t spend anywhere near $2 billion to defend its data, are we shocked that our data gets hacked? We shouldn’t be.
And then I’m going to—I’m going to read you a statement that was made, and I’m going to challenge you to tell me who said it. The quote is this: “It’s often difficult to determine the precise effects of Russian political influence activities. Typically, they seek to capitalize on existing sentiment within the countries, and cause and effect is hard to establish. Their resources do not guarantee success, but in a close election or legislative battle they could spell the difference. These activities are designed to exploit internal conflicts and doubts, in the expectations that these will tip public opinion and government policy.” Any clue as to who that was?
BURR: That was Deputy Director of Intelligence at the CIA Robert Gates, testifying before the Senate Foreign Relations Committee on the United States policy towards Eastern Europe, Western Europe, and the Soviet Union in 1985. This isn’t new.
But I don’t think that we’ve kept Russia in focus—and, by the way, for those that truly do follow Russia, I think I said “Russia,” and his actual quote says “Soviet,” but I thought I’d change that. (Laughter.) If we don’t keep them in focus as the threat—there was once a time when 70 percent of the folks that worked in the intelligence agencies were educated in Russian. Not the case today, as you can well imagine. We surge to where the greatest concern is. We forget that the threat some present to the country doesn’t diminish, and Russia happens to represent that.
SANGER: Let me ask you about two other Russia threats, and then we’re going to go out to the audience here. The North Koreans last week launched a pretty successful, it appears, long-running test of an ICBM—it ran for more than 50 minutes—that looked a lot like an old SS-18. It wasn’t quite that, but when we looked at the engines that were on it, they look to be the Russian RD-250 engines, which go way back to the ’60s and were produced in Ukraine for many years when it was still part of the Soviet Union. Do you have any evidence or any reason to believe that Russia is now, or has been, a significant supplier to the North Korean missile program, the one occupying whatever waking hours you have that are not devoted to the current investigation?
BURR: Well, I think it’s safe to say that the stated U.S. government position is we want to disrupt, based on the sanctions, any technologies that would further their missile program, their ability to project a threat globally—not just to us, but around the world. Would it surprise me that Russia might attempt to provide products? No, that wouldn’t surprise me. I think that your reference to the similarities of a—of an ’80s-style power plant might be accurate. But I think that the one—the one mistake, I think, we would make is to say that North Korea on its own can’t innovate.
Remember, North Korea was the one that hacked Sony. I think North Korea continues to carry out incredible cyber operations. They would probably be somewhere in the top three to five globally among the countries we’re most concerned with on cyber. So I don’t think you can look at the progress of the North Korean missile program and not think that it can be generated internally. So it may be that we’re no longer in a world where the sanctions have the same impact that they might have had two years ago or 10 years ago. And this is something that policymakers have to put into their thought process as we determine the way forward.
SANGER: And are the sanctions you passed recently being enforced, to your mind?
BURR: I think for the most part we keep a pretty good eye on sanctions. And I think we would all point to the obvious countries or specific areas where it might not be fully enforced, but sanctions are a significant tool that still does alter, in a fairly significant way, the actions of people like North Korea.
SANGER: Well, let’s go out to all of you. There are some microphones around. I’ll remind you that we are on the record. And I ask you just to keep your questions very short. Make sure they’re a question and not a lecture. And we’ll start with you, sir.
BURR: Nice (docks ?).
Q: (Laughs.) Thank you, Senator. Thank you both for the time. Adam Ghetti with Ionic Security.
Senator, you mentioned that by 2020, approximately, tech companies won’t file nearly as many patents, because by the time it takes the patent office to evaluate them the technology will be outdated. So we’re looking at a year-and-a-half timeframe for technology refresh rates, and we’re going to have 10 to 20 times the technology impact in the next 10 years that we had in the last 30. How do you feel, from an intelligence perspective, about countries like China and Russia investing on the order of 10 to 20 times the amount of capital into research advancement for supercomputing and machine learning, while the U.S. is decreasing its overall support for those same things? A case in point: the organization that keeps track of the publicly known global supercomputers. For however long that list has been around, China has had, I think, a dozen on there, give or take. And in the last year they’ve put up 41.
BURR: Well, one would be a general statement that I feel very confident about the level of investment we have in making sure that technologically we can compete globally with all of our adversaries. And that’s both from an economic standpoint and from a military standpoint. Let me just be real clear: If it takes 2 ½ years to procure a weapons system at the Department of Defense, and China can do it in 30 days, we will have a distinct disadvantage as you begin to roll in new technologies. So—
SANGER: Two and a half years at the Pentagon would be called record speed over there, right?
BURR: I’m trying to be diplomatic, since this is on the record. (Laughter.) But I think that gives you an example of what we’re up against. I don’t think that the whole of government understands the disruption that technology will bring over the next 10 years: placement of data, systems, security within the systems, firewalls, where you actually do the computing and the sorting of data. And my hope is that we will see a much greater partnership between the private sector and the USG going forward, because we don’t have all the base strength. And we compete with the private sector for the talent that we have within the intelligence community, within the Defense Department, wherever. Nowhere else in the world is it that way—elsewhere it’s all one; it’s shared. And that’s why you can look at somebody that works for a software company in Russia and automatically assume that, because of the rules over there, they practically work for the FSB.
SANGER: Over here.
BURR: I’m not suggesting that any U.S. tech companies become controlled by the federal government. I said partnership.
Q: Well, thanks. I actually wanted to follow up on that point. I’m Emelia Probasco with the Johns Hopkins Applied Physics Lab.
Do you think that, with this responsibility for the private sector and the implications of the technology for national security, we should be sharing intelligence with them on a limited basis, or that there’s some other new relationship we should have with these types of companies to help prepare them for the implications of their technology?
BURR: That we, the U.S. government, should be sharing with the private sector? I think there’s a pretty good sharing relationship that exists today. And, you know, the one point I would make is that technology doesn’t affect just one thing. It’s going to affect everything. And if I looked at where the greatest impacts are, it’s probably not in the world I deal in every day. It’s probably in health care.
I mean, I could pick some sectors where technology is a potential game changer. I’ll just make this statement: we had at least a six-month debate on the Affordable Care Act. I looked at technology not long ago, which probably will be approved by 2020, that can be loaded on this phone and will allow you to take a retinal scan from your phone, a breath analysis into the phone, and a blood sample without penetrating the skin, and send it to a lab where it will be tested against 49 biomarkers, giving you a report within a matter of minutes or hours that tells you whether you have a disease or not.
The whole debate we had about where you live based upon where a doctor is, where a hospital is—by 2020 that’s out the window. Technology is taking care of it. I’m not addressing the insurance side of it. I’m addressing how some of the issues that we were grappling with in this debate are off the deck.
SANGER: Right down here on the aisle.
Q: Joanne Young, law firm of Kirstein & Young.
How do you square the fact that the Department of Homeland Security has named our election process critical infrastructure with the fact that our election system is largely state and locally run? And what do you see the federal government’s role generally in the American election system, given the threat of cyberattacks?
BURR: I think it’s safe to say the secretaries of state, or their equivalents, in 50 states took great offense at what they perceived as Homeland Security’s attempt to take over the election process in their states. And—
SANGER: Was it that, or was it—
BURR: I don’t think it was intended to be that, but I think that was the interpretation, because we lacked clarity in policy. Now, I’m a little bit empathetic to Secretary Johnson’s position at the time; we had never experienced anything like this. They were pressed for an answer, not only in putting together an intelligence report, the ICA, in 90 days, I think, but they were also pressed to come back with things that assured the American people that the federal government was on top of it.
SANGER: You might explain to the group what the ICA stands for.
BURR: The ICA was the report that President Obama asked the intelligence community to put together on Russia’s involvement in our election cycle. It was briefed to the president and to the Congress in December. It was released publicly to the American people in January.
SANGER: And briefed to President Trump, who said—President-elect Trump—who said at the time he accepted the conclusions.
BURR: Correct. So, going forward, there are lessons learned that we’ve uncovered in the investigation. They weren’t difficult. A lot of notifications weren’t made to states where there were ongoing attempts by Russia to get into their data files.
Now, let me say emphatically, we have verified that there was no election fraud and no change of vote totals. But there were attempts to get into voter files for reasons that we don’t know. And our federal policy was that if the secretary of state wasn’t cleared from a standpoint of security, then she or he couldn’t be notified. So—
SANGER: Is that a problem with the clearance system, Senator, or is that the problem that we should have just declassified all this information right away and put it out there?
BURR: My take is we should have declassified the information and/or found somebody with a clearance status that we could have notified.
SANGER: Can you see any particular reason that information couldn’t have been made public? I mean, I had this push and pull with the Obama administration at the time. They were quite insistent it would remain classified. I can’t for the life of me figure out why.
BURR: In hindsight, I don’t see anything that declassifying it would have hurt, except that there were potentially sources and methods that might have been jeopardized. But I think there was a way for us to make notifications sanitized in such a way that we could have declassified the information. The fact is, David, we didn’t. And even in the intel authorization bill, which we finished four or five months ago, we wrote a piece in there instructing that every state have somebody designated and cleared to be the recipient of notifications when this happens in the future. So there is a federal action.
The determination of how states run elections? States. That is their responsibility, and we don’t want to do anything to change that.
SANGER: Gentleman right here.
Q: Thank you. Thank you for your time, Senator. Kamal Lukovac (ph) from George Mason University.
I was just wondering, is the committee concerned about less-capable countries, perhaps our allies, that are or could be targeted themselves but are not capable of identifying the intrusions or perhaps uncovering the results? Is the committee at this time concerned about or aware of any Russian actions toward allies or friends that are going through election cycles before our next one?
BURR: Yes. (Laughter.) I’ll look backwards: France, Germany, Montenegro, the Netherlands. I could probably look forward as well. To believe that Russia’s not attempting things in the United States, potentially for the ’18 cycle, would, I think, be ignorant on our part.
SANGER: Have you seen any evidence on Capitol Hill among your colleagues, many of whom run for reelection I hear, that in fact there is continued Russian activity?
BURR: I think all of my colleagues probably are worried or should be worried about it. I think every state should be worried about it.
It is the committee’s intent to put out recommendations as part of our final or interim report. And David and I had a conversation before we came out, so let me—let me distinguish those for you. If we’re not to a point that we can write a final report with sufficient time for states to be able to handle their primaries this year, then we will probably make a joint decision to release our recommendations on election security by itself so that states can at least have the blueprint that we suggest. These are not necessarily initiatives that involve federal legislation or federal initiative.
I’d just give you one as an example. I couldn’t in good conscience tell any state that it would be wise in 2018 not to have a paper trail of the vote total. Now, that may only affect a couple of states, but with the limited amount that I know right now, I can’t go into 2018 and say that would be a wise thing for you to do, to give up on that ability to go back and check the accuracy of vote tallies.
So the things that we do, I don’t think you’re going to learn anything in there that you didn’t think yourself. They will be common sense. It’s just we’re adding a voice to the fact that there’s a sense of urgency to do it.
SANGER: And what’s your timeline for the rest of the report? So those—that would come out within the first quarter, I guess, because you’re heading into primary season and—
BURR: Well, if I—if I gave you that timeline, then you’d be the one journalist in the country that had this.
SANGER: Well—(laughter)—I can’t object to that. (Laughs.)
BURR: I will answer you the same way I answered the president. And you alluded to a statement that the president made to me, and I want to put that in context. That was in a telephone conversation that the president and I had last May. It wasn’t last week. The subject of the telephone call had nothing to do with the investigation. And as we concluded that conversation, in the only way he can do it, he says, hey, I hope you can finish this investigation as quickly as you can. And I responded: When we have interviewed everybody that needs to be interviewed, and we feel like we have answered every question that the committee jurisdictionally should, we will finish. And that’s the answer I’d give you, is that when I started in this we had a well-defined box of interests. And with every interview, there may have been another individual that was added. With every news story, there may have been another individual that was added. What really hurts is when they’re added, but they’re not relevant. (Laughter.) Because every individual that is added, it puts about three more weeks into an investigation, so that’s why it makes it difficult for me to look out.
I can tell you, with the known individuals—and we have interviewed well over a hundred—I know exactly how many I’ve got on the deck to interview, and I know how many interviews can be done in a week or a month, so I could project today when I’ll finish those and when we can begin to conclude and write a report. I can’t tell you how many people might get on the deck between now and then that we didn’t know about.
SANGER: Well, you’ve had some put on the deck the other day by Special Counsel Mueller, because when he submitted the public documents on the plea deal that General Flynn signed, you saw stories immediately appearing, based on the plea deal, that said that he had consulted with other people about his conversations with Ambassador Kislyak. So I assume now you have to go back and, unless you had already, interview everybody who he might have been in contact with. And there were some public references in the indictment—in the statement of fact about who those might be.
BURR: When you’ve—when you’ve interviewed a hundred-plus people, you have probably captured a lot of folks that nobody in the room would think would be on that list. So I’m not sure that the special counsel’s actions brought any surprises to the committee. Again, we’ve been at this now for just over 11 months. And we have had unprecedented access to individuals and to intelligence, setting a precedent that has never been set in the history of this country.
And, you know, I see it bandied around in the media and I get it from people back at home about, how much is this costing us? We’ve done it with the same professional staff that is already hired by my committee. And that’s why we’re at 100-plus individuals that we’ve interviewed, well more than any other committee. It’s because we were able to start on day one because they’re seasoned, professional staff that knew exactly what they were doing, knew exactly where to look, knew exactly what to ask. Had we turned outside and said let’s go get a basket of people to come in, I’m not sure I could have gotten them security clearance in 90 days, much less already have been in the interview mode. So I think we’ve made the right decision.
And again, I’m willing to let the American people judge the product that we come out with at the end of this. And I can assure you it’s not going to be something that I’m going to be able to influence because they will test it against the House Intelligence Committee, the Senate Judiciary Committee, and they will eventually test it against Bob Mueller’s independent counsel. So I have every reason to try to get this right.
SANGER: OK, let’s see, in the back corner right there.
Q: Senator, given—I’m Jack Janes from Johns Hopkins.
Given the last point that you just made, and given the fact that you said that the generation of 26 doesn’t fact check very much, how much confidence do you have that when this report comes out that it won’t be put into the tribal chemistry that we have in this country right now, that people will take whatever you say, with the clairvoyant and the rational presentation you just made, and run with it in two different directions? How much confidence do you have and how do we get out of this vicious circle that we’re in to begin with?
SANGER: Interesting question.
BURR: The short answer is I don’t have a lot of confidence. But what you’ve presented in your question is the challenge I accept for the product that we produce. Tribal atmosphere, I haven’t heard that one, I’m going to remember that, I wrote it down.
We’ve got a couple of routes we can go from the standpoint of a committee and that product that we end up with. I don’t think you can have political differences if your objective is to lay down the facts, and to let the American people see facts and come to their conclusion. So though I’m chairman, I do this with the support of all members, because that’s the way the Intelligence Committee always functions. There may be a point in time where I need to exercise the chair’s prerogative in moving forward. We’re not there for me to make a call because, I got to tell you, that Senator Warner and I and every member of the committee have worked together. If you were interviewed by this staff, you couldn’t pick out who was Republican and who was Democrat. And I know that because I’ve had individuals who have been interviewed who came up to me and told me that. Nor has any member participated in any of the private interviews.
So my point is this: We may be, when we conclude, in a situation where we don’t choose to have a committee vote on anything, where our intent is to lay the facts down. And if there’s a disagreement about how to interpret the facts, there may be a majority-minority views on the facts. But what I have said from day one to the staff and to all the members, there is no substitute for us verifying that what we’re putting down are facts.
SANGER: Well, one of your members, Angus King, said at a session I was at the other day that he thought you’d be able to get complete committee unity on what the Russians did, the timeline, even the recommendations going forward. He thought that the split would come on the question of was there collusion, was there conspiracy, was there any moment in which the president’s motives looked like they were to obstruct justice. Do you agree with him? Is that an area where you’re likely to go into disagreement?
BURR: Well, I think most Americans would probably agree with that. That’s the area where politics potentially could come into play. And last time I checked, this town was full of politics. So I expect it to continue. I think what we’ve tried to do is to leave politics out of it from the standpoint of the investigation. I can do that structurally. What I can’t leave out is politics as it relates to how the final facts are spun. So short of the committee having some disagreement on what we write, I’d rather not write anything. And if we do, I’d rather write it as this is the majority, this is in the minority. But here’s unanimous agreement on here are the facts. And if somebody wants to be influenced by either one of the reports, great. If they want to assess and come to their own conclusion based upon the facts, I need to provide them the opportunity to do that. And that’s what we’ll attempt to do.
SANGER: So we have just a few minutes left. So I’m going to take two or three questions together and we’ll let you pick which ones you actually want to answer. (Laughs.) A great Washington tradition.
We’ll start right here and then the gentleman right in front.
Q: Good to see you, sir. Thanks, again, for coming out. Dave Cooper (sp) with the ITEA (ph) company as well, from industry.
Sir, $6 billion is the cost of an aircraft carrier. We just established that—I’m sorry, 6 billion (dollars) for an aircraft carrier, 2 billion (dollars) for Google to secure their environment. Why would we not put more money into creating that ecosystem? And then the second piece of that is can we bring in industry, defense contractors, into that ecosystem? There’s no way we can at all afford to, as companies, match what these threats are. And our supply chain seems to be—
BURR: When you say, bring the companies in, to the defense of the data?
Q: Possibly. If we’re going to create an ecosystem, say of 2 billion (dollars), why are we not actually bringing our contracting supply chain into that?
SANGER: That is one, and then, sir.
Q: Hi. Derek Johnson, Federal Computer Week.
Earlier you talked about wanting to look forward and, if possible, look around the corner when it comes to election cybersecurity. Some folks on the stage here before you talked about not wanting to get bogged down in the details and the tactics of the last war. My question is, and cognizant of the fact that the report is yet to come out, what is around the corner? What issues or problems or holes are out there that we are not discussing or talking about that you think will come into play over the next 10-year timespan that you referenced?
SANGER: And we’re going to take one last one. It’s right there.
Q: Ben Deering, Department of Energy.
It’s been discussed earlier, and in many other forums, that maybe the U.S. government needs to establish a more credible cyber-deterrence policy. I was wondering, what are your thoughts on what the U.S. government can do to establish that? And what’s the role for Congress in that process? And I would only clarify that I don’t just mean responding to cyber events with cyber means; I mean using all tools at our disposal.
BURR: I’m going to wrap both of your questions into the second one and the third one, because cyber is the greatest threat. Now, I may be more concerned today about a potential North Korea action. But when I’m going to bed and I lay my head on the pillow, what am I thinking about? I’m thinking about cyber. I’m thinking about the vulnerabilities that it presents to us.
If I get the gist of where you’re going, Senator Feinstein and I have been focused on the cyber issue for three years, three or four years. I wish I could tell you today that I’ve come up with a legislative remedy to secure not just USG but secure personal data, regardless of whether it’s a target or wherever. I haven’t. As a matter of fact, I’ve come up with the belief that that can’t be done.
So our answer is a combination of moving data to the Cloud, riding on somebody else’s investment in security, private sector. So I’m looking at a public-private partnership with a—in a different way, understanding what we need to do in the future from a standpoint of where do you sort that data. Do you bring that data back down and sort it? Do you try to sort it up there? We’ve got to rethink the whole security of the connections.
Now, for those in business—and I’ve tested this on CEOs; it didn’t go over real well—I said if you’ll take every employee off of Internet connection and ask them to come to work and bring their iPad or their iPhone and do their personal shopping on that versus on your desktop computer, then I can cut out 80 percent of the risk of a cyber intrusion into your business, because I’ve cut down the number of portals that get outside. And without exception, every CEO has looked at me and said I can’t do that or I wouldn’t have any employees.
So you’ve got to understand where I start from. And I would imagine that if I went to agencies within the federal government, I would probably get the same answer. So if I can’t control the number of portals, then I’ve got to try to put together a partnership that controls where things are—where data is stored and how much is invested to protect it. And then we’ve got to rethink everything that we do with that data.
You know, I’ve given you a world that some of you sort of understand what I’m saying about technology and its disruption. But the one thing that everybody has to understand, it’s not just technology’s advancement that lets you have an autonomous car. It’s a meticulous commitment on the part of the companies that are doing the software of having people label the data.
Let me explain. It’s getting a map and teaching a computer what a tree is, what a pedestrian is, what a car is, what a hydrant is, what a curb is, what a line means. If not, it doesn’t work. It doesn’t learn this on its own. It’s able to learn once you’re able to label what it is it learns from.
There’s a step that we have yet to get to: though we have accomplished it on autonomous vehicles, we’ve got a long way to go on everything else. It’s estimated that there are 200,000 people in China—100,000 in the government complex and 100,000 in the private complex—who do nothing but label data, because until you label it, you can’t use it as a learning instrument for artificial intelligence. We’re nowhere near that.
So for the foreseeable future, cyber will continue to be the thing I most fear. And it’s primarily because everybody can innovate. Everybody has smart people, which means everybody can potentially look at us as their target. The reason that we prioritize, and the five that I would name at the top you would also name at the top, it’s because their capabilities are matched with their intent. Anybody below the line, they may have the capabilities, but the intent may not be there to penetrate the United States.
SANGER: And your five are Russia, China, North Korea, Iran, and who did you leave out?
BURR: I’m going to leave out the last one, if I can.
SANGER: Well, Senator, when they designed this beautiful building here, they put a secret dungeon down underneath for moderators who run over time, and I’m trying to avoid being—joining their company. So I thank you very much for spending the time with us, for a fascinating discussion. I hope you come back for more.
BURR: Thank you, David. (Applause.)