11/20 ACC

National Security and Silicon Valley



Amy B. Zegart, senior fellow at the Freeman Spogli Institute for International Studies, Davies family senior fellow at the Hoover Institution, and professor of political science at Stanford University, discusses how new technologies are transforming the nature of national security threats and ways to close the gap between the technology and policymaking communities.

Learn more about CFR’s resources for the classroom at CFR Academic.

Speaker

Amy B. Zegart

Senior Fellow, Freeman Spogli Institute for International Studies; Davies Family Senior Fellow, Hoover Institution; Professor of Political Science, Stanford University

Presider

Irina A. Faskianos

Vice President for National Program and Outreach, Council on Foreign Relations

FASKIANOS: Good afternoon from New York, and welcome to the CFR Fall 2019 Academic Conference Call Series. I’m Irina Faskianos, vice president of the National Program and Outreach here at CFR.

Today’s call is on the record and the audio and transcript will be available on our website, CFR.org/Academic. As always, CFR takes no institutional positions on matters of policy.

Amy Zegart is senior fellow at the Freeman Spogli Institute for International Studies, professor of political science at Stanford University, and a contributing editor to the Atlantic. She is also the Davies Family senior fellow at the Hoover Institution, where she directs the Robert and Marion Oster National Security Affairs Fellows Program. From 2013 to 2018, Dr. Zegart served as co-director of the Spogli Institute’s Center for International Security and Cooperation, and was the founder and co-director of the Stanford Cyber Policy Program. She previously served as the chief academic officer of the Hoover Institution.

Her areas of expertise include cybersecurity, U.S. intelligence and foreign policy, drone warfare, and political risk. An award-winning author, Dr. Zegart has written four books including Bytes, Bombs, and Spies: The Strategic Dimensions of Offensive Cyber Operations, co-edited with Herbert Lin; and Political Risk: How Businesses and Organizations Can Anticipate Global (In)security with Condoleezza Rice.

She served on the Clinton administration’s National Security Council staff and as a foreign policy adviser to the Bush-Cheney 2000 presidential campaign. She has also testified before the Senate Intelligence Committee; provided training to the U.S. Marine Corps; and advised officials on intelligence, homeland security, and cybersecurity matters.

So, Amy, thanks very much for being with us today. I thought you could get us started by talking about how the nature of national security threats has transformed, and ways in which we can close the gap between the technology and policymaking communities.

ZEGART: Well, Irina—

FASKIANOS: Essentially, Silicon Valley and Washington, D.C.

ZEGART: Great. Well, Irina, I want to thank you for inviting me, and I want to welcome everyone to the call. As you can tell from that introduction, basically I’m interested in anything scary, so that’s what we’re going to talk about today. (Laughs.)

What I thought I would do is give a bit of a thirty-thousand-foot view of what’s different about this moment from a threat landscape perspective. And technology is a part of that equation, but it’s not the only part of that equation. Policymakers, as all of you know, often lament that they face an unprecedented moment with unprecedented challenges. But I think this time it’s actually really true.

If we think about the threat landscape of today compared to, say, the Cold War, I think what we’re seeing is the convergence of three megatrends. There are megatrends in business, there are megatrends in geopolitics, and there are megatrends in technology. So technology is part of the mix but it’s not the only part of the mix.

With respect to business, for those of you who follow the field, what we’re seeing is really a moment of supply chain revolution, where we have more global, leaner, longer, and less visible supply chains. That matters for companies and it matters for countries. For the Pentagon, one of the big challenges is, you know, what’s the supply chain vulnerability of, say, the F-35 or the F-16. Companies, too, have supply chains that are global, which you’re seeing play out with respect to the trade war with China.

So that supply chain revolution has some real upsides. We have more commercialization, more customization. But the downside is new vulnerabilities that are affecting geopolitics. So that’s the first set of megatrends.

The second set of megatrends is geopolitics, and we’re all feeling those trends very acutely today. If we think about the threat landscape in the Cold War, it was a scary time with the threat of global nuclear war, but it was, from a threat analysis perspective, a much simpler time. There were two major powers. Everyone knew who they were. They had territory on a map. They even paraded their weapons through Red Square.

Our economies were divided, too. There was a stark divide: you had the Soviet Union and the Eastern Bloc states on the one hand and Western capitalist economies on the other. So politics and economics were starkly divided.

Fast forward to today and the threat environment we have is marked by complexity, it’s marked by anxiety, and it’s marked by velocity. All three of those elements are different and harder now than they were before. It’s a more complex threat environment. We talk a lot about the return of great power rivalry, but all the other threats haven’t gone away. So we have a threat environment with rising states, declining states, failed states, weak states, rogue states, nonstate actors, companies that are playing an outsized role, and global threats like climate change.

So it’s much more complex than at any time in, I would argue, the past century. We’re also facing a time of unprecedented anxiety about the threat environment, with the rules-based international order under stress, democracy in recession, and American leadership in retreat.

And I think the third sort of component of that political threat landscape that is often underappreciated is the velocity of the changing landscape, and cyber gives us a very good window into just how fast the threat landscape is changing today.

So if you look over a ten-year period, in 2007 the Annual Threat Assessment to Congress, which the Director of National Intelligence gives—it’s unclassified, you can Google it online and read what the threat list is that our intelligence officials think are most important for the nation—if you look at that 2007 threat assessment, the word cyber does not appear once. Not once.

So cyber really wasn’t on the radar screen in 2007. Fast forward five years, to 2012, and cyber is suddenly at the top of that threat list. But the way cyber was considered then was different: Secretary of Defense Leon Panetta at the time worried about a cyber Pearl Harbor. The concern was physical critical infrastructure being disrupted or destroyed in a cyberattack.

And we now know, as you fast forward five years from there to 2016 and 2017, that the principal cyberattack on the United States did not come through machines. It came through hacking of minds. This was the information warfare waged by the Russians in the 2016 election. And that’s just over a ten-year period: we went from no cyber, to focusing on the physical or kinetic effects of a cyberattack, to now information warfare attacking the fundamental tenets of a democratic process. So the velocity of that threat landscape is complicating, I think, the policymaker’s challenge.

So that’s the business part, the politics part, and now you have technology. That’s a long way to get to Irina’s first question: how is technology now affecting the threat landscape? One of the pieces in the recommended reading you received by e-mail is an article that Michael Morell, former deputy director of the CIA, and I wrote in Foreign Affairs. What we argue, and what we are finding, is that we have never been at the cusp of so many groundbreaking technologies that affect national security.

There’s a lot of talk about AI, but AI is not the only one. So there’s quantum computing. We have nanotechnology. We have breakthroughs in the life sciences. And so the combination of these technological breakthroughs is posing new challenges to every element of foreign policy. It’s changing our threat actors, it’s changing our diplomacy, it’s changing how we think about fighting and winning wars, and it’s definitely challenging and changing how the intelligence community operates.

Now, I’ll just give you all a couple of data points to make this a little bit more vivid. Think about AI, for example. Some estimates are that up to 40 percent of global jobs could be automated in the next twenty-five years thanks to AI. That’s a massive change in societies, economies, and politics globally.

We also know that technologies are democratizing. They’re becoming more available to all. So the technological piece of the threat landscape is not just for great powers with real state capacity. These technologies are available to small groups and even lone individuals. So that has all sorts of implications for cyber security. It has implications for national security.

On the intelligence piece, for example, I think one of the biggest changes is that governments until now used to have a monopoly or near monopoly position when it came to collecting and analyzing threat information. Right. That’s not true anymore.

When Russia invaded Ukraine, the best intelligence about those troop movements didn’t come from secrets in a prime minister’s safe. It came from photos posted on social media by Russian troops, with highway signs in the background, that anybody could see online. More than half the world is online today, and estimates are that in 2020 more people will have cell phones than running water.

So we’re all connected in this new technological universe and that means everyone’s an intelligence collector, potentially, and anyone is, potentially, an intelligence analyst, and that is fundamentally challenging our intelligence community.

Now, just to give a couple of other examples: when we conducted the Osama bin Laden operation in Abbottabad, it ended up being live-tweeted by a local Pakistani resident. So you can imagine the implications for operational security in the future, when a local resident can live-tweet what’s actually going on. Big challenges there.

And then I’m obsessed, actually, right now with nuclear threat intelligence and open source teams. There is a cottage industry of really interesting actors, including some of my colleagues here at Stanford, that use only unclassified sources, a lot of imagery analysis from commercial satellites to track in real-time illicit activities of proliferators like Iran and North Korea.

So the satellite revolution is quite remarkable. Just to give you some historical context (students on the call can now amaze their families at Thanksgiving with these little details): the first photo reconnaissance satellite, launched in 1960, was named Corona. It was a huge breakthrough. It was a camera in space that took pictures and then sent the film back in a capsule with a parachute, which was captured in mid-air over the Pacific Ocean; the film was then developed. You can’t make this stuff up.

The Corona satellite had a resolution of twelve meters and what that means is that it could not distinguish between two objects that were next to each other on the ground unless they were at least twelve meters apart. That’s about thirty-nine feet.

Today, we have commercial satellites whose imagery you can get cheaply or even for free, with resolutions better than two meters. Some resolutions are so good you can tell what kind of car is driving on the road.

Now, what does all this mean for the threat environment? It means that data is becoming much more essential to power, whether it’s the market power of firms or the political power of governments to provide services to their people, or, in the case of China, to surveil their people. And data is becoming much more important for the power projection of nations.

And when data becomes more central to power, attacks on data and its integrity are likely to grow and that’s what we’re seeing with data and truth becoming key battlegrounds in geopolitics today. That’s true in information warfare, it’s true in cyber security, and it’s pervading every aspect of the threat landscape.

So that’s just a couple of thoughts on the technology piece and I just want to spend a minute or two on the Silicon Valley-Washington divide challenge, or what I like to call the suit-hoodie divide. It’s widely accepted, as all of you know, that the United States government today cannot go it alone in the threat environment, whether we’re talking about cyber security or developing the technologies to fight and win the wars of the future.

What we’re also seeing (and the CFR report on this is fantastic; I hope all of you get a chance to look at it) is that with respect to technology we’re in a role-reversal moment. The government used to be the primary generator of leading-edge breakthroughs, which then became commercialized over time by companies. Think about the internet, which was originally a DOD-funded ARPANET program. GPS started the same way.

But now that funding pattern has reversed, and breakthroughs are starting in commercial industry; the challenge now is for the government to harness those commercial technologies for military advantage. We’re also seeing today that commercial companies, particularly tech companies, are some of the primary threat vectors challenging American national security, with Facebook and Google and Twitter and other social media companies enabling adversaries to reach deep inside the United States and deepen the polarization of our own society.

So that piece, bridging that suit-hoodie divide, is growing more and more important for all those reasons. But it’s a fraught relationship, and as I sit here in my Stanford office looking out at the campus, it’s very apparent; students here talk to me a lot about, you know, their career aspirations. Over the past six years we’ve seen three phases of this very troubled relationship between the U.S. government and the tech companies here.

I would categorize phase one as the betrayal phase. This is the Edward Snowden revelations. This was the fight over encryption after the San Bernardino terrorist attacks where, as you may recall, the FBI wanted Apple to, basically, break its own encryption to allow law enforcement into the iPhone of one of the terrorists who committed the attack in San Bernardino.

This was a huge moment where tech companies felt betrayed, whether rightly or wrongly, by the U.S. government. I can tell you, I took a group of policymakers to a major tech company during this period and they were slack jawed when a senior executive of the company said, I think of you all—pointing at the government officials—just like I do the People’s Liberation Army of China. You are an adversary trying to break into my networks and I have to defend them against you. So that was a very problematic time in that relationship.

But that then led to a second period of the relationship, which I call the wooing period, when the Pentagon in particular made a lot of effort to come out to the Valley, to meet with folks in industry, and to try to forge a better relationship. Secretary of Defense Ash Carter came here to Stanford. He came to the Valley many times. There was the standing up of the Defense Innovation Unit here in Silicon Valley. So this was a real effort to try to repair that trust deficit from the Snowden and Apple-FBI encryption controversies.

We’re now in the third phase of this relationship, which I liken to intensive marriage counseling. Washington and Silicon Valley now know they have to stay together for the sake of the children, which is all of us, but they still don’t understand each other. They often talk past each other and they’re struggling with how to make progress, and that new intensive marriage counseling period is really being driven by a couple of things.

It’s being driven by the China threat, which is clearly getting bipartisan attention, and it’s being driven by the fact that the bloom is off the rose on these tech companies. With Cambridge Analytica and some other scandals, what I’m seeing here, among engineering students at Stanford at least, is a real concern that tech companies aren’t all that they thought they were.

And so I think this is a moment of opportunity, actually, for a much better set of relationships to emerge, because the tech industry is under threat of regulation, it faces reputational risk, and it’s clear that both sides need to work together to harness not only the technologies but the talent that goes back and forth between industry and government.

And on that—on that more optimistic note in an otherwise gloomy picture, I’ll stop there and I welcome your questions and your comments in our discussion.

FASKIANOS: Amy, thank you very much. That was a terrific overview.

Let’s go to the questions from the students, please.

OPERATOR: Thank you.

(Gives queuing instructions.)

The first question will come from Washington and Lee University. Please go ahead with your question.

Q: Hi. My name is Christina (sp). I’m a sophomore here at Washington and Lee. Thank you for being here with us today.

I’ve heard that at a summit this December NATO is expected to declare space as a warfighting domain, partly in response to new developments in technology. What implications will that have? And does it not violate any international laws?

ZEGART: So, Christina (sp), thanks for that question. It’s a great question.

So there’s a big debate about what war looks like in cyber space to begin with, and whether it’s even helpful to think about cyber space as a separate domain. So lots of debate about that. What I will say is that this is an area where many smart people have been struggling for a long time.

So I mentioned Secretary Carter came out to Stanford. When he did, in 2015, he unveiled the Pentagon’s cyber strategy, and a lot of the effort behind that strategy was actually to define what an act of war in cyber space looks like: an act of national significance. And what we can see is that states and nonstate actors around the world have been very clever at launching cyberattacks below the threshold of war.

So we don’t know exactly where the threshold is, but whatever is going on is below it. We’re seeing that with Russia, with China, with Iran, and with other actors, and, by the way, we’re seeing it with the United States, too. Our own 2018 cyber strategy, which you can look up online, is called Defend Forward.

So the idea is that in order to defend American networks we need to be much more forward leaning about taking the fight to adversarial networks.

Now, what does that mean for NATO? In some ways I think this is a really positive development, because you want likeminded countries with shared values about the protection of privacy and the protection of democracy to come together in cyber space. And that’s a hard thing to do, because there’s an incentive not to share your cyber capabilities with others: when you share them, you may lose your ability to use them.

So that’s beneficial. The United States position, of course, has been that the laws of armed conflict apply to cyber space. That’s really important, and there are, obviously, different views about that internationally. But there’s been no suggestion, at least from the perspective of the United States and its NATO allies, that their position on cyber space is in violation of any international law.

Now, how to apply international law to cyber space is another set of challenges that we’re seeing government leaders working through right now.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from the University of Texas Rio Grande Valley. Please go ahead with your question.

Q: Interesting, then, the environment complexity and the velocity assessment. I really find that interesting. What implications do these areas of assessment have for the incremental regulations on Silicon Valley, ma’am, that may be imposed for the proliferation of technology and for the ideology of a country, let’s say, versus—democratic ideology versus conservative—insofar as their stand on regulations there?

ZEGART: So it’s a fascinating time to look at regulation right now. What we’re seeing is a lot of movement in the executive branch: new investigations by state attorneys general as well as by various executive branch agencies to see whether tech has gotten too big. But so far there are no hard and fast regulatory answers in the United States.

But what we are seeing is that Europe is regulating. The default is now being set by the EU: there’s a set of EU rules called the GDPR that has started to go into effect, and you may have read about it. It includes major fines for tech companies if they don’t report breaches, for example, and it is much stronger on privacy protection, given Europe’s different level of concern about freedom of speech, with things like the right to be forgotten, the right to have data about you erased online.

And so in the absence of American regulation, European regulation is actually leading to some changes among tech companies. Of course there are some European business interests that also benefit from beating up on American tech companies, but you’re going to see that Europe is leading and American regulation is lagging. It will be very interesting to see, as GDPR is implemented more fully over time, since it just came into force, how much behavioral change it creates in these major tech platforms before there is regulation in Washington.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Michigan State University. Please go ahead with your question.

Q: All right. My name is Vishnu (sp). Thank you again for the call.

I want to return your question—or to your discussion about Silicon Valley and Washington. And it seems like a lot of people—certainly my age in college—understood Silicon Valley to be pretty independent of the policymaking body in Washington, and as something of, you know, a home for startups and the like.

And so how do, you know, the valley and Washington understand their own relationship to each other given the kind of independence that Silicon Valley depends on to be this engine for innovation. It depends on, you know, people who are not necessarily tied to the American national security policymaking community ideologically or on the basis of citizenship or something else to, you know, create startups and to invest in new technology. So how are they understanding this relationship and balancing that independence?

ZEGART: Yeah, it’s a great question, and I should say it’s hard to paint Silicon Valley with one big brush, right? On the one hand you’ve got these behemoth companies like Google and Facebook, and then you have, as you allude to, this incredible ecosystem of startups, ranging from two people to two hundred people. They’re not all the same: they have different markets, they have different workers, and they have different relationships with Washington. So I want to be a little bit more nuanced here: it’s not all one relationship between Silicon Valley and Washington, D.C.

There are inherent tensions in that relationship, not just because of events, but also—as you allude to in your great question—you have different employees. There is a sort of libertarian, anti-government ethos that is stronger among many in the valley than you would see in other industries.

You also have a lot of non-American workers, a lot of innovation coming from bringing the best and brightest from all over the world. And so one of my colleagues here at Stanford said to me—and I wrote about this in one of the pieces that you all have in your email—he said, I want to do a thought experiment with you, Amy. China is a blank of the United States. Fill in the blank. And he said, if you ask that question in Washington, policymakers from both parties are likely to tell you China is an adversary, or a strategic competitor, or a rival of the United States. He said, but if you ask that question here in Silicon Valley, here is the answer you are likely to get: China is a market, China is an investor, China is a great supplier of human talent. So there are fundamentally different views between many here in the valley and those in Washington.

However, they do need each other, so this fabulous, independent ecosystem actually isn’t so independent. It’s fed by basic research at universities, which creates the talent of tomorrow and the cutting-edge ideas that are spun off into companies. The government can be a purchaser of these technologies, and that’s where I think a lot of effort is going now. And of course there’s the regulatory environment, so the prior question absolutely affects the innovation ecosystem.

So there is more connectivity. They have to live together, but I think the implication of your question is, you know, there are very different perspectives about the role of government, about morality in warfare, and leaders of various companies are trying to sort through those in a serious way with their own employees and with their partners in the U.S. government.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from National Intelligence University. Please go ahead with your question.

(Pause.)

Caller, please make sure your line is not muted.

Q: All right, can you hear me?

ZEGART: Yes.

OPERATOR: Yes, sir.

Q: OK, in one sense you answered part of it, because my question was going to be: why does Silicon Valley worry about regulation when the regulations are already in place? They’ve been passed by the Europeans (the GDPR), and the Chinese cybersecurity law went into effect in 2017. U.S. companies, in order to operate globally, are going to have to comply with those regulations, and no matter what happens here in Washington, sooner or later a de facto standard will be set by the GDPR and the Chinese cybersecurity law.

Why don’t they understand that? Or do they understand that out there on the west—or as we say in the east sometimes, the left coast?

ZEGART: Well, I think there is regulation and there is regulation. Yes, there is GDPR, and there’s the Chinese cybersecurity law, but I think you are seeing concern here in the United States about antitrust and the potential breakup of tech companies. There is also concern about increasing regulation with respect to political ads. And there is concern about regulation with respect to algorithmic bias: what you see in your newsfeed is determined largely by algorithms, and how do we make sure that fact-based news is actually getting attention in the newsfeed as opposed to incendiary, unsupported news?

So there is still a big regulatory battleground being fought over here in the United States that could have pretty significant implications for these companies, even though GDPR has already come into effect. And that’s why you see the lobbying efforts of these companies increasing pretty significantly. That’s why you see a lot of public discussion and a lot of effort to talk about what they’re doing with respect to election security: the heat is on, and they know it.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Eastern Mediterranean University. Please go ahead with your question.

Q: Thank you very much.

The major challenge in the security threat posed by the cyberattacks seems to be the lack of trust between the Trump administration and the intelligence community on the one hand and the highly polarized political environment in the United States on the other hand.

So, in your opinion, how can this trust gap be bridged so as to overcome the security threat posed by cyberattacks?

ZEGART: So I didn’t hear the first part of your question. Is the question how can we overcome the trust problem between our own intelligence community and the Trump administration?

Q: Yes, and the—

ZEGART: Yes.

Q: —politicized and the polarized environment in the U.S., too.

ZEGART: Boy, that’s a tough question. I wish I had a good answer for you. For those on the call who haven’t been following this very closely, I can’t remember a time when relations between the intelligence community and a sitting president have been more publicly fraught. They are always challenging; presidents are always dissatisfied with their intelligence agencies because they want prediction, and that’s impossible, right? But I’ve never seen a time when it’s been so public and so vicious. The president has referred to his intelligence community officials as Nazis. He has really gone out of his way to denigrate the intelligence community.

And there are real risks to that rift. The short-term risk is that the president doesn’t actually believe the intelligence community and can’t make good decisions, because the basic idea is that you make better decisions when you have better information.

The long-term risk is one of overreaction. We have a lot of former intelligence officials, some of whom have been wading very deeply into political waters, and that reaction may make it harder for any future president to have a good relationship with his or her intelligence community. There is a concern that this will become politicized in a way that is counterproductive for future intelligence-presidential relations.

We have not seen this movie before with the intelligence community, so this is a new challenge. But what we have seen, and where there is research already, is what happens when former military officials become very political and weigh in on political issues. I point you to research by a West Point instructor named Michael Robinson, who recently wrote his doctoral dissertation at Stanford on this topic and has written about it in War on the Rocks. The data so far show that people view these former officials through partisan lenses, and that when former officials speak out, it can increase partisan polarization rather than bringing people together.

So I wish I had an answer for you about how to bridge that divide. I think this president is immovable when it comes to his views of the intelligence community, so I think it’s going to be something we’re going to have to watch closely with the next administration, whether that’s four years from now or, you know, a couple of years from now.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Wheaton College. Please go ahead with your question.

Q: Yes, thank you.

Earlier in your talk you mentioned that, in the history of cybersecurity, the main concern was attacks on physical infrastructure. Why do you feel that these attacks have largely failed to materialize, and do you think that there are still significant threats of that kind?

ZEGART: Yeah, I want to take the back part of your question first. Yes, there are definitely still significant threats to physical infrastructure. I don’t want to suggest that just because we’re dealing with information warfare that those other threats have gone away. They have not.

Why haven’t we seen them yet? Well, I think it’s a combination of capability and incentive. There have been public reports about infiltration of the electric grid by the Russians. There have been public reports that the United States has done the same with respect to Russia. But I think there is significant uncertainty about what happens to escalation after, say, a significant attack on an adversary’s electrical grid.

And so far what we’ve seen is that there hasn’t been the incentive for an adversary with that capability to act. Now we don’t know what will happen in the future, but we do know that the Russians have in fact attacked the electrical grid of Ukraine on more than one occasion, and that’s a real concern.

I’d say also, when we think about why we haven’t seen an attack that brings down the financial sector, we have to disaggregate adversaries’ incentives a little bit. China would stand to lose just as much as the United States would if there were an attack on the global financial system. Russia is in a bit of a different position since it’s not nearly as economically advanced as China is. So when we look at threats to critical infrastructure, and there’s a lot of critical infrastructure, it’s a combination of capability and incentive. And so far, knock wood, those two things have not come together.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Kentucky Wesleyan College. Please go ahead with your question.

Q: Well, Professor, you’ve successfully scared us here at Kentucky Wesleyan College. And my question is, to deal with these many important security problems, how would you advise the Congress to proceed on budget matters? If you go to the Congress and say, we should spend 50 percent of our budget on security, that may not fly, but what kind of guidelines would you give to legislators who are trying to spend the right amount on security, but also considering health, and education, and other important matters? Thank you.

ZEGART: Right, so that’s a big and hairy and interesting policy question. I would say a couple of things. First, if we look at defense spending, there’s a question of not just overall levels, but where are you spending your money? Are you getting the best bang for your buck? Are you looking at what we need to be doing in the future?

So I get very concerned, for example, about our acquisition system in the Pentagon, which takes too long, costs too much, and produces too little. If we look at the average cost of one F-35, depending on how you count, it’s somewhere between a hundred million dollars and two hundred million dollars a plane. And so one of the jokes around national security circles is that eventually we’ll be able to pay for one plane, right, in our Pentagon budget.

So we have to get smarter about how we spend our money and what we spend our money on. And that’s where this tech piece comes into play. So one of the key areas where we can improve is harnessing commercial, off-the-shelf technology from companies and bringing it into the Pentagon. And there are a number of different efforts to try to do that.

AI, for example, can dramatically decrease your costs of maintenance of all your aircraft or your major weapon systems in the way it’s done in the commercial sector. So there are real opportunities to get more for less in the budget. So that’s sort of point number one.

You know, I look at my kids and what they’re going to have to pay, given the deficit that we’re racking up and the debt, and it’s alarming, right? We cannot afford all of the programs that we are spending money on. But we also have to be aware that we are facing an increasing threat environment, and you can’t have a strategy if your defense spending is not aligned to it, right? That is a failed strategy. And so we have to think strategically about what does the threat landscape look like, what are we prepared to do to meet it, and then how do we get the best budget we can.

I will tell you, at least in my view, the best kind of money that we’re not spending enough of is in two areas. One is diplomacy, because the more diplomacy you have, the fewer wars you fight. Our State Department has been hollowed out, and we need to double down on our diplomatic efforts. The second is funding fundamental research in universities, because the breakthroughs of tomorrow are going to come from funding STEM-related university research today. And the CFR report notes very alarmingly how that basic science funding has been cut by almost two-thirds from its peak to today. That’s basically eating our seed corn for our economic vitality, our prosperity, and for solving our income inequality problem. That’s going to hurt us in the future. So we can have both a win on the defense budget side and a win with respect to our domestic economy if we devote a much larger percentage of our budget to fundamental research.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Stevens Institute of Technology. Please go ahead with your question.

Q: Hi. Thank you for doing this talk today.

We got into a debate in an international politics class earlier this semester where the students thought that there was argument to support the notion that the United States should prohibit students from nations such as China, Russia from being schooled or studying computer science in the United States for fear that these tech tools would then be used against us.

What are your sort of thoughts on a position like that?

ZEGART: Oh, that’s a good question that hits very close to home here.

I think it’s a complicated world. On the one hand, I think we’re—universities should be open to the best students in the world, and we are a country that thrives on bringing the best and attracting the best to the United States.

One of the challenges, however, is that when we bring the best and brightest from around the world to study in the U.S., we make it almost impossible for them to stay. It’s hard for them to get visas to stay and work in the U.S. So we need to align our immigration system to be able to keep these fantastic students in the country, helping us with our own development.

On the other hand, we have to be clear-eyed about the fact that we at universities are a major counterintelligence battleground. I see this, you know, outside my door on the Stanford campus. And so we have to balance very carefully the equities of being open and having a global knowledge ecosystem, which benefits everybody, with being savvy about the fact that China in particular is absolutely targeting universities for counterintelligence purposes. How can we be smarter about having visas accorded to foreign students so they are studying what they actually say they are? How can we make sure that we are welcoming the students that we should, while being careful about the risks to our intellectual property from the tiny percentage of students who come to this country who may have other motives in mind or may be working on behalf of foreign intelligence services?

Let me also say that it’s really important—I mean, I did a Fulbright when I was right out of college, and I went to China and Hong Kong right after Tiananmen. I think these international collaborations are really important. They’re important for building bridges, they’re important for collaboration. We have to be able to talk across country lines, even at moments of great strife between nations. That’s how we keep threats from spiraling out of control. That’s how we find common ground. But, at the same time, we have to be really careful when we send students to foreign countries, and some in particular, that they’re not being targeted for attack.

So it’s an area that is really thorny, and I hope very much—and I was just—before I joined this call was on another call—where I hope the National Security Commission on Artificial Intelligence will tackle these issues. What I would like to see is a consortium of universities that are working in a very thoughtful way to think about what are the policies, and practices, and guidelines to balance all of these different issues so that we continue to be the sort of innovation hub of the world, welcoming of all who want to come, but so we’re smarter about the counterintelligence and technology risks of engaging in these activities.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Georgetown University. Please go ahead with your question.

Q: Hi. Thank you for the talk. My name is Lu (sp), from the security studies program at Georgetown University.

My question is about informal institutions. Apart from education and attracting talent, what other more informal institutions in the U.S. innovation ecosystem do you think would be very hard for other countries, including the countries in Europe, to construct or catch up with in the short term?

Thank you.

ZEGART: So, you know, it’s hard to—a lot of people come to Silicon Valley and want to know what’s the magic of the ecosystem and how can we actually replicate that in other places, and it’s really hard to do. And I think—you know, in thinking through, I think there are a few things that enable an ecosystem to thrive. The first is—and obviously I’m biased because I’m a university professor—having really strong research universities that are training the next generation. So that’s got to be part of an ecosystem to be successful.

Second, and this is something one of the earlier callers alluded to, you have to have enough independence from government for the ecosystem to thrive. Why is Silicon Valley in Silicon Valley and not anywhere else in the world? Because the government doesn’t put a strong thumb on that innovation ecosystem, which allows creativity to flourish. You also have to have free capital markets: you have to be able to attract capital that attaches to ideas early so that people are rewarded for taking risks. And then of course you have to have the rule of law, because none of this works if you don’t have certain rules of the road where you know what’s allowed and what’s not.

All of those components are actually much harder to bring together than we think, and I think that’s one of the reasons why Silicon Valley has been able to do so well for so long, while other areas that have tried to copy Silicon Valley haven’t been as successful.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Portland State University. Please go ahead with your question.

Q: Thank you, Dr. Zegart. I’m Jim Mignano, Portland State University political science graduate student.

I’d like to hear more about the resilience of our financial services sector. Could you elaborate on what we should be looking at specifically in the case of financial technologies that might either disrupt or strengthen the sector?

ZEGART: So I wish I had a detailed technical answer to your question, which I don’t, but you’ve hit on something that’s really important, which is that there is a big debate right now about whether deterrence is ever possible in cyberspace. And some come down on the side of, well, the best we can do with deterrence is basically deterrence by denial, and what that means is resilience, right? So an adversary may attack the financial system, but we’re going to have resilience.

And how do we build resilience? Well, the financial services industry in the U.S. has been at the forefront of cybersecurity. That’s not to say it’s perfect, but they’ve been working pretty hard at it. So there are a couple of things that I think are happening. One is they are investing a lot of time, attention, and resources in defense. But as you know, defense is never going to be perfect in cyberspace. And so you have to be able to operate even after you’ve been attacked or after you’ve been infected.

And sometimes, and there’s a lot of discussion about this too, the ultimate way to be resilient against a technical vulnerability is not with a technical solution. What does Facebook have as its backup-backup-backup plan if all their data centers go down? It’s not technology; it’s paper binders that have detailed instructions about how to get everything up and running. So we have to be creative about resilience. Resilience is about making sure that your failsafe, which may not be technical, is something you can actually access if and when you’re attacked. So I think the financial services sector is doing a lot there.

Let me just add one other thing, which is that the best cybersecurity practice that I’ve seen at a financial services company is about supply chain vulnerability. You always have to ask, who is my weakest link? One company that I’ve seen vets the cybersecurity of vendors four hops away. So they are vetting the cybersecurity of their vendors’ vendors’ vendors’ vendors. That’s the level of detail into the supply chain that you’re going to have to have to really improve your defenses.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from College of London, Florida State University. Please go ahead.

Q: Thank you. This is the University College, London, and we have an international public policy program with a question from one of our students.

Q: Thank you. Thank you for the talk.

(As you said ?), data has become much more essential for national security. So the question is, how can countries cooperate to combat transnational threats such as global terrorism? For example, European countries, a few years ago, somehow failed to share intelligence to prevent (a series of ?) terrorist attacks. Thanks.

ZEGART: Yes, so information sharing, particularly for counterterrorism, is vitally important. And we see challenges of information sharing within governments as well as across governments.

But that’s an area where I think there has been tremendous progress since 9/11, not only in the United States, but internationally as well—still problems with information sharing, but that’s getting much better.

The challenge with information sharing is much more acute with cybersecurity. I mentioned in my prepared remarks how there’s a use-it-or-lose-it quality to cyber weapons that makes sharing information about what you have very difficult. And I think that inhibits sharing and collaboration, not only in international treaties or international regimes, but also between allies sharing information about what it is they’re doing.

So I’ll give you one concrete example to make it clear—the difference between physical space and cyberspace. If I want to attack a target in physical space, as long as I have a bomb of sufficient yield, I know I’m going to destroy that building, right? It doesn’t matter whether the building is made of wood or stone. It doesn’t matter whether that building has two doors or three doors. My bomb is going to destroy it.

The cyberspace equivalent is not true. If I have a cyber weapon, and that target machine has a tiny change in its configuration, right, they installed the Microsoft patch on Tuesday, my weapon could be rendered completely useless. So tiny changes in the target of an attack, whether you patch, you remove your thumb drive, whatever it may be, can render my weapon absolutely ineffective. And that’s why there is such an incentive not to share or make transparent what cyber capabilities each state has. That impedes information sharing with allies, and it impedes finding common rules of the road with adversaries.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from National Intelligence University. Please go ahead.

Q: Hello. In the recent article with Mike Morell that you referenced, you make a number of suggestions about culture change, and one of those is that we should create a new agency wholly dedicated to open source material. I’m curious to know a little bit more about where your thinking is on this. Do you see this as both inward facing and outward facing, given that you named one of the key battlegrounds of the future to be truth itself? And do you see room for that being a potential area of partnership between the two coasts of the United States?

ZEGART: Yes, so I’m glad you brought that up. So Michael and I went back and forth for a long time talking about ideas behind that article, and the importance of open source intelligence is something we agreed on almost immediately.

And so one of the barriers we find in the intelligence community, and you’re at NIU so you know this better than I do, is that there is a tendency to believe that if something is classified, it must be more important than something that isn’t. And that’s less and less true today. There is often more insight to be gained by harnessing and synthesizing open source information than by just looking at secrets. And so that’s a big cultural shift that needs to happen.

And we kept thinking in our minds about the Air Force during World War II. Right after World War II, as you all know, the National Security Act of 1947 created a separate Air Force. That used to be part of the Army. For air power to get the attention that it deserved, it needed to be a separate organization.

We think the same is true for open source intelligence. If open source continues to be part of the CIA or part of what intelligence agencies do, it will never get the full-throated attention and resources it needs to generate the insights that our policymakers deserve. And so that’s why we advocate, in one of the few concrete proposals we put in that article, that open source has to have its own center.

The question alludes to something that is really thoughtful, which is that this is also a primary mechanism for the government to engage with the private sector. So when you are talking about open source information, there are all sorts of opportunities to think about how collaborations with various companies might work, bearing in mind, however, that all of this has to be done with a rigorous oversight regime, so that Americans understand, and there is transparency about, what open source intelligence can and cannot do. It’s one thing for Google to have information about your habits; it’s another thing for an intelligence agency to have that same information.

So those very important constraints would need to be worked out, but I think there is real opportunity there.

FASKIANOS: Thank you. Next question.

OPERATOR: Thank you. The next question will come from Georgetown University. Please go ahead with your question.

Q: Yes, thank you. This is really a fascinating discussion, and the question that I have is kind of unformed. I’m not quite sure where I’m even going with it. But when you started right at the beginning talking about, let’s call it, the democratization of access to cyber tools, it would seem, in spite of the conversation about regulations, that we’re headed for a very chaotic potential for conflict coming from a variety of very small but powerful actors, kind of like what happened with terrorism.

And I’m just wondering whether there is any broader thinking in the U.S. government on how to address that, because regulations are not going to work. And if you have individuals or very small groups able to leverage great power like this, it seems very disturbing.

Thank you.

ZEGART: Yeah, so there’s a wide spectrum of cyber challenges, as I alluded to. It depends on where you sit. If you sit in a company, you are much more worried about the low end of the threat spectrum, cyber criminals and ransomware. If you are sitting in Washington, you are worried about the high end.

Perhaps the most disconcerting element, as we look back and learn more about the 2016 Russian election interference, is how cheap and easy it was. This didn’t take a lot of money. It took some planning: the best information that we have now suggests that the Russian weaponization of social media actually started back in 2014. We know from the Mueller special counsel indictments that Russia sent operatives to the United States with the express purpose of learning how they could make their social media operations more effective. So this was a dedicated effort by the Russians, but it didn’t cost a lot.

And now the Russian playbook isn’t just for Russia anymore. Facebook has been reporting publicly about other countries that are utilizing these same kinds of tools, and this sort of information warfare is exploiting the openness of democratic systems.

Now how do we deal with that? We haven’t dealt with it very well. We see some movement on the part of tech companies, some movement on the part of governments, but this is going to require much greater effort to figure out some creative solutions to protecting the fundamental elements of our democracy.

I’ll give you a couple of thoughts on that. One of the things with respect to elections is that voter suppression can be very insidious and very indirect, with, you know, fake Facebook personas or automated tweets that are essentially encouraging people to stay home. That’s not changing the outcome in terms of vote tampering, but it is suppressing the vote. And there is some evidence that we saw that in 2016.

So how do we deal with that? Well, elections are a time-specific event. Polls open; polls close. So you can imagine, if we thought more temporally about it, do we have blackout periods before elections for certain types of media, certain types of ads, certain types of communications? Do we think about elongating the election cycle so that we don’t have such a focus on one particular moment in time? There are ways to deal with the time horizon that can mitigate some of those challenges. We’re never going to defeat them entirely, but we can certainly do a lot more to mitigate them. So those are the kinds of things that at least I’m starting to work through: what can we do to mitigate this threat in the near term?

FASKIANOS: Well, Amy, this was a terrific hour. Thank you very much for giving us your time, your analysis, and your thoughts. We really appreciate it, and thanks to all of you for your great, great questions.

You can follow Amy Zegart on Twitter @AmyZegart, so I hope that you will do so and find her thoughts there as well.

ZEGART: Well, thanks very much, Irina, and thanks for the great questions. I hope you all have a good Thanksgiving.

FASKIANOS: Our final call this semester will be on Wednesday, December 4, at 12 p.m. Eastern time. Kathleen McNamara, professor of Government and Foreign Service at Georgetown University, will lead a conversation on “Democracy and Identity in the European Union.”

So in the meantime I encourage you to follow @CFR_Academic on Twitter, and visit CFR.org for new research and analysis, including our Election 2020 hub, which we just launched, with presidential candidates’ responses to a dozen foreign policy questions and an interactive tracker of the candidates’ positions on a variety of critical issues; our podcast, The President’s Inbox, which showcases different perspectives on international challenges; and a forthcoming video series.

We hope these resources will help you navigate the important foreign policy choices at play in the campaign. A shout-out to the Carnegie Corporation for their support of the Election 2020 hub. We look forward to reconvening on December 4, and, as Amy said, happy Thanksgiving to all of you.

(END)
