Symposium

2024 Cybersecurity Symposium: Protecting the United States and European Union From Cyber Threats

Wednesday, May 15, 2024
Virtual Session I: Cyber Resiliency for Threats to Critical Infrastructure

DUFFY: You all came out in the rain. Thank you for that. Does anyone else live on Capitol Hill? Is it just me? Anybody else? It was an adventure. I had not one, not two, but three motorcades this morning, my friends, which is my D.C. record. It was very exciting to have launched the day with a new PR.

So good morning and welcome to the first of two sessions today at our sort of mini cyber symposium. This morning I am delighted to welcome Rob Lee, the CEO and co-founder of Dragos; Phil Venables, the chief information security officer for Google Cloud, as well as a member of the President’s Council of Advisors on Science and Technology, and a CFR member; Tarah Wheeler, our senior fellow for global cyber policy, as well as the CEO and founder of Red Queen Dynamics.

I wanted to start by asking each of our panelists to just give us a brief explanation of what they do in this particular space, how they came to it, and how they see themselves in this space, because I think that sometimes for those who don’t come from cybersecurity it can feel like a sort of overwhelming, you know, not the most human area, and what we have are three panelists who are deeply thoughtful and human in their approach to this topic. And so, Rob, can I start with you?

LEE: Sure. So everything I focus on ends up being operational technology. So everything you’ve probably heard about tends to be what we refer to as enterprise information technology: cloud infrastructure, computing systems in the enterprise, et cetera. Everything I focus on is those digital systems that operate gas turbines, power systems, rail, the underpinnings of data centers. So kind of IT plus physics would be, really, that area.

I started my career in the U.S. Air Force, spent most of my time over at the National Security Agency, built out the government’s first mission looking at state actors breaking into industrial sites around the world. Got involved in pretty much every major incident you’ve heard about in that field from leading the investigations in Ukraine in 2015, which was the first time ever a cyberattack took down electric power; to getting involved with an attack in Saudi Arabia in 2017, which is the first time an adversary tried to kill people through a digital attack; all the way up to recently with the Colonial Pipeline and leading the—sort of the operations portion of that incident response.

So everything my firm and I focus on is trying to make sure that the “critical” part of critical infrastructure is getting protected, because that impacts our livelihoods and our safety. It’s all the revenue of your companies as well. But most importantly, I’ve got two kids and one on the way, and I’d like them to have lights and water when they grow up. (Laughter.)

DUFFY: Of all of the things that you just said, the thing that I find most impressive is that you went for three—(laughter)—because the idea of going for three was terrifying to me.

LEE: Oh, my wife is a German Bavarian. We’re probably going to have, like, four more. (Laughter.) No.

DUFFY: (Laughs.) But you’ll have a really compact European family, right?

LEE: Oh yeah. I’ll have my own little microgrid eventually to support the fam. (Laughter.)

DUFFY: Phil, how about you?

VENABLES: Yeah. So, you know, my background is I’m a software engineer by background, worked on energy, finance, telecom, defense applications. And like many people of my generation, I never set out to be a security person. I stumbled into it and got stuck doing it for many years in financial services. Came to Google about three years ago, you know, and one of my main focus areas at Google is just how do we make security and resilience easier for everybody by embedding this stuff in a platform that everybody else can build on, whether it’s Cloud or whether it’s all of the other types of Google services. And so me and my team get up every day thinking about how we can make the secure path the easier path for everybody.

DUFFY: And Tarah?

WHEELER: I am a person who ended up taking systems apart a lot when I was a kid because I wanted to understand how things worked. And I think that there’s a good number of people who end up in cybersecurity who used to get into a lot of trouble, and I used to get into a lot of trouble. That was sometimes bad trouble, sometimes good trouble, but mostly what it landed me with is a set of skills in technology and in speaking that lets me translate complicated technology for people who don’t understand it and need to for their own lives.

And it landed me in a place where I find myself and our team building technologies and ways for small businesses to understand their cybersecurity responsibilities, attest to them, and then enter the defense industrial base. A lot of small businesses and a lot of IT providers in this country don’t understand what they need to do to stay safe. And I’ve talked about this a couple times before; I think a cybersecurity person is a technologist who also just can’t believe everybody else isn’t seeing the wreckage this could become if we didn’t take some precautions around it. So I guess an overdeveloped sense of responsibility and mischief, maybe, landed me in this seat.

DUFFY: (Laughs.) And we have almost seventy members joining us on Zoom, by the way, so I want to greet our audience online as well.

And, Tarah, I want to, before I—Phil, I have a question for you. But before I turn to that, Tarah, I want to ask you, when you say defense industrial base, what do you mean?

WHEELER: When I’m talking about the defense industrial base, I’m talking about the third- and fourth-party suppliers to the large contractors that provide supplies, technology, and munitions to the U.S. military, frankly. When you think about the organizations that are manufacturing MREs or putting them in trucks and getting them to National Guard bases in Iowa, I think of the thirty-person trucking company that doesn’t understand how to fill out a data privacy form so that it can be entered as a potential vendor for those National Guard bases. And frankly, we don’t really want the fact that X number of MREs and ammunition cases are being shipped to a specific location in this country to be readily available public information, and yet the companies doing the shipping don’t understand how to secure that information. And they’re part of the defense industrial base, the people supplying the U.S. military.

DUFFY: Perfect. Thank you.

So, Phil, you are a member of the President’s Council of Advisors on Science and Technology, and this week you have launched a Strategy for Cyber-Physical Resilience. Can you walk us through what’s in the strategy? Why is this a big deal? Why is it important?

VENABLES: So, yes. So we—in fact, it was last month the report was kind of released.

One of—so—

DUFFY: OK. Sorry.

VENABLES: This PCAST report on cyber-physical resilience. So, kind of unusually for a government report, those three words actually say most of what the report is about.

So, the cyber-physical piece. The reason PCAST, along with the many other things that PCAST does, wanted to produce a report to the president on this was basically to recognize that we can no longer think of our cyber systems as being separate from our physical systems. Across all of our critical infrastructure—whether it’s water, energy, health care, telecoms, all of these things—we’ve taken a bunch of systems that were designed to work only in isolation—they were never designed for the security expectations that we have now—and we’ve basically wired everything together. So the cyber world and the physical world have merged in some quite dangerous ways.

And then the resilience part comes from the fact that we need to recognize that while security is very much a clear goal—we have to be able to prevent and defend against attacks—no credible security person will ever say we can achieve a hundred percent security. So we have to plan not just to be able to recover systems but to be able to operate systems, perhaps in some degraded states, in the face of outages, natural disasters, cyberattacks, and other events. And so all of the recommendations in the report are about how we go about achieving that.

And I’m not going to spend, like, ten minutes, obviously, going through the report, but probably the biggest things in it are about how do we—as a society in the private and public sector that collectively run this cyber-physical infrastructure for the nation, how do we set more ambitious performance goals? And how do we connect the tone at the top of organizations—that pound the table, that say security and resilience are important—how do we connect that with the actual resources in the ranks that need to actually do the work? And so thinking about how do we set more ambitious goals and connect the tone at the top with the resources in the ranks to actually make more change in society is what the report is all about.

DUFFY: That’s great.

And, Rob, I’m going to turn to you. We recently saw Russian hackers breach a water system—well, a water system in Texas. Can you connect what you saw there with what Phil is saying in terms of the lack of connections that we have right now and how we need to think about security differently?

LEE: Yeah. When he’s talking about cyber-physical, a type of cyber-physical is operational technology. So same domain. And as you mentioned, there’s a lot of systems across the world—and we see this a lot in the U.S. as well—where it used to be this antiquated, legacy, maybe analog-type control system—the things that were operating the water filtration units, the water pumps, et cetera—and by necessity we’ve moved those to more digital systems, and they’re starting to get connected, sometimes even without the owner of that facility really understanding it, just as their vendors and contractors sort of help them out, get to the next phase. And as a result, the attack surface has then sort of blown up. So you have adversaries and criminals and nonstate actors, as well as state actors, that can just reach across the internet to grab those systems, and anything that system can do naturally they can do with it. If I can overflow a water tank as the operator making a mistake, so can a Russian, you know, ransomware group.

And so we’re starting to see us move down this path where, call it fifteen years ago, we would still see all these compromises. We’d see China, Russia, Iran take a swing at getting into these industrial sites; that’s not new. What’s new is that the level of connectivity is allowing them to do damage, to cause physical consequence. And we’re seeing adversaries that are actually now bold enough to do it. And so there have been multiple sites across the United States that have been hit, but they’re local and they’re smaller, so they don’t make the big national news. A couple years ago in Oldsmar, Florida, somebody tried to dump chemicals to poison the water for the town. There have been three different attacks here in the U.S. in the last year where they’re overflowing tanks and impacting local providers. That same issue they’re facing, the large players face as well.

And again, I think there’s the idea of let’s think deterrence, let’s think artificial intelligence, let’s think quantum encryption, what’s the next thing that’s going to save us; in reality, a lot of these companies just need to do the basics. It’s the foundations. We have the technology. We have the insights. Very often it’s an economic issue or a policy issue. If you want to help out a small water facility, even with government grants and work with government, et cetera, they’re too small to even know how to fill out that grant, kind of what Tarah was talking about. Or even the larger players may not have the economic ability to do it. A gas pipeline is often considered a monopoly in its territory, so the rates are governed by, like, a public utilities commission or similar, and they’re not able to pass the security costs into the rate; therefore they can’t make the security decisions.

So it’s just us tripping ourselves up, and that problem is only going to get much worse as we go much more digital, much more standardized across the industry.

DUFFY: So I was—I was having dinner recently with one of the sort of leading cybersecurity diplomats from another country, and one of the things I pointed out is the joys of federalism—(laughs)—how federalism in the United States complicates this in ways that I think if you’re looking at the United States as a nation as a whole from the outside it’s really easy to under-index the complexity of that.

So, Phil, you’ve seen an enormous amount of different voluntary standards come out from different governmental organizations, and we’re seeing standards come out at the federal level, at the state level. We’re seeing, you know, industry standards coming out. So there’s a really complex array. And then there are also different compliance requirements. Can you walk us through, on the Google side, when you’re thinking about large-scale infrastructure, how are you trying to secure these things against those standards, but also where do you see the fundamental gaps, essentially, in trying to adopt those standards?

VENABLES: Yeah. So, you know, from a Google perspective, clearly we’re a very large global infrastructure. And so, on the one hand, because of that scale we have a lot of challenges; but on the other hand, because of that scale we have an easier time of it, in that we can deploy more resources.

So we’ve been focused heavily on security by design: How do you build in security, not bolt it on? How do you think about resilience by design? A whole array of other things. So generally speaking, you know, we pay a lot of attention to different standards, regulations. We’re held accountable by our customers and governments to meet those. And we’re generally focused on that.

When you flip it around, though, to kind of the broader point, when you look at smaller organizations and other less-well-resourced organizations, it can often be challenging for them to meet these standards. And then, in turn, there should be higher expectations of the large technology companies, service providers, and specialist software providers in those industries that they rely on to implement a higher level of control, because many small/medium-sized businesses and many local utilities don’t have the expertise or the funding to be able to do all this for themselves. So there should be a more significant expectation on the organizations that can do this.

In the PCAST report, for example, we call out this kind of eighty/twenty problem. You know, 80 percent of the risk in society is due to kind of 20 percent of the underlying service providers because of the commonality across many industries. And you can think kind of woe is me; there’s so much kind of concentration risk there. But then that’s a massive opportunity, because if you can do that uplift in that smaller set of critical providers you can have a wide-ranging effect. And you see this pattern in every sector. There’s a small set of core technology, service and other types of providers, that may not be kind of well-known household names. In individual industries, there’s very kind of niche providers. If you can get them to improve, it has this wide-ranging effect across the industry.

DUFFY: And, Tarah, I want to turn this over to you because, I mean, you literally learned to fly in part so that you could get to, like, the small company with the guy who, like, makes tin.

WHEELER: (Laughs.)

DUFFY: And so I want to hear from you, because you know, you have built your company to really work with mom-and-pop operations. When you—when you’re listening to Phil and you’re thinking—like, about that complexity, what are you seeing on the ground when you are—I mean, your company signed the Secure by Design Pledge last week with Jen Easterly of CISA at RSA. What are you seeing when you’re talking to people? Got a random company in a random small town in America; what are they up against?

WHEELER: Well, they’re up against a stifling of innovation, I think, in two different ways: first, in terms of the compliance expectations that are often pushed out toward them as they sell into larger companies; and second, in the way that we build technology in this country, which is that we let people who like shiny things give you money and then you build things for them, right? And the challenge that I often see when I talk to mom-and-pop organizations—when I talk to small to midsize enterprises—is that people aren’t first innovating and developing solutions that help them get safer and more secure in a way that understands that they’re not just smaller than an enterprise but fundamentally different from a large enterprise that can afford to bring compliance experts in house.

What I find when I listen to Phil, when I’ve talked to enterprises before, is that in reality what happens on the ground is that those small organizations that end up supplying larger organizations—I’m thinking about—(inaudible)—on a lot of different levels—end up having 400-question, fifty-one-page vendor assessment questionnaires pushed toward them and are told: address these as best you know how, and if you do well enough we’ll let you sell to us. That’s actually the origin story for my company: about five different people I knew asked me for help just getting set up to get through vendor assessment processes and portals. And every single time another person asked, I doubled my price, because capitalism, right? And at the end of it, I was like, I don’t have a service here; this is an area of innovation that’s missing, serving these smaller businesses who are experiencing this problem. And yet, there’s not really a path toward creating a company in this country that works on boring problems.

The kinds of problems that Rob has mentioned before are low level. They are multifactor authentication; you know, the codes you get on your phone. They have to do with making sure that no one’s shoved a rock into the door of the server room to keep it physically open. They have to do with whether or not the people in water-treatment facilities are remembering that they have to turn the lights off at the end of the night or unplug machinery when they walk away from it. These are very basic concepts, and yet they feed into whether or not a small business can sell into and become part of a larger supply chain. And yet, that innovation for those small businesses is often stifled by the very thing that we’re trying to create, which is that an organization that already has its act together on compliance may be the one that’s chosen by default for larger enterprises and for the military to supply them. That’s stifling the competition amongst small and innovative businesses that might be able to compete in the defense industrial base, finding new technologies and new, more efficient ways and processes of supplying the U.S. military. And yet, because they’re small—they’re fast, they’re agile, but they don’t have their game together on compliance—they can’t get those magic waivers on the Cybersecurity Maturity Model Certification process that the Defense Department requires just to sell into the DOD.

So I see some stifling of innovation here. It’s an interesting challenge. It’s a—it’s a difficult problem to work on. It’s why I like working on it. But it is hair-pulling most of the time, honestly, seeing the lack of capacity, the lack of understanding, and the lack of translation of those compliance standards to something meaningful that small businesses can operate on in this country and succeed.

DUFFY: Rob?

LEE: And if I—if I could—

DUFFY: Yeah.

LEE: On that topic of regulation, just two policy recommendations that would help.

Number one, we absolutely need to harmonize regulation in this country. You have big electric companies, as an example, that fall under the jurisdiction of the Nuclear Regulatory Commission. You have TSA. You then also have NERC under FERC. And on top of that, you also have your state and local regulators trying to develop their own standards. It is very common that a single power company has five or six different regulatory regimes competing for its attention. So we’re overspending without actually accomplishing much.

The second policy recommendation is—and I’ll stand up here all day long and say, hey, that cyber-physical stuff in OT is very, very different than IT. But even inside of enterprise IT, you’ll find a power company needing something different than a rail company. And so on a policy level, we’ve spent too much time trying to tell people how to operate their infrastructure, tell them how to implement security, this is a good control. And that’s not necessarily the truth when it comes to how those companies operate. At a policy level, the government needs to get a lot better at why—sort of why are we doing this? What is the requirement? What is the concern that we have? What is the outcome we want the companies to achieve? And to let the asset owners and operators do the how. And that’s the best method I’ve seen work, and we’ve seen it work before under FERC with a couple things that they’ve done.

So those two sort of positions are things that could make regulation and compliance in this country actually more meaningful to security outcomes.

VENABLES: And I think the only thing I’d add to both of those is: let organizations consume and reuse the certifications of the providers they use. Oftentimes large tech companies or small tech companies that are these service providers will have all of the audits and certifications, and they hand those off to the small business that wants to use them, but then some other regulation will require that small business to do a fresh risk assessment rather than just pass through the certification. And you know, fixing some very simple issues like that could make a big difference to organizations.

DUFFY: Anything where we have a—we have a strong argument that paperwork is a national security threat, I am—I am here for that discussion.

You know, and what strikes me as interesting about this, Rob, to your point, so I spent many, many years overseeing foreign assistance programming to provide digital safety training and digital security training to activists and journalists and vulnerable populations across about ninety countries. And the—one of the things that we would constantly have to work through was how do you—how do you report on impact, right? How do you—and there would be a push from the—from the government, right, from the funder, to say: Tell us who is safer. And I would always have to go back and say we cannot tell you who is safer. We have no capacity to tell you that they’re safer because of this. What we’re pushing for is like spidey sense. Like, what we’re pushing for is some sense of empowerment and agency to say this doesn’t feel right, or I should figure this out, or this feels like it could be a problem; like, the ability to sort of get in there. And it feels—what’s interesting to me hearing you say that is that it feels like there’s some element of that, then, with the small businesses as well—like, this is what I should be getting to; how I’m going to do that is going to require some innovation on my side.

What do you—there’s been a lot of discussion this year around critical infrastructure and, you know, potential Chinese incursions into critical infrastructure, Russian incursions, the upcoming elections, a potential invasion of Taiwan, and how that could be used. What in that debate, Rob, do you hear as sort of maybe hype cycle or fearmongering, you know, like putting the emphasis on the wrong—(intentionally mispronounces)—syllable, if you will? And what do—what do you think we’re not looking at closely enough and that we should really be paying attention to?

LEE: Yeah, that’s a good one. So I mean, over the years I’ve definitely pushed back against the hype cycle on this stuff. And, look, I love our federal partners, but nobody stands in front of Congress without asking for budget, it seems, and so there’s always some pitch being made, and there’s been a lot of things that have been hyped up over the years.

I’m not so sure too much of this is hype these days in the sense that, again, fifteen years ago China breaking into electric systems, it’s happened that they were getting into the enterprise IT networks, they were trying to steal stuff. It’s bad, but it’s not like, oh my gosh, the lights are going to go out. This latest one was getting into the right locations and the right targets to embed in the right infrastructure to be able to do consequence attacks; Russia developing capabilities to employ, and then getting into the right locations. I am actually thinking we might be underselling it a little bit, where if you—if you just look at something and go here’s what’s going to happen, I think that’s a very poor job. Like, as a prior intel analyst, I would never try to predict the future. But I’m a big fan of forecasting and saying, look, here’s where the hurricane is going. Here’s what we’ve seen, consistently seen. Here’s the next move they made. They’ve been consistent for the last fifteen years. This is a very predictable outcome we’re looking at.

And I think the focus on the China-Taiwan issue is very, very real, and I think it is well understood across the national intelligence community that cyber as a component of influence, and cyberattacks on infrastructure, are very much in the cards. And we’ve had these near misses time and time again, and nothing ever happens.

I mean, in Ukraine in 2015, I remember getting back, briefing the White House. The people around it, they prepared a Rose Garden speech for the president to go out and talk about it, just to acknowledge that the attack had happened and that targeting civilian critical infrastructure is off limits. Not saying we’ll do anything about it, but just even acknowledge it. And the advisors pulled him back and said the American people have trouble understanding why they should care about something in Ukraine and this just doesn’t seem like something we should get involved in, which just sent a giant signal to anybody doing attacks that, hmm, we’re not even willing to talk about it, let alone do something.

Same thing happened in 2016 again. I was working with the Trump administration. Same thing, didn’t want to talk about it.

Colonial Pipeline was the first time we even said something, and it was really not about the attack; it was more about the impact. And it wasn’t even the cyber component of it that we discussed, and there was still no let’s-do-something-about-it.

So I think we have lost all credibility in doing anything against these attacks. We have expanded the attack surface, we’ve seen our adversaries get really smart about how to target these systems, and we’re seeing it happen around the world in record numbers. I don’t want to contribute to hype, but I’m confused at where the hype is currently.

DUFFY: Mmm hmm.

Phil?

VENABLES: And it’s interesting. So, you know, one of the things we called out in the PCAST report, related to what Rob was saying, is that our adversaries have a better understanding and map of our critical infrastructure than we do ourselves. And one of the recommendations in the report that we’re hoping DHS and the National Risk Management Center will continue to champion is the creation of what we’re calling a national critical infrastructure observatory, which is a capability to map ourselves: to look for the dependencies, the concentration risks, all of these things, so that if an event happens we know the places we need to defend the most proactively. Effectively, we’d have, in essence, a digital twin of the United States’ critical infrastructure, so that we can outmatch our adversaries in knowing where the risk points and the likeliest points of threat are, and get ahead of those types of things.

DUFFY: And, Phil, what’s a digital twin?

VENABLES: A digital twin is—you know, it’s an industrywide concept where you build a digital replica of physical infrastructure and you manage the digital replica, and then that automatically updates the actual physical infrastructure. In this case, it would just be basically the ability to monitor and show what’s happening in that wider infrastructure.

LEE: As sort of a quick, hard-hitting fact, because I’ve worked on this problem—not on that piece, but on, like, what is critical infrastructure in this country, and there’s a variety of lessons.

DUFFY: Literally was my next question, so thank you. Perfect.

LEE: Oh, OK. Never mind. No. Oh, you want—yeah. OK. All right.

DUFFY: No, no, no, no, no, go for it.

LEE: So when you look at—I don’t do the read ahead of the questions. You sent them out. I like the organic, so I never actually read them. So that’s my fault.

But the—everybody claims to have lists. So I’m a Department of Energy employee as well, looking at how we design the electric system of the future, on their advisory committee. And they’ll come out and say, well, we have DCI; DCI is the list that’s critical. The military will say, we have the Section 9 list, and those are our critical sites. Like, I was involved in the creation of some of those, and the critique was always: you’re just looking at what the biggest sites are. And those are really important—and we’ll talk about small business, but big sites are in trouble too—but they just go: this is the biggest substation, therefore it’s on the list. But it might be a distribution substation that feeds a certain port that enables our ability to put troops in the South China Sea, and it’s not on anybody’s list. So, like, what is critical infrastructure in this country is not built on the requirements and the mission function; it’s built on the voltage.

And so we’ve tried to work with government on that: what is your purpose? Do you want to launch ICBMs back if something happens? Well, that’s a different set of critical infrastructure than having a crank path to bring back up the electric sector, or a different set of critical infrastructure for vaccine production and manufacturing and distribution. So the government’s not quite good yet at knowing what the actual requirement is, so that we can then figure out how to solve it. It goes, as I said earlier, right to the how: here are the sites, this is what you need to do, here’s the NIST framework, go apply this. It’s like, hold on, hold on. What are you trying to accomplish? And let’s figure that out together.

WHEELER: Unfortunately, right now critical infrastructure is as likely to be anything that the three of us say it is as anything we have defined in government.

DUFFY: So this was—this was a follow-up question I had for you, Tarah: Hospitals.

WHEELER: Absolutely.

DUFFY: Are hospitals critical infrastructure?

WHEELER: Raise your hand if you think hospitals are critical infrastructure. (Laughter.) I have some bad news for you about IT in hospitals, especially rural hospitals. In 2017, we experienced two major cyberattacks. One was in May—May 12, 2017, which was North Korea releasing a weapon that ended up attacking the British National Health Service—all the hospitals in the U.K. And, actuarially speaking, something on the order of 300 people died; about 13,500 appointments were canceled; cancer treatments went undone for weeks at a time. In June of 2017 we saw the NotPetya attack, which crippled 40 percent of the world’s shipping for nearly three weeks.

The challenge that we are having right now is this: of all of the companies and organizations that were vulnerable to the underlying exploit—it was called EternalBlue, and Microsoft had actually patched the vulnerability two months before these attacks came out—I and Jake Williams, another information security professional, just did a recent estimate, and we think that approximately 15 to 16 percent of those organizations still are vulnerable—seven years later, right? And the answer to that has to do with medical equipment, critical infrastructure, and devices people need running more than they need patched. Many of these are hospitals that can’t afford to patch, update, and handle their outdated equipment because it’s serving a medical purpose. They’re ultrasound machines. They’re cameras. They’re operational technology that Rob can tell you a lot more about.

But the concern that I have is that so much of our infrastructure is running on outdated machinery that is being operated by people who are directly incentivized not to patch it—whether because they’re forbidden to by vendors or simply out of fear that if they turn it off they won’t know how to get it turned back on again; they don’t have the technical expertise to make that happen. So I think we have a deep and fundamental problem in health care in this country that revolves around the fact that, in saving human lives, people often think of the technology as secondary. In reality, more and more of it is not only becoming connected, but crucial to keeping people alive. And that’s where our vulnerabilities live.

It was devastating to talk to some of the incident responders from WannaCry. I spoke to several of them in 2021, and I remember listening to a British IT responder in the National Health Service choke up and ask me if I thought he’d done the right thing by shutting off the technology that his doctors were using in the hospital he was servicing at the time. People were hurt one way or the other. But the thing that he remembered the most was that no one believed him about how serious the attack actually was.

DUFFY: And that, I think—Tarah actually wrote a really beautiful paper for her Fulbright with Lord John Alderdice specifically on the psychosocial impacts on first responders in cybersecurity incidents, and the fact that we need to be thinking about this much more seriously as a field, and how we help those individual human beings at the frontline of an attack, because they are frequently, like, the random IT guy just shouting into the wind, and then everyone turns to them to fix it. And it causes true mental anguish. It’s devastating.

We have to turn to questions. I’ve already stolen four minutes of the audience’s time for questions. So, Phil, I just have one last question for you. And everyone online and here, be thinking of your questions. Reminder that this is on the record.

When you’re hearing the challenges that Tarah is describing, especially around the operational components, and you’re thinking of your role as a cloud provider that people are looking to—like, I’m going to use Google Cloud because it’s going to help keep me safe and it’s going to help protect me—what are the connections there? Like, what are the fault lines that you can’t protect against? And what can something like Google Cloud protect against?

VENABLES: So it’s interesting. There’s a thing in the cloud industry, you know—and you know, in fact, any kind of technology service, there’s a thing everybody calls shared responsibility, which is the cloud provider runs the base security infrastructure and then everything else is the customer’s responsibility. And look, that’s kind of contractually correct, but we started talking about this in terms of shared fate rather than shared responsibility, and it informs a philosophy where we think all tech companies need to step across that line of shared responsibility and think about: Are we shipping products with full safeties on so that customers then don’t need to quickly turn some dials to make it secure? How do we think about making sure that some of the things that Tarah described, which is, you know, our ability to provide service doesn’t limit the customer’s ability to keep things up to date? And how much of the responsibility is with us to keep things up to date for them?

And the broader thing of that is there’s a lot of disincentives in the ecosystem. Like, for example, if you update a medical device, there may be some rules that require that to be recertified and you’re kind of locked up for months in doing that. Collectively as an ecosystem, we have to look at all of this and design it so things can be kept up to date, things can be segmented. The headwinds of well-intended rules that were designed for a previous era that stop us doing these things, we need to adjust those.

And so I think we too easily focus on this as a purely technical problem, whereas it’s actually more of kind of a societal/organizational/cultural kind of end-to-end systems problem. But I think the more we look at it like that, the better off we’ll be.

DUFFY: And I think, Rob, that goes directly to your point as well. People are making rational decisions based on a cost-benefit analysis for their particular situation, and I don’t think that we’ve empowered folks to make that rational decision sort of on either side in a way that actually does truly balance the risks as well.

With that, I have so many questions about ransomware and cybersecurity insurance and how AI increases threats and all these things, but I have to turn it over to the audience. So let me start—I’ll start here. And maybe let’s take, like, two or three questions at a time, actually. And—yes.

Q: OK. Sorry. Andrew Borene with Flashpoint. CFR member, longtime.

I just wanted to mention, you had a great conversation about the persistent presence and threats of major APTs like China, Russia, Iran, and what that means for critical infrastructure and public-private defense. A thing that is a lot on our mind at Flashpoint, on my mind day to day, talking about international leaders, is the extortion ecosystem: the business interruptions being caused at hospitals, health care, construction firms, et cetera from extortion, and things moving beyond just lock-the-data, unlock-the-data into actual DDoS, plus threatening families and taking the cyber into the physical in another direction. I’m just wondering if it would be possible to hear from each of the three of you on how big that impact is, when we saw LockBit take down $500 million in payments last year, and they might reinvest some of that in future capabilities?

DUFFY: OK, great.

Q: Thanks. Alan Raul, Sidley Austin and Harvard Law School.

And my question, actually, follows on nicely from yours, which is on the issue of detection—detection of persistent threats. You talked about secure by design to improve the state of security in advance, and resiliency to help recover after you’ve had an incident, but what about detecting the persistent threats? On critical infrastructure, I think you’ve all been referring, perhaps without naming it, to Volt Typhoon, a Chinese effort to place malicious software lurking in the background in critical infrastructure and internet devices. Can we do a better job of detecting these threats? Perhaps AI will help do that. And is the role of the government being underplayed at the present time on that front, the detection front, or is that primarily a private-sector issue, in your opinion?

DUFFY: Fantastic. Thank you.

And let’s go over here, and then we’ll—and then we’ll take these three questions, turn them over to you all, and then we’ll do another batch, including anyone who is online.

Q: Good morning. Jonathan Cedarbaum, GW Law School.

I wanted to pick up on a point that Rob Lee made about the preferability of outcome-based regulations. My question has two parts. First, you mentioned FERC as a possible good example, some of the standards they used. If you could say a little bit more about that example, that would be great.

But then flipping to the other side, what do you think are the obstacles to—that are preventing more agencies from following the outcome-based approach? Having worked on a lot of cybersecurity regulations in the government, I will suggest one—(laughs)—reason, which is I think regulators find it more difficult in the first place to define outcomes, but second and more important how to audit regulated parties when it comes to outcomes. So if you have ideas about how to overcome those obstacles or others, that would be great.

LEE: Do you want rapid fire from—on all three?

DUFFY: Yeah. So just to—just to bring it all together for folks, first of all, when you hear the phrase “extortion ecosystems,” what does that ecosystem actually look like for you all? I’d be interested in you all sort of defining that term. How are we thinking specifically around the persistent threats there? Because it’s the year of our lord 2024, we’ll have to discuss whether AI increases offense or defense capacities. And then how do we think specifically about building policy and outcome-based policy around those?

So, Rob? (Laughs.)

LEE: OK. Extortion ecosystem, yeah, we’re seeing—I mean, you’re seeing the same thing, right? Nonstate actors and state actors influencing each other, sort of moonlighting, where somebody is in the government using infrastructure during the day and at night they use the same infrastructure and capabilities to target people, shared resources back and forth. Yeah, we’ll continue to see that grow.

There was somebody that compromised a single, like, email at Dragos and tried to make a big thing about it, and then emailed us saying, hey, we won’t do public if you pay us. I was like, yeah, go public, it’s fine; like, let’s share. And it got intense enough where they started calling my—the teacher at my kid’s school to threaten violence, trying to SWAT my house to try and get me killed, and a bunch of other things. I mean, they’re like, yeah, screw you, dude. And it was—law enforcement was—I love law enforcement, but it was almost completely useless during the whole situation. Like, I called ahead to the police department saying, hey, if you get called swatting—or, like, that you’re supposed to, like, SWAT my house—(inaudible)—active shooter, you’re coming in guns blazing, like, what can I do to, like, help you know that that’s not true ahead of time? And they’re like, nothing; we have to take it serious. I was like, OK, good. Good to know.

In terms of detection, when you look at infrastructure, that’s exactly an issue that I focus on a lot—(inaudible)—focuses a lot. You probably have about 95 percent of the budgets for all companies, small and large, dedicated towards enterprise IT, so about 5 percent is going to where you generate revenue and where you impact society. In that 5 percent, all the standards and regulations coming out from government are on prevention—patch, passwords, antivirus, encryption. It’s all prevention. And we looked at NIST Cybersecurity Framework and 6403 and a bunch of these standards. Literally, about 95 percent of them are prevent, which means across identify, detect, respond, recover companies are getting told to spend less than 5 percent. So I think you—we deployed our tech in a company that was actually one of the Volt Typhoon targets and found that they had been compromised for 300-plus days.

So I don’t think it’s a topic of what we need to do next in terms of AI or whatever; it’s just do what we know to do, and we can be sort of good about it. On AI, I think AI has a lot of interesting use cases. Massively important to the digital world in general. In cybersecurity, it’s incredibly hyped up, and a lot of companies are doing things that are not AI and just claiming it’s AI because you get a market-multiple bump—a bump in the market, that is. AI is really good when you have training data. Adversaries doing things is an inject into the training data. So the idea that you’re going to use AI to detect threats, I think, is probably ridiculous. But using AI to help automate workflows, to help make the defenders more proactive? Absolutely. I hear a lot about AI-based attacks. I’m really looking forward to the first person that shows me one, and then we can talk about it more. Social engineering and fake calls and things like that, absolutely. But AI targeting critical infrastructure and doing attacks, show me one and we’ll look at it. Until then, I don’t think it’s worth the conversation.

On the topic of regulation, it’s exactly as you answered. We are really good at creating regulation for auditors, not for the audited. It’s very hard to audit outcomes. And so that is probably the main challenge.

On the what makes FERC a good example, FERC and NERC have this model, just to relate it for everybody—

DUFFY: What do FERC and NERC stand for?

LEE: Just to relate it for everybody, FERC is the federal energy regulator for the United States bulk electric system. So if you’re, like, a big transmission substation or a big generator, you’re under FERC.

FERC then also sort of delegated its authorities to an organization called NERC, the North American Electric Reliability Corporation. It’s a private-sector organization with regulatory authorities, one of the very few in the country. And how it works is FERC will come out and say: here’s why we’re concerned, here’s what we’d like you to accomplish, here are examples of what we think the how looks like, but we’re not going to hold you to it—NERC, what do you think? And NERC pulls together members from twelve or fifteen of the power companies and says, well, we think your how is a little bit off; it wouldn’t work that way. But we understand the why, and we’d tailor it this way, and as a result this is what we think right looks like. And it goes back and forth, and FERC says that meets my intent or it doesn’t; it just cycles and cycles, probably a two- to three-year process. And then when they say, OK, this is it, then there are the economics, because now the utilities are able to pass that cost into the rate. Your electric bill goes up by a penny, but they’re able to go do the cybersecurity effort they need.

So it’s: here’s the why, here’s the what; work with industry on the how. Yeah, the auditing piece of it is a little bit more difficult, so we’ve got to be careful there, but it’s also backed by the economics of it, and everybody knows why they’re doing this. And NERC’s regulations, as an example, have very meaningfully up-leveled the electric system in the United States as a result. That as a model is much better than TSA saying, hey, pipeline community, you have twenty-four hours to give us feedback, and then we’re going to implement all these IT security controls across your pipeline, which happened a couple years ago.

DUFFY: I’m really struggling to maintain focus because I just keep thinking about what a Bluey episode would look like—(laughter)—featuring NERC and FERC and their very heartfelt conversation.

Phil, over to you.

VENABLES: Yeah. So I’ll pick out just a couple of things just in the interest of time.

So on the—on the detect piece, absolutely, I think a lot of organizations are focusing more and more on detection. The one angle I would call out as a positive is the public-private partnership. So there’s a lot of work across the industry, private sector to private sector, sharing information about threats, which primes everybody to detect. There’s also an increased amount of collaboration, whether it’s with the DHS Joint Cyber Defense Collaborative or the NSA Cybersecurity Collaboration Center. And, more important, there’s a lot of stuff going on electronically rather than just people getting in a room and saying, well, did you see what I saw. And so we’re becoming much more industrialized in our collective detection, which is imposing costs on attackers. By no means anywhere near where we want it to be, but it’s way better than it used to be, and so I think there are glimmers of hope there.

On AI, I think that ultimately delivers a decisive defender’s advantage. There’s a little bit of a risk of talking our own game here, because we’re a massive AI provider and we have a lot of the data that’s training security-specific large language models, but we’re seeing early indications that it is going to deliver an advantage when used in the right way.

The thing on the outcomes regulation: I don’t disagree with the notion of outcomes-focused regulation, but I think we also have to move away from lagging indicators of performance, like breaches and malware events, and more toward leading indicators of performance, like how readily we can reproduce our infrastructure, reproduce software, and manage all those things that, incidentally, deliver massive adjacent commercial benefits, not just security and resilience. Again, in the PCAST report we called out that we need to radically shift from lagging indicators of performance measurement to leading indicators of performance measurement.

DUFFY: And Tarah.

WHEELER: When I think about the extortion ecosystem, I think about the Noble Christmas Tree Farm, a farm that experienced a ransomware attack on its payment systems and wasn’t actually sure how you pay a ransom in bitcoin. When I think about the extortion ecosystem, I think about a logistics company that is a client of one of our clients, which experienced $300 million—or, sorry, $300,000—in business email compromise. Agriculture companies, Christmas tree farms—the extortion ecosystem is our farmers being farmed for their money.

And when I think about the way to prevent that, I do think about detection in advance of the situation, the posture of each of these organizations. And the challenge that I keep coming up with is that the current ways that we think about compliance and regulatory structures don’t actually do the kind of detection that you’re asking about. What they ask are questions that teach to the test, right? This is—this is a compliance framework, and I have the entire structure of most of the U.S. cybersecurity compliance and regulatory framework set up, cross-walked, and mapped in my head right now.

DUFFY: This is why she drinks.

WHEELER: It’s why—yeah. Well, it’s why I drink irregularly and thoroughly when it happens. (Laughter.) And the challenge we have is that on any of those frameworks you never find a question like: Is your home email password different from your work email password? That’s not a question that appears on any one of those frameworks. We ask that question of people. A question we often ask people, just the receptionist, is: Do you feel safe in your workplace? It’s not a question that appears on any of the regulatory questionnaires, and yet questions about how people are behaving in terms of cybersecurity often elicit by far the most valuable information about the actual cybersecurity posture of any organization.

And I’ve spent a lot of time kind of on the bleeding edge of information security. I’ve done stuff here and there; let’s not talk about that right now. And yet, I found myself caring more and more about this baseline of what people need to do to keep themselves safe around the world.

When I think about outcomes-based auditing processes, I am struck by the fact that the tools we currently have are checklists, and there’s no match between the answers one gives and whether or not you actually got a better security result at the end of the day. I don’t know if I’m the only person in this room who’s actually a certified information security auditor, but I’m also an auditor. And so I keep this information in my head, and I keep seeing all the ways we could make this better, and I keep seeing us driving toward whether or not we can make a new shiny tool with AI to go find this information out. And the answer is no, because the thing we have to do, for detection purposes, is start asking the questions that involve actually keeping people safe, and eliciting information from the most sensitive orchestration and automation and telemetry we possess, which is the people who work at these organizations and know the posture.

DUFFY: It’s a—you know, there’s such a long history of human-centered design in technology, and I wonder with regulation to what degree we think of auditee-centered design as opposed to auditor-centered design.

And one—do you all think that it would be fair to say that at this moment the, you know, extortion ecosystem, if you want to call it that, is essentially a cost of doing business in the United States?

WHEELER: I have an opinion on that, and it’s going to be short and sweet. For enterprises, it’s a cost of doing business. For small businesses, it is existential: They pay or they go out of business, and the thirty jobs they created are gone with them.

DUFFY: And what I think is so interesting about that is that as a country we have worked so hard to develop a business ecosystem in which extortion is not a defining factor of how we grow and build businesses in this country, and it is a defining factor of how we define the political and economic stability of so many other countries around the world. And yet, in this particular area I think we’ve really lagged behind in ensuring that small businesses in particular, and medium—and medium businesses, are not having to grow against that particular business schema in a way that we would not allow in any other form, because policymakers are familiar with the other areas much more than they are familiar, I would say, with the extortion ecosystem.

LEE: Could I—could I add on that?

DUFFY: Rob, yeah. And then we’re almost to more questions, I promise.

LEE: Just to be purposely sort of bombastic, you have to include vendors in that ecosystem. There are a lot of vendors out there whose security features cost a lot more, and so we have very, very insecure products still going out into infrastructure because the more-secure version costs so much more. And I don’t know how that’s not a form of extortion.

DUFFY: Mmm hmm, totally.

VENABLES: Yeah. And in fact, you know, this is one of the pleasing things that many of us signed up for with the DHS pledge. For many organizations that signed it, it was easy to sign because they’re already doing it. For some of the organizations that signed it, they’ve got work to do. And I think the organizations that have the most challenges are the ones that Rob calls out, where you’re selling a bunch of products and then you have an adjacent business selling a bunch of security products to secure the products you’ve already just sold somebody—which is, you know, questionable.

DUFFY: Yeah.

OK, we have a question online?

OPERATOR: We’ll take our next question from Dana LaFon.

DUFFY: Oh, hi, Dana.

WHEELER: Hi, Dana.

Q: Hi. Hi, Tarah. Hi, everyone on the panel, Kat. Thanks for doing this. This is a great panel. My name is Dana LaFon. I am the Council on Foreign Relations national intelligence fellow this year, and I also—my day job, I guess, is I work for NSA as their lead cyber psychologist.

And I have a non-cyber-psychology question for—(laughs)—for the panel. Currently, we have a massive machine that seems to manage government contracts and contracting which seems to be dominated by the largest prime government contractors. Does this system get in the way of product delivery, particularly innovative cybersecurity products and services? And—because there’s always an and—what would need to change, in your opinion, in this contracting ecosystem as it is today?

DUFFY: I love it, Dana.

OK. And do we have any more questions in the room? I’ll take maybe two, here and there. (Inaudible)—the back.

Q: Great. So then John Ackerly. Oh, hi. (Comes on mic.) So then John Ackerly, co-founder and CEO of Virtru Corporation.

We have an election coming up. Should we feel good? This is for Rob. Are our systems ready to go? And I thought Phil made a great comment about a certain company in Redmond. But anyway, we—

VENABLES: Wasn’t necessarily—(inaudible). (Laughter.)

Q: No, but really focused on the—on the election question, should we feel confident?

DUFFY: And back here.

Q: Joe Hill, recently retired from BlackRock.

Are any of our allies—the Germans, French, Israelis, South Koreans—doing it better?

DUFFY: All right. So we have the government contracting machine, we have elections, and we have whether our allies are doing better. And I would throw into that mix for your consideration: Have we done too much governing here with sticks and not enough with carrots?

And so, with that, Tarah, maybe I’ll start with you because I went the reverse direction before.

WHEELER: There’s a real challenge there when it comes to providing enough qualified, competent, capable supply into the supply chain for the defense industrial base. I love Dana’s question, and it goes directly to whether or not we are governing our way out of innovation and competition at the level of small to midsized businesses.

Is it possible to get into the supply chain for the U.S. military? It’s a real challenge. One of the solutions is to pay attention to the GAO’s recent reports—I think from 2019, updated in 2021-2022—which say that many of the current contractors for the DIB get waivers for their cybersecurity certification status because they had previously been compliant. And so even if they don’t remain in compliance, they often will receive a waiver. I think the rate in 2017-2019 was that 60 percent of them were noncompliant but getting waivers anyway—and this is the whole DIB, not just the ones applying for it. And it’s still high now, around 40 percent. So removing that and making it an even playing field for all organizations based on their current cybersecurity posture would make a substantial difference, I think, in adding to the capacity and competition at the small to midsized level in the supply chain for the DIB.

And then when it comes to the last part of the question, there’s a deep responsibility, I think, that we—that we all have to pay attention to what’s happening. Phil and I had a great conversation just before this. There’s a lot of people that are shocked, stunned, and dismayed to learn that cybersecurity is on their plate of responsibilities. And it’s time we start to figure out where the buck stops in every organization and who takes responsibility. I am thinking right at the moment of the fact that in SolarWinds the CISO was charged and the company was charged, but somehow the boss of the CISO and the CEO weren’t charged at all for any cybersecurity issues with SolarWinds. Why does the buck stop there if the budget didn’t and the responsibility didn’t and the legal responsibility didn’t stop there? I’d like to know where that responsibility lies and who’s willing to step up.

DUFFY: Phil.

VENABLES: So I’ll pick out a subset of the questions.

On the question of kind of, you know, other nations, I think, you know, to your point earlier about the federalist approach in the U.S., other nations have an easier time of setting standards. With our complicated structure of federal, state, local, tribal, and territorial governments, and the intermix of public and private across all of that, it’s a lot harder here, I think. When you look at some other countries, they have a much easier time.

Interestingly, I don’t think they always have as much success as you would expect with that easier time, largely because a lot of countries around the world intermix a set of concerns around privacy and data localization that sometimes work against resilience in inadvertent ways. I think in the U.S., at least, we’ve kept—you know, in some ways there are some good silos between these disciplines, while we still have to cross silos a little bit. But when you intermix geopolitics with security, it tends not to get to the right outcome.

I think on the question you were getting at with incentives, I think we don’t do enough to educate business leaders and government leaders. While it’s OK to keep saying security and resilience is, you know, an obligation—it’s loss avoidance, it’s for rules and regulation, it’s your duty as a good leader—I think that only gets you so far. I think we need to educate people more that doing security and resilience well actually does confer some real business benefits.

And again, I touched on this before with moving from lagging to leading indicators. I’ll just very, very quickly pick one example. If an organization can manage all of its software, and reproducibly build it, and deploy it in an effective way, that gives you significant security and resiliency benefits. But the main reason for doing it is that it actually gives you increased ability to deliver product, meet customer demands, reduce cost, increase efficiency, and be a better business. And if you do it in that approach, then you get the security and resilience as part of doing that. I think we need to reframe every single security discussion around why you would do it for the adjacent business benefits. Do the thing that creates the security, not just the security. And I think that will get us some way toward resolving this incentives problem that we seem to have.
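
A minimal sketch of the reproducible-build check Venables describes, assuming a make-based toolchain; the build command and artifact name are placeholders, not any specific project’s layout. If two independent builds of the same source yield byte-identical artifacts, the supply-chain integrity signal and the delivery discipline arrive together.

    import hashlib
    import subprocess

    def sha256_of(path: str) -> str:
        """Hash a build artifact in chunks so large binaries are fine."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def build(out_dir: str) -> str:
        """Run the project's build into an isolated output directory.
        'make' and 'app.bin' are stand-ins for whatever toolchain applies."""
        subprocess.run(["make", "clean", "all", f"OUT={out_dir}"], check=True)
        return f"{out_dir}/app.bin"

    # Build twice from identical source; a reproducible build yields identical bits.
    if sha256_of(build("build-a")) == sha256_of(build("build-b")):
        print("reproducible: artifacts match byte for byte")
    else:
        print("not reproducible: look for embedded timestamps or toolchain drift")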

DUFFY: Fantastic.

And Rob.

LEE: Yeah. From an incentives perspective—I’m going to work the questions backward. So from an incentives perspective, I do think it’s appropriate for there to be a base level of regulatory oversight on infrastructure providers. I should not be able to accept risk as a company that is borne by the community members outside my fence line, OK? Now, defining that’s the tricky part, but I do think there needs to be a baseline of regulation, sort of a stick, and that’s missing in some areas and some industries.

But then absolutely much more on the carrot of, if you are doing this, you can actually benefit. And the economics—I love the “be a better business” point, but, like, people like money. There are power companies that deploy security products and get a 6 percent rider on the capital return. They make money deploying security products that make their infrastructure and their community more safe and secure. Guess which ones are leading the discussions? It’s not hard to pick out.
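
The arithmetic behind that rider is simple enough to sketch; the numbers below are hypothetical, purely to illustrate the incentive Lee describes.

    # Hypothetical figures illustrating the rate-rider incentive described above.
    security_capex = 10_000_000   # approved security investment, in dollars
    rider_rate = 0.06             # the 6 percent rider on capital return
    annual_return = security_capex * rider_rate
    print(f"annual regulated return on the security spend: ${annual_return:,.0f}")
    # prints: annual regulated return on the security spend: $600,000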

On the topic of allies, yeah, I definitely agree some have it easier in terms of, like, size and scope as well, and they don’t always achieve the same outcomes. But there are some doing very well in different areas.

Australia, as an example, is doing very well with the SOCI regulation in terms of having conversations that go to the board level. So the board must sign off on the risk management plan for the operations infrastructure of those countries—or, companies.

You look at Saudi Arabia. They’ve gone very, like, hey, we don’t have the time for this. We’ve seen America try to ask. We’re going to go to the companies and tell them they’re doing this, and we’ll just provide them the direct resources, but they are going to make these investments. And if you don’t, we’ll come in and do it for you. Maybe a little draconian, but it’s having an effect.

And then probably one of my favorite ones is in Singapore. They launched a regulatory regime for their critical infrastructure asset owners and operators. They’re able to identify all of them. Like, there’s a smaller set in Singapore. But they bring them all together; they say here’s the regulation, you have a voice in it, let’s work together on it. It is outcomes-focused. Now what do you need? Let’s go. And government will lead by example, which, honestly, sometimes I think is what hurts us in the U.S. You’ll get a federal agency saying you should do this, you should report your incidents faster, you should do that, and then the same crap happens in the government and you look like hypocrites. So at least Singapore is sort of leading by example on that topic.

And then in terms of election infrastructure, so the systems themselves, sure, you could find a bunch of vulnerabilities. And as a cybersecurity person, I’d love to talk about the systems. But the problem with the elections has nothing to do with the technology and the systems at this point. We have eroded every ounce of confidence the American voter has in, like, anything they’re being told. You can watch events on TV and somehow it’s political to talk about the fact that there was an insurrection in this country. So I think we’re screwed, but I don’t think it has to do with cyberattacks. (Laughter.) So I think there’s basic education and critical thinking that needs to be applied.

And there’s politicians—and this is the piece I hate most. If you have an opinion, that’s fine. If it disagrees with mine, I’m fine with that. But if you’re positioning a position or an opinion that’s not actually yours because you think it’s good for ratings or whatever, I think that is the death to democracy. And there’s a lot of senators I hang out with, Republican and Democrat, and I go back in the bar, have a cigar, and everyone’s on the same page. And they get on a screen, it’s WWE. You know, I mean, I think everybody but Ted Cruz is pretty nice in Congress, so, like, let’s—(laughter)—let’s figure out how to go show that to the American public.

My last committee hearing was Garbarino and Swalwell, Republican and Democrat, working together on how do we take care of water infrastructure across the country. That doesn’t get the airtime, so not incentivized to do that, but it’s bipartisan and it works. And so I just—I would like to see a lot more bipartisanship and start teaching people to, like, have basic elements of respect. Like, I can’t have my 6-year-old watch any of the election debates. Isn’t that silly? So I think there’s a huge issue with the election that’s upcoming where any adversary can do anything and the other party that benefits will grab onto it and amplify it instead of saying, hey, this is a strategic adversary to the U.S. trying to hurt us. I think that’s the problem, not cyber.

DUFFY: And on that note, on allies, I want to say everyone should stick around for our next session, which is going to be great, “Transatlantic Cooperation: U.S.-EU Cyber Trust Mark.” We’re going to take a fifteen-minute break, but I think my takeaway from this whole panel is two things.

One, I am so, so proud and grateful to our panelists because I used mom voice on them—(laughter)—before this panel about acronyms and they came through in spades. There were really so few acronyms, and I am just so proud of you all. So thank you for that. (Laughter.) And I didn’t have to do, like, giraffe jargon once—or, jargon giraffe.

And second, I think my takeaway from this is how deeply human the problems that we are dealing with are, how deeply, fundamentally societal they are. They are the types of things we have governed for centuries, and it’s a new modality and there are new complexities to it. But meeting people where they are, giving small businesses and small-business owners the dignity and the respect that they deserve, leading by example, asking people to do as we do and not just as we say—these are all really, I think, critical components of the leadership that we need at this time. And the politicization of what is a true fundamental national security and national unity question for us, the cheapening of that for political points and hype cycles, is not in anyone’s interest. And so, as a community, whatever we could be doing to, I think, reset that conversation and demand more of those who are in charge of decision-making at this point is also, for me, something that I really take away from this panel: that we should demand and expect more of our leaders in terms of the way that they talk about this and the seriousness with which they hold it in private and, importantly, in public.

And so with that I want to thank our amazing panelists. Phil, I want to congratulate you on the—on PCAST being out. I know that that report was a big lift.

VENABLES: Yeah.

DUFFY: And so it’s a really exciting thing to be out there in the world. I encourage everyone to read it.

And, yes, go enjoy coffee, get some fresh air, and hopefully we’ll see you back at 10:45. Thank you, everyone. (Applause.)

(END)

This is an uncorrected transcript.

Virtual Session II: Transatlantic Cooperation: U.S.-EU Cyber Trust Mark

EDELMAN: Good morning. Good morning and welcome back to the second session of the Council on Foreign Relations Cybersecurity Symposium on protecting the U.S. and EU from cyber threats.

I’m David Edelman from MIT, and I’ll be your presenter for this on-the-record session—on the record, as we all know—on transatlantic cooperation between the U.S. and EU on the Cyber Trust Mark, which we’re going to get into in just a second.

We could not be more fortunate than to have two very distinguished guests at the absolute forefront of these issues with us today. Joining us from Brussels, Roberto Viola is the director-general for communications networks, content and technology at the European Commission; and joining us from, I think, about 600 feet east of here is Anne Neuberger—whom I had the pleasure of serving alongside during part of her long and distinguished U.S. government career—currently serving as deputy assistant to the president and deputy national security advisor for cyber and emerging technology.

So we’re going to discuss for about thirty minutes here among us, and I would invite members to think of their questions now because we’ll try to reserve a full thirty minutes for questions thereafter.

Anne, Roberto, thank you so much for joining us today. Now the title of our discussion is U.S. and EU Cyber Trust Mark, and we’ll get to the contours of the cooperation piece in a minute. For those who were able to join previously, we just had a panel about some of the threats, particularly to IoT and the risks here.

This is also coming in the context of a broader set of regulatory moves, of course, in the EU and the U.S., across telecommunications and technology. When it comes to something like the U.S.-EU—the U.S. Cyber Trust Mark, you know, I’m reminded of other areas. We just had broadband nutrition labels, for instance, rolled out by the FCC; that’s, of course, thinking about price and service transparency, but I think something altogether different is envisaged here.

So Anne, if I could start with you, could you set the scene here for us a little bit and talk about the U.S. Cyber Trust Mark—why? Why do we need something like this? And what’s really anticipated with trying to have a government labeling on the security of these kinds of devices?

NEUBERGER: Absolutely. So first, great to be here with you, David, and always a pleasure to be here with my friend and colleague, Roberto. It’s always nice. We share a technical background, so sometimes we particularly enjoy geeking out on various—

VIOLA: Yes.

NEUBERGER: —technology policy topics we get to work on together. Great to be here with all of you.

So when we think about, you know, Internet of Things devices, connected devices, there are billions of connected devices. Each of us has many in our lives, whether we live in a home with a smart, connected home security system, whether it’s a fitness monitor, whether it’s a baby monitor. All of those devices are usually interconnected, and they’re largely not secure.

So when we think about the threat, think about hackers turning off the security systems in a set of homes, either to enable a physical attack or as part of pressuring people for a ransom payment; or, you know, a large number of parents in a particular city suddenly hearing a message coming out of their baby monitors; or somebody taking pictures of a child sleeping and posting those on the dark web. So a lot of very personal threats to the devices in our homes, our offices, and of course, our schools as well.

So the word you used was perfect when you said nutrition labels. We know that there are three parts of the system. There are consumers, or people at work, who are shopping for devices, whether online or at home, and we have no way of knowing whether a given device is secure. We have companies who say, you know what, I can bake in some security, but how do I differentiate my product so that a consumer knows it may have cost me a little more to do that, and knows that the device is secure? And then you have the government role. The government does bring some trust. We set standards in some cases. So how do you bring those three together?

And in the U.S., the Cyber Trust Mark is the way we’re bringing those three together where any Internet of Things devices that meet the NIST standard for cybersecurity and are tested under an FCC program can bear the Cyber Trust Mark label. We’ve registered and trademarked the label with—the chief of the U.S. Patent and Trademark Office called and said, Anne, you’ve got to make sure to do that. You don’t want people putting the label on if it’s not secure. The FCC had a five-zero vote on the program, so broad bipartisan support. It’s going through some of the legal processes, and our goal is to have labeled devices in stores and online by the end of the year.

A large swath of companies have already signed up to submit their products to be tested, so it’s really an exciting time to—and our vision, fundamentally together, using the different tools available to us in the EU and the U.S., which Roberto will talk about, is to change the dynamic. Right now, you are shopping online. You have no idea are you bringing a safe device into your home or into your office. Let’s change that to where, when you are shopping, it has an underwriter label. It has an ENERGY STAR label. It has a nutrition label, and then you know right away, should I buy this device or should I set it aside because I’m bringing too much risk into my life.

EDELMAN: So, Roberto, the EU, as Anne mentioned, has a separate set of tools to do this—obviously, most recently, the Cyber Resilience Act, with, you know, political agreement on it. For those who aren’t following every single word of the Cyber Resilience Act, help us understand the domestic context here in the EU for this set of affairs, and then how we should think about that as the baseline for what will ultimately be—and we’ll talk about it in a second—deeper cooperation on this kind of labeling scheme.

VIOLA: Thank you, David. First of all, many thanks to the Council on Foreign Relations—and to you for inviting me, and inviting me with Anne. With Anne we really share a sincere and intense cooperation on the whole tech front, and I really thank Anne for being a friend of Europe.

To explain why we got to the point of proposing legislation on connected devices, which was eventually approved, I’ll give first a couple of figures and then two anecdotes. The first couple of figures: when interviewed, 70 percent of Europeans said, we would like to buy devices which are secure—also from the cybersecurity point of view. And here we are not talking about super sophisticated devices, eh? We are talking baby monitors or any other connected devices—fridges or whatever. Then, according to a survey by Euroconsumers, two-thirds of devices sold in Europe have issues, half of them very serious cybersecurity issues.

And now come the two anecdotes. The first one: believe it or not, the Spanish, Italian, and French police caught a network of criminals that offered the Netflix of cameras—a kind of subscription plan for people to watch private homes, and also to follow people in hotels and other places, with different prices according to the type of service.

I find it shocking, frankly—really shocking. And the source of the problem was cheap cameras sold with zero security features. And admittedly, I mean, we have many of these devices around us in our homes, and this, frankly, is not acceptable.

The other case is another criminal organization that specialized in getting information about the connected keys of cars and stealing the cars at gas stations. So people were coming in, having a little pause, and the criminals, with radio devices, were able to capture the essentials of the key signal and then steal the car. And I have countless other examples.

So this is serious. This touches the life of every consumer. That’s why, although it’s a complicated piece of legislation, it was not that difficult to get political support for this law, which was voted for by all the political groups. So the law was voted in March, and it will enter fully into force three years from now. Why such a long lag? Because, I mean—(inaudible)—need to adopt, we need to stabilize the standards and enhance the cooperation—the transatlantic cooperation. That’s the work we are trying to do: to have the set of common standards that can be used for certification.

So what will be the end result of this law for every connected device? One, there will be the CE marking—the marking that tells you that you don’t get burned or get an electric shock touching the device. It will also mean that cybersecurity certification has been passed for this particular device. So we need to make sure that we have the standards, we have the labs, and everything in place in the next three years.

And the law applies to connected devices and to stand-alone software. The law does not apply to open-source software except if the software is part of a product.

EDELMAN: So now we’ve heard labels by the end of the year, hopefully; law in force in the next three years with companies adapting. These are pretty short timelines—maybe not in tech land but certainly in government land.

You know, we think about this question of the market because fundamentally these are private-sector-produced devices, and one of the levers we have long looked to in the U.S. government and elsewhere, of course, is the power of procurement. It seems like there have been a few statements implicit in what we’ve heard today. One is the potential of the market maybe not keeping up with what is perceived to be the national security imperative. Anne, you said that eloquently previously, as well.

Tell us about, if you would, how you are thinking about this interface between the power of procurement and the federal government’s—as the largest IT buyer in the world—ability to improve what is needed while at the same time recognizing that, as we all said, some of us are buying baby monitors on a fairly regular basis and don’t have that level of security. What is that interface, and is one leading the other, or is one just not going to get us there right now?

Anne?

NEUBERGER: That’s an interesting question because what you see in the EU-U.S. partnership that we’ve built here on this topic is we share a common vision, right? We want people shopping—whether for their homes, their schools, their offices—to be able to buy secure devices. We want to drive down the level of threat that Roberto described so eloquently by having those devices be defensible.

But they are not today. But we have different tools in our two systems to get to that end goal. So the goal of the roadmap we’ll talk about later is agreeing on the end goal, and then how do we bring our tools together for maximum impact to get there as fast as possible.

So in the U.S. we tend to—in that tension—I think in a positive tension between innovation and security, we tend to lean on the innovation side, so regulation of tech has often taken a long time, if it happened at all, and I think there are sobering lessons from that in a number of cases. What we’ve started to do in the U.S. is, first, establishing the program. It’s a voluntary program. We believe the incentives are in there enough because we hear consumers want secure devices. We hear companies say, we’ll do it but we want to be able to differentiate ourselves. And because government is one of the largest tech buyers in the world, we can prime the pump by, for example, saying the U.S. government is looking to buy labeled connected devices by 18 months from now, right?

The first time we did that was in the president’s executive order of May 2021, where President Biden released his first cybersecurity executive order. It applied the lessons learned from SolarWinds, where we learned that if software is not built with a secure software framework—how you write code, how you compile code—there are multiple mechanisms for sophisticated attacks in that development timeline, and indeed, the Russian SVR did a pretty neat hack that was very hard to find.

So in the president’s first executive order, he tasked the National Institute of Standards and Technology to define a software development framework, and then said the U.S. government will begin using it by X timeframe. And companies will have to attest to us how they are doing—so, give us examples and proof of how they are doing that. Those attestation requirements just went live, and that’s really where the proof is in the pudding, right—the technical requirements using the power of government procurement, and then a way to show that one is really doing it.

So we’re going to look to do the same in the Cyber Trust Mark program. It is a voluntary program for everyone, but from a government perspective, we want to use the power of money and then give it enough of a time frame to prime the pump. We have to give enough time for devices to go through certification, and then make sure that we’re waiting at the end as a large market.

EDELMAN: So, Anne, you said innovation, and of course, one of the things we think of as really driving a lot of the innovative engine is the ability—particularly in software, but hardware as well—for smaller companies to play a major role in this space, to push innovation forward. And I’m wondering if I could ask you to speak briefly to what extent—you know, we heard about this earlier this morning—an increasing list of compliance requirements, an increasing list of attestations, creates burden on companies, and sometimes it’s the large companies that have the muscle memory and the expertise for going through those government attestations, those government processes.

What mechanisms are being put in place, if any, right now to think about how the smaller players—that might have the more secure device but not the thousand-person compliance department—could actually play a meaningful role in advancing security?

Roberto, I know you’ve probably been thinking about this in the context of the EU law. Could you tell us, and Anne, as well?

VIOLA: So a quick comment on public procurement—and on the other side of the Atlantic it’s the same thing. Public procurement is an important driver for our economy. In Europe it’s 60 percent—16 percent of the GDP—European GDP—so the public procurement market in Europe is probably number one around the world. And our two markets are the largest ones in public procurement.

Of course, when the CRA becomes obligatory, you cannot procure something which does not have the CE marking, but that will be three years from now. In the meantime, we are convincing every public procurement entity that price is not the only thing that matters—I mean, security is very important. But we have three years of a kind of no-man’s-land where we will insist that public procurement must also follow the security paradigm.

We have started, for instance, with our supercomputers, where it’s not only, I mean, the technical performance that counts; security also applies. We have been sued by a company; we won in court. We are doing the same now in telecom. We have said very clearly that we are not going to spend European money on unsecure hardware; nor will we contract with companies that have these kinds of suppliers.

So it’s a journey. It’s not, I mean, something we’ll fix in one day. But our intention is indeed to make sure that the public procurement market is very much aligned with the idea of security.

On the issue of—(inaudible)—a gigantic issue, of course, during the discussion of this legislation and also during the discussion of the AI Act—you know, there’s not much of an answer that I can give you, I mean, or reassurance that, yes, OK, it’s going to be simple. The reason is that this is security. So think about a small company that produces a pacemaker. I mean, if there is a problem, it costs the life of a person. So I cannot easily tell an SME, look, I mean, it’s fine; you can do things in an unsecure way because you are small. In security, unfortunately, everyone has to adhere to a level of standard which is, I mean, the maximum possible.

Now, what we are trying to do is sandboxing—(inaudible)—bringing these companies into a larger environment and taking them through the process in a way that they will be ready to comply. Also, the CRA has a number of waivers on the documentation and other things for smaller companies. But frankly—and that’s why I’m not giving the kind of political answer of, yes, of course we care, and we’ll simplify everything—frankly, this—(inaudible)—of reality (touch ?). This is security, so there is a limit to what you can do.

EDELMAN: And Anne, on the U.S. Cyber Trust Mark, how are we thinking about small businesses playing here?

NEUBERGER: I think the way Roberto said it was really perfect, right? The risk is significant. Everybody has to meet the standard, but we can be flexible on the paperwork.

EDELMAN: And is that risk indexed? I mean, after all, you raised the case of a pacemaker, which is, of course, you know, the evocative medical example. Baby monitors—important, looking at children in the house—probably a dramatic example. Other pieces in the OT space are not all connected to critical infrastructure.

Is there a way in this process to think about a risk index that is fit for the purpose of the device, or is it a single standard across the board, irrespective of how it will be put into practice, given the complexity of the systems?

NEUBERGER: A very good question, and certainly there’s a difference between Roberto’s system and ours in that, on the U.S. government side, we’re using procurement for government procurement only, right? So pretty much, if you are developing—and we use critical software, and we defined what that was—if you’re developing critical software for the U.S. government, you’ve got to meet the secure software framework in how you do it, right, because otherwise your software, as we’ve learned, can be used to hack, to gain entry, and then others can ride that as well.

But regarding the attestation, we had flexible ways on how companies can do that attestation, and frankly, they can use technology to do it, right? If you’ve built the software securely, you can build the artifacts in to then generate them to show that you’ve done so.
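
As a sketch of what building the artifacts in can look like in practice, a build step can emit a small provenance record alongside each binary; the schema below is illustrative only, not the official attestation form.

    import datetime
    import hashlib
    import json

    def write_provenance(artifact: str, source_commit: str, builder: str) -> str:
        """Emit a provenance record next to a built artifact.
        Every field name here is illustrative, not a mandated schema."""
        with open(artifact, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        record = {
            "artifact": artifact,
            "sha256": digest,                # fingerprint of exactly what shipped
            "source_commit": source_commit,  # exactly what was built
            "builder": builder,              # which pipeline built it
            "built_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        out_path = artifact + ".provenance.json"
        with open(out_path, "w") as f:
            json.dump(record, f, indent=2)
        return out_path

Because the pipeline generates the record itself, producing the evidence costs little once the build is secure, which is the point being made here.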

EDELMAN: Got it. So this has not been—oh, please, Roberto.

VIOLA: Just I want to mention that the CRA is a gradual approach. There’s a list of products classified as mission critical. There are components or products which can be updated that require—(inaudible)—attention. So the CRA is made in a way that not everything, I mean, carries risk in the same way.

EDELMAN: So you mentioned this is a journey. It’s a journey that the U.S. and Europe are both traveling, possibly to the same destination, which we’ll talk about in a moment. But in the meantime, this has obviously been done with a deep amount of bilateral collaboration across parties. This is not unprecedented, right—the idea that you might have mutual reciprocity or recognition. Obviously, we’ve seen it in other legal contexts, the U.S.-EU Privacy Shield being one in a data protection context.

Other countries, as I understand it—Singapore—have a mutual recognition agreement with Germany and possibly others as well. I know the political statement was made of the desire not to unify, but to create mutual recognition between these standards so that—and you can describe it a little bit more—a U.S. device that meets U.S. standards is able to be mutually recognized in Europe, and vice versa. This is new. It was announced in January—at least it was new to those of us on the outside.

Can you talk to us a little bit more about where that cooperation stands? I mean, it’s a little bit of building the plane as we fly it here, but obviously the goal is to have unification—if not of standards, then of mutual recognition.

Anne, tell us about where that process is and, Roberto, maybe where we go from here.

NEUBERGER: Absolutely. So it starts with some common principles. Every mom and dad should be able to know that when they are shopping for a baby monitor, it’s a secure one. You know, every office that’s buying a connected camera should be able to buy a secure one. So starting with that, that’s independent of what country it is around the world.

The second principle is, if we want to incentivize companies to do it, the more markets in which they can sell a product after getting it tested once, the stronger the incentive, right? And that’s the incentive both for the companies and for governments, because we want this to scale quickly. There are so many different kinds of devices, and we want everybody to be doing that.

So beginning with those common principles, where we are in the process is to say, in different countries we have different standards that are completed for different kinds of devices, so the work that’s happening is at the technical level, comparing notes. Where does the EU have a standard where the U.S. doesn’t? Is it good enough for the threat we have today—because some of these may be old standards that have to be updated? If so, can we just use that? Is there one that the U.S. has where Europe looks at it and says, you know, it looks pretty good, let’s just use it? Because the goal is speed and security at the same time. We can aim for perfect security, but it will take us, you know, three years to build the standards—or we can start and keep getting them updated. So that’s the practical work that we’re doing.

Countries also have different ways of approaching a specific risk. I’ll give you an example. In the Singaporean approach—Singapore was really a leader in this space—they’re a smaller market, right? They import a lot of devices; they don’t necessarily generate a lot of devices. So the question of which devices actually get tested to see if they indeed meet the security standard was one they grappled with. Do you accept a company essentially self-certifying and saying, yeah, my product meets the standard, or do you want an independent test? Singapore made rules about what they did.

In the U.S. program for internet-connected devices, we want all the products to be tested once. We believe that self-certification, you know, is not—is not adequate, and we believe we have a big enough market, and now, jointly, a transatlantic market to incentivize that. So that’s the way, you know, both—so to sum up, starting with the common principles, looking at what’s already in place that we can just leverage quickly, and then managing for different cultural decisions regarding how much, for example, things are tested, and how much one can use the power of market to, you know, raise the bar as much as possible.

EDELMAN: So, Roberto—one, what are the barriers to getting that done in the next couple of years, and two, where does it go from here? Is this going to begin with the U.S. and EU and expand more broadly to the world, to other allies and partners? Is this a big enough market, being a sizeable percentage of global GDP in and of itself? How do you think about those two?

VIOLA: Yes. First, one thing I want to say about the work we are doing: it’s not just a kind of wish that we are making; we signed an agreement in January, Anne and myself, to really commit to work together. And this is, I think, an important signal. This is serious; it’s not just the usual thing where, I mean, two friendly countries meet and say we want to work together. We are already past that step—there’s an operational agreement, and we want to work.

What are the obstacles? First, I think, as Anne said, we need to identify quickly a set of standards that we can use for certification. NIST is really doing a great job, (CNECT ?) also is working, and we will be very pragmatic. I mean, we will take the best—we will not, I mean, try to play a game—(laughs)—of, you know, 50 percent have to be American standards and 50 percent European. I’m very happy to have 100 percent American standards if they are good ones. So this is the first thing to do.

The next, I think, is that companies will have to play with us; otherwise, it does not work, because, I mean, I have no legal power for three years, and this is really a long time. And if we want this thing to see the light in a few months—and I really commend Anne and all the effort to have this trust label in a year, and this would be my dream on a voluntary basis in Europe, too—this means that companies will have to cooperate with us. And this is not a given, because there is still the feeling of, oh, what is the trick here? I mean, there’s no trick. We are trying to really be helpful, to create a transatlantic market here.

And then I come to, you know, the second question. If we succeed, I think this will frankly be the gold standard in the world. I mean, when you look at the smartphone, there is this efficiency marking, and these are trusted markings around the world. And that, I think, will be an asset for every company participating in the scheme, and for other jurisdictions to see the way we are doing things, because, for instance, the CRA and the U.S. initiative are not requiring companies to hand over their software to public authorities. And some jurisdictions do that, and that’s not OK.

These schemes do not ask companies to hand over company secrets or to grant open-source licensing to whoever is in their country—which, again, we see, and unfortunately we have to act in some cases at the WTO. So it’s very respectful of private initiative, of innovation. It’s about safety while respecting the initiative of companies. So I hope this really becomes the gold standard.

EDELMAN: In just a moment we’re going to turn to members for their questions, so start thinking about your questions. But before we finish this portion, I do want to ask about another technical context that is taking up a lot of time on this stage and others, and that is AI. And obviously there are some parallels here. We’ve seen major regulatory steps in the U.S. and the EU on AI: EU AI Act, U.S. executive order. We’ve seen the development of two huge markets that are increasingly growing, and real risks about some of the security areas, and what we don’t know in many areas about standardizing the safety and security there.

So I’m wondering if you could just speak a little bit to this comparative question. Do we think the AI market in the EU and the U.S., in five years, should look like that dream you just described of a single unified market for OT devices that have a certification, or not? Is there a fundamentally different set of baselines that we’re developing, even if the principles are shared?

So Anne, do you want to speak to that?

NEUBERGER: First I would say there’s a sobering lesson learned, right? The conversation we’re having—using Roberto’s example, you know, where people can just order essentially compromised cameras and watch people in their homes—that’s because, you know, we didn’t have cybersecurity standards as devices were taking off. So now we’re starting to, frankly, plug them in to billions of existing devices already deployed and trying to create a market, either through voluntary ways—the U.S. model—or through the regulatory approach, which will also take some time, right?

So for tech that becomes so core to every part of our lives, we’ve got to think about security at the beginning, because then it’s baked in, it’s less costly all around, and we can buy down a lot of the risks, right? This is something we both deal with every day, frankly, in the risk across water, power systems, and pipelines. Nationally—whether it’s China’s prepositioning in our grid, whether it’s Russian hacking as a kind of payback, which thankfully we have not seen a lot of following its invasion of Ukraine—the level of vulnerability across critical infrastructure, I’ll say this strongly, is the cost of some of our own decisions not to bake in that security earlier on.

So I think, as we look at artificial intelligence—you know, the way the EU AI Act has looked at different levels of risk and laid in, at a high level, some of the principles—it’s about actually doing the work now. Saying, OK—we were talking about that earlier—let’s take a particular sector of critical infrastructure where we know AI will bring real promise, whether it’s managing the load in a more efficient way on the energy side, whether it’s optimizing which devices get preventive maintenance because they’re getting more wear and tear than expected or they’re in an area where the weather has been tougher than expected. Doing the work in advance to say: How are models trained? What kind of data, and is there transparency? How much are they red-teamed before they are deployed? How much visibility and ongoing maintenance is there after they are deployed? Where do people make decisions versus automated decisions? That’s the detailed work that we need to do now in the AI space, and I think doing it together similarly, where we have common principles of the risks we want to buy down. How we get there may be different. I think there’s a lot that applies to that space.

EDELMAN: So Roberto, do you see a world where, five years from now, a U.S. AI company is able to pass U.S. safety certifications and immediately enter the EU market?

VIOLA: You offer me a dream—(laughter)—and I’m a dreamer, so that would be excellent.

Let me explain—I will be as quick as I can—why the AI Act and the CRA, and what we are discussing, as you rightly said, are very connected. The AI Act is product legislation. It’s not about the Armageddon, the robots conquering humanity, or all these kinds of things. It’s a very pragmatic piece of legislation—(inaudible).

During the pandemic we were in such need of technology to help that we proposed AI for the hospitals—for example, (CT scans ?)—and for cleaning robots. And we took whatever we could take from the markets. And the feedback from doctors was: OK, it can be on the market, it doesn’t create an electric shock, but the system really is no good.

So let’s take the example of the pacemaker again. If this thing has a CE marking—or, in the future, whatever mark you will have for AI in the United States—this means that it’s safe from electric shock, it’s safe from the cybersecurity point of view, and the algorithm—maybe it’s gen AI looking at a repeating pattern—has been tested by a lab. So the data set is known, the algorithm’s performance has been disclosed, and independent verification took place. So the thing is safe, in the meaning we have for safe products: from the AI perspective, from the cyber perspective, and from the electrical perspective. And my dream is indeed that this mark means—to the world—that “this is safe” means the same in the United States and in Europe.

EDELMAN: Excellent. Well, with that at this time I’d like to invite members to join our conversation with their questions. As a reminder, this is on the record. If you would kindly please state your name and affiliation before your question. And with that, we’ll start here in Washington with a show of hands. Yes, please.

Q: Antoine van Agtmael.

Anne, a question for you. As I was listening to this, you immediately recognize this all makes common sense. But then, when you think it’s true, we live in a world that—I mean, I grew up in a world that was globalizing. We’re now living in a world that is deglobalizing. There is more and more trade within the blocs than between the blocs. Could this have a really serious effect on future trade, and on, kind of, the sins of protectionism that various actors in the world have, when this is implemented? Because this is quite broad. It’s no longer this little garden that we’re protecting.

NEUBERGER: It’s a great question. The cybersecurity standards that we’re putting in place—you know, minimum cybersecurity standards, that the products have to be patched and maintained—any company can meet. Now, a great deal of data is collected by many of these devices. So, you know, we have a public rulemaking that will go out shortly regarding whether there should be devices from certain countries that don’t have adequate privacy controls where, as a result, even if, for example, data is encrypted and saved by the company, users may still not feel comfortable. But at the very least, transparency on what data is collected by a device—this would be game-changing, because that didn’t exist before. To use David’s nutrition label: for the first time there will literally, you know, be a label that tells you here’s everything about this device. Here’s how often it’s patched. Here’s when the company will no longer be maintaining this device—end of life, you’re on your own.

So this is what we have to be working through. And the way we’ve been designing this program is, you know, putting out questions for comment so that companies can weigh in, and private individuals can weigh in as well, because there are factors that we’re considering. We know the environment we’re in today is untenable: billions of unsecured devices, a great deal of threat from criminals and countries alike. We know we have a model that can work. And the question is, how much do we want to buy down risk? One of the issues we’re dealing with, for example, is that, quite frankly, under Chinese cybersecurity law there is a very close relationship between the Chinese government and what it can compel companies to do. So it’s a very different approach to cybersecurity than the one we would talk about, which is protecting the consumer, the individual, and the user, with the government at a distance: the government sets the standard, the government is the certifier. Those are the issues we’re grappling with.
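
A sketch of the kind of machine-readable record such a label could point to; every field name here is hypothetical, chosen only to mirror the disclosures Neuberger lists (patch cadence, end-of-support date, data collected).

    from dataclasses import dataclass, field

    @dataclass
    class DeviceLabelEntry:
        """Illustrative registry entry behind a consumer device label."""
        product: str
        vendor: str
        standard_met: str        # the cybersecurity baseline the device was tested against
        patch_cadence_days: int  # how often security updates ship
        support_ends: str        # after this date, "you're on your own"
        data_collected: list = field(default_factory=list)

    entry = DeviceLabelEntry(
        product="Example Baby Monitor",
        vendor="Acme IoT",
        standard_met="NIST consumer IoT baseline",
        patch_cadence_days=30,
        support_ends="2030-01-01",
        data_collected=["audio", "video", "device telemetry"],
    )
    print(entry)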

EDELMAN: Great. Thank you. Next question. Glenn, please.

Q: Good. Thank you. Glenn Gerstell. Thank you to the panel.

What’s the role of liability in this area, as a solution? There’s, of course, lots of attention paid, albeit differently between the U.S. and the EU on regulation. But the National Cyber Strategy said that regulation in the U.S. was a part of the solution, but it also talked about the idea of imposition of liability. So does the panel have any opinion on whether holding software and hardware manufacturers liable for buggy software, buggy devices is part of the solution, or is that going to create more problems? Thank you.

EDELMAN: Roberto, do you want to begin?

VIOLA: This is a great question. Without entering into complications: tort law and liability in the European Union are not uniform. We have no power to impose a uniform liability regime. But we have the so-called directive power, so we can set the framework, but then national legislation and national jurisdictions have to do the rest. What we think—and it is a little bit the same for the CRA, so for connected devices, and for AI—is that if a company is compliant with the CRA or with the AI Act, of course in court it can claim it has really been following a strict certification procedure, and passed the certification procedure.

So let’s say part of the liability is taken off by the independent verification that took place. So unless the company was hiding something from the certification process, it’s already a different case altogether. So in a way, we say, I mean, that complying ex ante also shields the companies towards third parties, because then what remains is the problem—and we have now a big discussion about AI liability—of the unintended consequences. But there cannot be any intended consequences if the device has been certified. This is an extremely complex discussion. The advertisement that, if you allow me, I will make is that by complying with trust marks, companies have part of the liability taken off by compliance.

NEUBERGER: I think that’s very well put. It is a difficult space, because the irony of the space is that today there isn’t liability for creating buggy, insecure software. And as a result, the practical cost of that goes to the user—and to the average citizen when we’re talking about software in a critical sector like health care, where if a device or a set of devices is compromised, that can lead to, for example, an entire network being encrypted and hospital services delayed in some way. So the goal of the cyber trust mark is to set a reasonable standard that companies can meet. And then, once they put a label on, that label can be enforced. But we believe the power of the market incentivizes enough. We’re hoping the market says, yes, we want those secure devices. So when companies weigh the market incentive of knowing that there’s a consumer market waiting—and we know that’s there—against the liability of meeting the standard, it makes sense.

EDELMAN: Further questions? Right here, please.

Q: Hi. Zaid Zaid from Cloudflare.

Thanks so much for coming here today to talk about collaboration between the EU and the U.S. on cybersecurity. I want to just broaden the conversation a little bit. In both the EU and the U.S., we’ve seen both jurisdictions take actions that can sometimes make it a little bit challenging for companies to provide certain services to folks in the EU and in the U.S. In the EU, there are negotiations on the Cybersecurity Certification Scheme, or EUCS. There have been some calls by member states for sovereignty requirements that would make it a little bit challenging to provide cybersecurity across borders.

And then in the United States, there’s been this push by the White House for U.S. cloud providers to collect and verify the identity of non-U.S. users, including those in the EU. It makes it a little bit challenging for companies in the U.S. to provide services to Europeans, with the privacy expectations that come along with selling to European customers. So I would love to hear from both of you: with the collaboration that you all are working on, and thinking about the EUCS sovereignty requirements and the data collection required of U.S. cloud companies, how can you get around the barriers that some of these regulations might create?

VIOLA: Again, another very important question. Let me give you first a general answer, and then maybe comment on the European cloud certification scheme. The general answer is that this is why we are testing a different way of doing things with this cyber label: because if we start to stabilize the idea of the transatlantic market for tech, then, I mean, between the two closest allies, companies can rely on the same rules and the same way of complying. And this is a big asset, I think, because these are the two largest markets of the world when it comes to tech.

Now, the EUCS is not there yet. And the reason why it’s not there is that we are not yet convinced that it is mature enough. And there are discussions. There are people thinking that there should be more so-called sovereignty requirements; they look at FedRAMP. There are other ideas that, I mean, it should simply be a certification scheme for cybersecurity. And we are progressing in this conversation. The scheme will be adopted when things have stabilized. And the contribution of companies is very important; we are taking every contribution very seriously. But the work is still going on.

NEUBERGER: So cloud has become a core underpinning of our national digital infrastructure. In some ways cloud can improve cybersecurity. You know, when you’re moving from thousands of servers or computers that an on-premise team has to maintain, patch, and keep up, moving to the cloud can make it a lot more efficient and effective to do that. However, that brings tremendous responsibility on cloud companies as they become storehouses of corporate data and national security data. And we’ve seen a number of cases where the cybersecurity just doesn’t meet the mark of what’s needed, right? Because—the old joke: Why do people rob a bank? Because that’s where the money is. Why do people hack the cloud? That’s where all the data is today, right?

FedRAMP is remarkably out of date. The cybersecurity requirements were last updated ten years ago. That’s unacceptable; the threat has evolved many, many times in those ten years. And there are rules that essentially say, if you’re in the FedRAMP process, you don’t need to get out. That’s not OK. So there are things we need to do on the U.S. government side to make sure that we’re using the power of procurement effectively. On the requirements regarding know your customer: the reality is that with both criminal and national adversaries, you know, what we’ve seen them do is compromise or sometimes just buy a cloud account, conduct cyberattacks, (glean back ?) the data, and then move it from there.

And that process—buying or compromising a cloud account, conducting a cyberattack, getting back the data, and then moving—is faster than our law enforcement process can work, faster than the process we have in government to find that, et cetera. Cloud companies use AI expressly for their services; they know who’s buying services. So there’s a responsibility companies have to determine: Is this a malicious user? And if so—much as, as part of our public-private partnership on security, we call on banks to know their customer, to know who’s opening an account, to know if it’s a money launderer—the same responsibilities are there for cloud companies. And that’s what Executive Order 13984 is doing.

EDELMAN: Alan.

Q: Alan Raul, Sidley Austin and Harvard Law School.

I’d like to follow up on Glenn’s point about liability and Mr. Viola’s comment about tort law and the inability, I think, to preempt it. It’s really the question of safe harbor. Are the trust marks going to provide some sort of safe harbor, at least from, perhaps, government investigation and liability? And a more complicated question even than safe harbor: Ms. Neuberger, you talk about, you know, if the trust mark is there, consumers, for example, will know that the product is safe. But is it really safe? We know that there are a lot of hacks perpetrated on companies that follow the rules and spend lots of resources on the NIST framework and elsewhere, that there are zero-day attacks, and so on—or, you know, developments where a static trust mark symbol may become out of date because of the evolution of threat actors’ capabilities. So, one, how do we keep pace with zero days and evolving threats? And is a safe harbor going to be provided as an incentive to take the steps of going for the mark? Thank you.

VIOLA: Maybe I can go first, because this issue, zero days, has been one of the biggest discussions in the Cyber Resilience Act legislative process. I should also say, during this process on zero days, we had a lot of informal discussion with Anne, and she was very helpful, I mean, in terms of the thinking. Because the CRA tells companies: if there’s a zero-day vulnerability that is being exploited—meaning you know that you don’t have the patch and you know that a threat actor has already exploited it—you should inform a national cybersecurity agency about it. The original formulation was a bit broader; we have listened and narrowed it. But companies don’t like it, I mean, to be clear.

Saying that, the other side of the coin of this disclosure of zero-day vulnerabilities is that you give threat actors yet another possibility, because the information will spread; if there’s a leak of this information, it would be a catastrophe. Now, the final version of the law says: You give this information confidentially to one cybersecurity agency—the cybersecurity agency where you have certified the product. And you also tell this cybersecurity agency, don’t tell others, because I’m going to fix the problem; I know already how to fix it; it’s just a matter of time, and a short time. But then, if the company realizes there’s no fix in the short term, then there’s a systemic risk all over Europe, and the other cybersecurity agencies have to be informed on a confidential basis. And that’s what the law says now.

So, indeed, simple compliance is not enough. I think we need active management of what kinds of outcomes there could be and what kinds of situations we might face, and zero days are one of the most threatening cases. Now, will a company that had an exploited zero day, and that created damage to someone, have to face damage compensation through due process? I would say yes, and there's no safe harbor that can shield them. But at least in this way the public damage, the externality of this damage, is limited, because a public authority is informed that this is going on. So this is the solution we found. I think it's a good solution, and I thank Anne for all the suggestions. And, frankly, I think companies thinking about it should also feel that it's even better for them that, on an ongoing confidential basis, one authority knows about it.

NEUBERGER: You said it so well. There's a range of sophistication in cyberattacks, and our goal is to make it riskier, costlier, and harder for an attacker. Other than very classified national security systems, where we can invest a lot of money to make a successful attack, we believe, almost impossible (because those are important national secrets), there's also a balance, right? You know, we recently had the Iranian government hack a set of water systems in the country. And as we talked about it, we kind of chuckled and said they don't even deserve to be called cyberattacks, because they leveraged the default password, which, oh, by the way, was on the company's website. And, oh, by the way, it was 1111. (Laughter.)

So there's a standard of care that companies producing tech devices have as well. And I loved Roberto's point that there's also the public branding piece. When a cyberattack happens and it's a zero day, I think there's an understanding: how long does it take the company, from the time they were made aware of the zero day, until they've patched the device? And do they have a way to push out patches automatically? That's the expectation. On the other end of the spectrum, to your point: you shouldn't have default passwords baked in that way. You shouldn't have no ability to maintain devices, where you sell it and you're done.

So that's what we're trying to do. We're trying to move to the higher end of the spectrum, to make it riskier, costlier, and harder. And, to your point, that's why when we talk about the cyber trust mark we say more cybersecure, more cybersafe. We're using the government standard to say this device meets the standards, right? To say we've raised the bar to a reasonable bar of what's needed for the particular kinds of uses we're talking about.

EDELMAN: Further questions? Please.

Q: Hi. John Ackerly, co-founder and CEO of Virtru Corporation.

There has been a lot of discussion today in both panels about threat detection, remediation, and raising the bar in terms of being able to access these devices. At the end of the day, in many circumstances, you have to assume breach. So how do you both, in this context, think about the data itself, which is what really matters in terms of the threat? How do you tag data, ensure that the data has high integrity, and, if devices like cameras are capturing private data, ensure that the data piece is actually addressed front and center?

NEUBERGER: To be very frank, and I'll be very direct on this because it's a topic I feel very strongly about: all data has got to be encrypted. We recently had the hack of Change Healthcare, a major American health-care claims exchange. Sensitive medical records of eighty million Americans were stolen. If that data had been encrypted, then even if it's stolen, the best high-performance computers today cannot break AES-256, which is the common commercial standard. It is inexcusable that in 2024, with this level of criminal cyberattacks against critical infrastructure, let alone national adversaries, data like that is not encrypted.

So folks often talk about ransomware attacks, with the companies at the end, whether it's a hospital or otherwise, as the victims. The real victim is the American whose medical data, maybe a sensitive medical procedure, is now at risk of exposure. Think about treatment for alcoholism. Think about mental health treatment. A lot of people may not want that public, and they deserve the right to know their privacy is protected. So that's a fundamental principle. I don't even want to get to tagging data. Encrypt the data, so that even if it's stolen we're not at risk, whether of the national-level blackmail we've observed Australia and Scotland deal with, or of the blackmail of individuals we've observed in a number of cases as well.
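
To make that point concrete, here is a minimal sketch of what "encrypt the data" can look like in practice, using AES-256 in GCM mode via Python's pyca/cryptography library. The record contents and context label are illustrative, and key management, the hard part in any real deployment (typically a KMS or HSM), is deliberately elided.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key. In production this would come from a KMS/HSM,
# never be hard-coded, and would be rotated; this sketch elides all of that.
key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

record = b'{"patient_id": "...", "procedure": "..."}'  # illustrative record
nonce = os.urandom(12)  # GCM nonce: 96 bits, unique per encryption

# Encrypt and authenticate; the label binds the ciphertext to its context.
ciphertext = aead.encrypt(nonce, record, b"medical-record-v1")

# An attacker holding only `ciphertext` must brute-force the 256-bit key,
# which no known computer, high-performance or otherwise, can do.
assert aead.decrypt(nonce, ciphertext, b"medical-record-v1") == record
```

The asymmetry is the point: exfiltrating the ciphertext gains an attacker nothing without the key, which is exactly the property being invoked here.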

VIOLA: Yeah. In Europe, there are two different pieces of legislation that force a company which has been attacked and has suffered a data leak to notify the public: the data protection regulation and the NIS directive. And of course, if the data have not been encrypted, and they are sensitive data according to the data protection regulation, the company's liability is very serious in terms of sanctions and in terms of class actions, or whatever else can happen.

That said, I could not agree more with Anne. We have a constant discussion with the law enforcement community. I think there's no alternative to end-to-end encryption. We cannot have a backdoor for the friends and not a backdoor for the foes. If there's a backdoor, it's a backdoor, and the system is flawed. So I think robust encryption is really what is needed. And there cannot be robust encryption with an exception, where if the right someone comes along, it's OK, you hand over the little key and they can open the data, because then it's the end of a secure system.

And so we have always struggled with this, trying to convince the law enforcement authorities that you can do some post-processing; companies can cooperate, use high-performance computing ex post. But don't break encryption. And we went even further, because now we have a recommendation on post-quantum encryption. It's a long journey; it's not tomorrow. But we need to start this journey.
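
The "journey" described here usually begins with hybrid key establishment: derive each session key from both a classical exchange and a post-quantum key-encapsulation mechanism (KEM), so an attacker must break both. Below is a minimal sketch using Python's pyca/cryptography for the classical half; the post-quantum shared secret is a labeled placeholder (random bytes standing in for an ML-KEM/Kyber secret), since the stock library does not ship a post-quantum KEM.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: an X25519 Diffie-Hellman exchange (peer simulated locally).
our_key = X25519PrivateKey.generate()
peer_key = X25519PrivateKey.generate()
classical_secret = our_key.exchange(peer_key.public_key())

# Post-quantum half: PLACEHOLDER bytes standing in for an ML-KEM (Kyber)
# shared secret; a real deployment would use an actual KEM implementation.
pq_secret = os.urandom(32)

# Derive the session key from BOTH secrets, so breaking the session requires
# breaking the classical exchange AND the post-quantum KEM.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-handshake-demo",  # illustrative context label
).derive(classical_secret + pq_secret)
```

The hybrid construction is deliberately conservative: if the post-quantum scheme later proves weak, security falls back to the classical exchange, and vice versa, which is why migration guidance generally favors it over a wholesale switch.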

EDELMAN: So one final question. Ten years ago, at the Lisbon summit, we created a U.S.-EU Cybersecurity Working Group, the very beginning of what would ultimately become, with a little interregnum, this deeper collaboration we're seeing right now. Here we are, ten years on. We're seeing new regulatory and legal frameworks. We're seeing regular collaboration, and in particular the day-to-day engagement you're having in addition to the high-level, formal pieces: the agreements that have been signed, this new trust standard that is being developed. Indulge us for a second. Ten years from now, we're back on this stage. What's the one most different thing about U.S.-EU collaboration on cybersecurity that we hope is there ten years from now?

VIOLA: Well, I mean, I was there ten years ago—(inaudible)—

EDELMAN: (Inaudible)—sadly. (Laughter.) We’re all getting older.

VIOLA: And it was really an attempt to kind of meet, have nice coffee, and discuss. That was the beginning. We went through different political phases of the EU-U.S. relationship. Now we are at the peak of being determined to make this something very serious and concrete, and I hope it stays that way. No one can tell what the future holds; that much we learned. Because what we are trying to do doesn't take one day, to be clear. It takes time, and it takes constant political willingness, which I hope will be there. If that's the case, and companies also support this, I think we can reach the landing spot, which is a secure transatlantic market for connected, AI-powered devices. That would be my little dream in the box.

EDELMAN: Anne, last word?

NEUBERGER: We have really hard problems, and we have a shared vision of how to get there, right? Digital infrastructure: our economies ride on it, our national secrets ride on it. And the way we operate today, assuming insecurity and then trying to lock the doors after, trying to do the threat-intelligence sharing, trying to warn a company that's been hacked, we can do so much better. So I think this partnership, particularly on the trust mark, is a test to see whether we can use the power of the joint market, independent of the different tools we have, to change the conversation to one where the default is secure for homes, businesses, and schools. And I think we have the pieces of that together.

There's the foundational collaboration our leaders have put in place, President Biden and the European Union's leadership as well, and then the practical work at both the technology-leadership level and in the working groups, comparing standards and doing the work to change that equation. And nothing drives deeper partnership than outcomes and successes, because then people invest in them more. So among the partnerships that have been put in place, whether it's the TTC or the partnership on critical minerals or on trade, this technology one is a core pillar. And I think the practical work we're doing here will generate the outcomes that deepen it further.

EDELMAN: OK. Well, that’s a great note to end on. With that, we’ll come to a close. A reminder, the audio and transcript of today’s meeting will be available soon, posted on the CFR’s website. I want to thank CFR for hosting us. And please join me in thanking our distinguished panelists, Roberto and Anne, for being with us today. (Applause.)

NEUBERGER: And to David!

(END)

This is an uncorrected transcript.
