Micah Zenko, senior fellow at CFR, discusses the use of red teams—groups enlisted to identify weaknesses and anticipate threats—by the military, intelligence community, and private sector, and outlines best practices for employing these teams effectively, as part of CFR's Academic Conference Call series.
Learn more about CFR's resources for the classroom at CFR Education.
FASKIANOS: Good afternoon from New York, and welcome to the CFR Academic Conference Call Series. I’m Irina Faskianos, vice president for the national program and outreach at CFR. Thank you all for joining us for the first call of this semester. Today’s discussion is on the record, and the audio and transcript will be available on our website, CFR.org.
We are delighted to have Micah Zenko with us to talk about the use of red teams. Dr. Zenko is a senior fellow at CFR. Previously, he worked for five years at the Harvard Kennedy School, and in D.C. at the Brookings Institution, the Congressional Research Service, and the State Department’s Office of Policy Planning. Dr. Zenko writes the CFR blog Politics, Power, and Preventive Action, which covers U.S. national security policy, international security, and conflict prevention, and he also has a column on ForeignPolicy.com. His most recent book, “Red Teams: How to Succeed By Thinking Like the Enemy,” was released by Basic Books in 2015. You can follow Dr. Zenko on Twitter at @MicahZenko.
Micah, thank you very much for being with us. I thought we could begin—if you could talk to us about why you wrote this book, and what is it we need to know about red teams.
ZENKO: Wonderful. Thank you so much, Irina, for the opportunity.
And thank you to everybody out there who’s listening in. I know you have a lot of other things you could be doing with your time, so I’m grateful that you’re taking the time for this conversation. And I look forward to your questions a great deal after I get through my initial pitch.
The reason I wrote this book was that, for all of the attention and interest in red teams across various communities—in the military, in the intelligence community, in the private sector, in homeland security, and certainly in the hacking world—nobody had written a book that provided a comprehensive description of their use, offered a specific typology for what red teaming is, or laid out some best practices.
So, when I talk about red teaming, I’m talking about three specific types of activities. The first is simulations. These happen before a scheduled or likely event: all the relevant actors come together to try to assess all the ways that the event could go wrong, and what contingencies, worries, and threats they need to plan for.
So one example of a simulation that I really focus on in the book was here in New York at the NYPD headquarters—One Police Plaza, downtown. The commissioner, beginning under Ray Kelly and continuing today under the current commissioner, holds what are called tabletop exercises. This is where all of the senior NYPD commanders, all of the senior New York City government officials, as well as some other relevant people come together to think through all the ways that an upcoming event could go catastrophically wrong. They do this, for example, before the marathon, where they bring in the people who run the marathon, like New York Road Runners, and the people who run the bridges, tolls, and tunnels. It’s quite a fascinating thing to witness. I’ve been able to attend several of these. People get to learn how each other is prepared for these scenarios, and the point is that they learn collectively. In the absence of it, everybody would be less well prepared and less smart about the situation. So that’s one type of red teaming.
A second one is what are called vulnerability probes. These are attempts to break into some defended system, the whole point being that the people who run your IT network or the guards at the front door of your building are the least likely to conceive of all the ways that a motivated adversary can break in. So you have individuals assume the role of a malicious actor and try to break into some defended system.
One very notable example of this was last year at TSA screening checkpoints at airports across the country; the tests were done at six different airports. They took people from the Department of Homeland Security—literally pulled them away from their desks—and said, try to smuggle a banned weapon or explosive through this checkpoint. And they succeeded 67 out of 70 times. Now, the point of a vulnerability probe is not to embarrass or humiliate the staff by demonstrating their weaknesses; it’s to uncover vulnerabilities that they cannot uncover on their own, and then to provide corrective measures for dealing with them, to make it less likely that somebody malicious could find the same vulnerability.
The final type of red team—in many ways the most interesting—is what’s called alternative analysis. This rests on an observation I always share: nobody goes to work every morning and decides on their own what to do that day, right? You come in with a series of beliefs, values, and standard operating procedures. And lots of social science has demonstrated that, over time, even people who are quite different in their backgrounds and makeups—demographically and in perspective—tend to think alike. Groupthink is a fact of any group that works together closely over time.
Similarly, another phenomenon in a lot of organizations is hierarchy. We know from lots of studies that we care tremendously what our bosses think about us. And as a result, people are very unlikely to voice dissenting and challenging viewpoints. Some people stay silent because they think they’ll be retaliated against; actually, more people stay silent because they think it’s pointless. So, as a consequence, every organization has assumptions that go unchallenged and blind spots that are never identified, and they never really think about adversarial perspectives. This is true whether you’re in a military command, in the intelligence community, or in the private sector.
So one thing that you can try to do is a type of alternative analysis that steps outside the skin of the employees. The motto of my book, which is true everywhere, is: you cannot grade your own homework. Right? The people who run the company, who do the day-to-day operations, who develop the strategies—they are in love with those strategies, operations, and routines. They cannot conceive of them in different and alternative ways. You cannot step outside of your own skin one day and suddenly think critically or think differently. We all like to believe we can, but in fact lots of evidence suggests that it’s very difficult.
So, for example, if you’re a pharmaceutical company and you have a large strategic decision coming up—say, a drug going off patent—that’s a big deal, and you care tremendously about how a bunch of actors will respond: the marketplace, regulators, competitors, dedicated consumers. One of the things companies have learned is that if your own strategy and operations teams try to model this, they actually miss a lot, because they cannot really put their feet in the shoes of the other. So you hire an outside group—they’re called business wargamers—who come in and force you to assume the role, identity, and motivations of all these different competing groups. And it produces a demonstrably different outcome, because they conceive of the problem and the challenge in a new way.
Another example of alternative analysis, which I focus on a lot in the book, is what’s called the Red Cell. This is a unit within the CIA that does not do normal, mainline, authoritative analysis. Most of what the intelligence community does is try to describe the world as it is for policymakers. The problem is that policymakers over time say that this starts to look like The Economist, or The New York Times—it’s pretty much what they already think about the world. But they want people who make them think differently. And so the Red Cell is autonomous from the other analytical arms. They self-task—they basically decide what they want to work on. They do not have to go through the coordination process of distributing their products to others and making sure they toe the party line on various thematic issues. And what they write is very, very different. I interviewed everyone from multiple CIA directors to national security advisers to secretaries of defense, and they loved the Red Cell—everybody loves it. The reason is that it makes them think differently about the world and about various problems in a way that they cannot find from other parts of the intelligence community.
So I’ll just end with a couple of points. I didn’t want to write this book just to explore the subject, but to provide some guidance for senior leaders in lots of other fields. I did over 200 interviews and it took five years to write, but it’s not just descriptive. It’s about trying to help senior leaders and managers—whether you’re in the C-suite of a business, a colonel running an Army brigade combat team, or a policymaker in the State Department—answer the question: how could I use and think about red teams?
And so I have a whole chapter that lays out sort of six best practices of what tends to work, but I’ll just mention two of them up front.
The most important best practice of all is what I call “boss’s buy-in”: a senior leader or a boss has to care about the red team. They have to care about what it’s doing. They have to signal that its findings will be listened to. They have to signal to everyone who matters that the red team is a priority for the senior leader, and he or she has to tell people that. Without that, the red team will not get the access it needs, it will not be funded and resourced sufficiently, and it will just be a check-the-box exercise. So that’s one really important best practice: somebody senior has to care.
And another one is that the institution has to be willing to hear bad news, right? A red team will find problems with what you’re doing, and it’s very easy to be defensive and judgmental and push the bad news away.
So, for example, a lot of institutions now, because their computer networks are at risk, will hire a group of outside penetration testers—otherwise known as white-hat hackers—to try to break into their networks. And they don’t just try to break in for the sake of it; they try to break in and cause the greatest pain possible for your institution. They want to get access to your customers’ data. They want to get access to the most embarrassing things they can find. They want to get access to personally identifiable information and everything else. They can dox it, sell it to competitors, embarrass and humiliate you, or hold you hostage by gathering your information, encrypting it, and only agreeing to give it back if you pay some sort of ransom. So you hire people to assume the role of malicious hackers. This generally takes about two weeks, depending on how long you want to run it, and they always find problems. The ways hackers break in are always different than you think. But there are certain common things—spear phishing, weak or unhashed passwords, too many people having privileged access to a network system—and there are lots of other ways they break in.
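As an illustrative aside (the function names below are invented for this sketch, not from the call), the "weak or unhashed passwords" finding comes down to this: a stolen plaintext credential file is immediately usable, while a salted, deliberately slow hash such as PBKDF2 forces an attacker to guess each password.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using a salted, slow hash (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)                      # unique per user, defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

# The server stores only (salt, digest), never the plaintext.
salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # correct password verifies
print(verify_password("hunter3", salt, digest))   # wrong password does not
```

The design point is the iteration count: 200,000 rounds is negligible for one legitimate login but ruinous for an attacker trying millions of guesses against a leaked database.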
Now, the point is that they then provide the IT staff, the board of directors—whoever it is who needs to be—(inaudible)—with a series of corrective, remedial measures that are prioritized. If there’s a really bad problem, you might need to throw a lot of money at it, and it might take six months. But oftentimes there are a lot of very quick fixes that don’t cost money; they just take a little time and effort to educate employees or to reconfigure your network in some way, and those can all be done very quickly. But lots of companies hire outside penetration testers, the testers find vulnerabilities, and then the companies just dismiss them out of hand. They’re not interested in making the repairs because they don’t believe the bad news they were provided. So if you’re not willing to hear bad news, you shouldn’t red team at all.
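The prioritized remediation list described here can be modeled very simply. This is a minimal sketch with invented findings and scores: rank by severity first, then by how quickly the fix ships, so cheap "quick wins" surface ahead of equally severe long projects.

```python
# Illustrative triage of penetration-test findings (all entries are hypothetical).
findings = [
    {"issue": "unsegmented network",        "severity": 9, "effort_days": 120},
    {"issue": "spear-phishing awareness",   "severity": 7, "effort_days": 2},
    {"issue": "unhashed password storage",  "severity": 8, "effort_days": 10},
    {"issue": "excess privileged accounts", "severity": 6, "effort_days": 1},
]

# Sort by descending severity, then ascending effort: the worst problems
# come first, and among equally severe ones the fastest fixes lead.
plan = sorted(findings, key=lambda f: (-f["severity"], f["effort_days"]))

for f in plan:
    print(f"{f['severity']}/10  {f['effort_days']:>4}d  {f['issue']}")
```

A real engagement report weighs more dimensions (exploitability, blast radius, compliance exposure), but the shape is the same: an ordered, actionable list rather than an undifferentiated pile of flaws.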
So I think I’ll stop there and turn it over to questions. There are lots of stories, lots of vignettes, and lots of very interesting personalities who do this. So I look forward to getting your questions and your feedback. Thank you.
FASKIANOS: Thank you so much, Micah. That was a great overview.
Let’s open up now to questions from the students.
OPERATOR: Thank you. At this time, we will open the floor for questions.
(Gives queuing instructions.)
Our first question will come from the University of Notre Dame.
Q: I was wondering if you ever see situations where the red team is more capable than the potential adversary, and thus exposes vulnerabilities that are potentially unnecessary to close, given the capabilities of the people who would actually exploit them in real life?
ZENKO: That’s a great question. And, yes, that happens, but a good red team should try to avoid that.
I mean, one of the people I interviewed in my book is a guy named Steve Elson, who’s a former Navy SEAL. One of the things that Navy SEALs do is try to break into the most secure areas in the United States, including places like Camp David and Navy nuclear submarine sites. And he described to me that basically, because they’re so tactically proficient and aggressive, they can break into anywhere. But the problem is that when you break in with overwhelming force that a plausible adversary won’t have, the defenders just roll over and die, and they stop learning. So one of the things they learned over time is that the first time you do a red team engagement, you try to break in at a very simple, low level. You probably will succeed, but it allows the defended system to increase its level of defenses, to improve. Then the next time you break in at a higher level, and finally you break in at the highest level you can.
But this also happens in the cyber world. I have stories of military commands—the National Security Agency has a unit called Tailored Access Operations, or TAO, the elite hackers within NSA. Among other things, TAO tries to break into the CIA’s computer networks and into military command networks. And there are lots of times when they break in, but then the military command says, oh, you’re just too good; what you’re doing is something somebody off the street couldn’t do. So one of the things they do is buy malware off the Web, demonstrate that they bought it off the Web, and then demonstrate with screenshots how they used it—so you can’t say, well, you came up with this zero-day exploit on your own, right?
So that’s actually a really big concern. People who are really good at breaking in can break into just about anything, but that’s not the point. The point is to improve the security of the institution. But that’s a great issue, and it does happen from time to time.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from Georgetown University.
Q: Hi, Micah. My question might be a little bit generic, but I’ve read some of your work on drone proliferation and I found it to be really useful. In general, I was wondering if you could comment on what, in the course of your interactions with red teams, you’ve found to be the gravest threat to U.S. security, and also how to strike the balance between paranoia and prudence when interacting with red teams. Thanks.
ZENKO: Sure. I think it’s funny, because most people in most red teams would essentially say something like this—one of my favorite quotes is from the former secretary of defense, Robert Gates, who says the single greatest national security threat to the United States is the two square miles between the Capitol, the White House, and the Pentagon. And I basically agree with that. The single greatest threats the U.S. faces are self-created, because we simply will not execute common-sense plans or strategies or fund them accordingly, largely because of political disagreement. It’s not really even bureaucratic dysfunction. It’s just that we won’t do it, right?
And that’s true of most red teams. Most really interesting red teaming bears this out—I always say it’s much harder to red team yourself than it is to red team an enemy. It’s not that hard to assume the role of a Taliban mortar team trying to attack some forward operating base in Afghanistan. There are guys who go out and do that all day long, and they demonstrate weaknesses in airbases. That’s pretty easy to do, and then to interpret the results and improve your defenses accordingly. What’s harder is, before a critical event happens—an attack or some catastrophic scenario—to try to understand: what is it about our thinking that is wrong? I just tell people, the hardest thing to do is metacognition—thinking about thinking. Why is it that we believe these assumptions? Why has this become the conventional wisdom within our institution? That’s actually much, much harder. And I have found that oftentimes the problem is almost all internal.
And I’ll just leave you with one great example. I wrote a blog post, which you can Google, called “Red Team Reading List”; it collects all the best things I ever found on red teaming, and I recommend it if you’re interested in diving into this world a little more. One of my favorite reports was from 2014, when General Motors hired a law firm and gave it complete access to do an assessment of the GM ignition-switch failure, which unnecessarily killed over 200 people. It has cost the company hundreds of millions of dollars, and it cost the last CEO and lots of senior executives their jobs. One of the things this study found was that the problem wasn’t anything technical to do with the switch. It didn’t have to do with how customers used it, how people drove, or the manufacturing. It was completely about the organizational culture: the board signaled to every senior leader that the single most important thing was quarterly profits, and they emphasized that over and over again. They told the engineers and the safety staff that when they found faulty switches—when they found engineering problems—they were to use very specific language that downplayed and minimized the risk to customers. And then they said, if you can’t do that, form specific committees, and then subcommittees, to just study problems to death—because they didn’t want to go back and fix problems, right? They wanted to keep rolling out cars to increase market share and overall quarterly profits.
And so it’s a great study because it really condemns General Motors for the organizational culture that the board promoted. And there was no red team in General Motors—nobody tasked to look for problems, to question assumptions, to challenge the culture, to identify blind spots like that. Now, if there had been, they might have found these problems. They also might not have; red teams are rarely determinative. But it’s often the case that the biggest threat to any institution in any competitive environment is how it thinks, how it acts, and how it behaves.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from the University of Texas.
Q: Hi. I’m Henry. My question is, how do organizations decide how many resources to dedicate for red teaming these contingencies, given their budget constraints?
ZENKO: Yeah, that’s a great question. Red teaming is never a core business practice, right? Nobody has red teaming built into their business plan, necessarily. It’s often described as, quote, “nice to have, but not a must-have.” And so lots of companies, lots of military commands, lots of homeland security sectors will never red team because they just don’t want to put resources toward it. The way red teaming happens is that either it’s mandated—there are required red team engagements in the payment card industry, for example, or under HIPAA requirements for protecting health care data—or there is a big event in your sector.
So the Target hack is a great example of this. It was two years ago and it has cost several hundred million dollars; they’re still fixing it up. If you know anything about the Target hack, it started with an outside heating and air conditioning vendor that had network access to the entire Target network in order to monitor and control heating and cooling levels in stores. The hackers broke into this outside vendor, which then got them into the Target network, which was largely unsegmented. My favorite story is that there was later a penetration test of Target done by Verizon, and one of the big vulnerabilities they found was that—(chuckles)—hackers using remote exploits were able to get access to the swiped payment-card data at the checkout aisles of individual stores by going through the deli meat slicer at a different store, right? That’s really the way to think about the Internet of Things and the way networks are connected. So when an event like that happens to Target, everybody else suddenly says, oh my God, we’ve got to throw a lot of money at this.
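The unsegmented-network failure behind the Target breach can be illustrated with a toy model (the zone names and rules below are invented for illustration): on a flat network every host can reach every other, so a foothold in an HVAC vendor's zone reaches the payment systems; with an explicit allowlist between zones, the same lateral move is blocked.

```python
# Toy model of network segmentation. Zones and the allowlist are hypothetical.
ALLOWED = {
    # (source zone, destination zone) pairs that the firewall permits
    ("corp", "payments"),
    ("hvac", "building-controls"),
}

def can_reach(src: str, dst: str, segmented: bool) -> bool:
    """Can traffic from zone `src` reach zone `dst`?"""
    if not segmented:
        return True               # flat network: everything reaches everything
    return (src, dst) in ALLOWED  # segmented: deny unless explicitly allowed

# Flat network: a compromised HVAC vendor box can reach point-of-sale systems.
print(can_reach("hvac", "payments", segmented=False))
# Segmented network: that same lateral move is denied.
print(can_reach("hvac", "payments", segmented=True))
```

The design choice is default-deny: segmentation only helps if cross-zone reachability is an explicit, reviewed exception rather than the default.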
Now, in the cyber world I can tell you that most of what people throw money at is wasteful. It’s basically yesterday’s defenses—highly expensive intrusion-detection software that is good against the old hacks but not against the most likely new ones. Red teaming, by contrast, is a relatively cheap thing to do. For 10 grand you can do a two-week engagement for a small or medium-sized company. Larger companies have open-ended engagements with really skilled people going after their whole supply chain, their executives, their board—whomever they want to target to identify vulnerabilities. But you’re right, it’s rarely a mandatory thing. Somebody senior has to either get it and want to do it, or there has to be some problem in your sector, or else you won’t do it.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from New York University.
Q: Hi. Can you hear me?
FASKIANOS: Yes. Yes, we can.
Q: My question is, in terms of military decision-making, how well would a red cell team work? Because we deal with an entrenched culture, but also one that requires security clearances. So how do we get outside advice, as well as get that outside advice listened to?
ZENKO: Well, as you know, it’s one of the biggest problems in a command structure. Command structures are very hierarchical, and you face tremendous pressure from senior leaders.
One of the Marine lieutenant colonels I interviewed for my book had this great line: when you’re a rising officer, your ability to mind-read your boss is more praiseworthy than your ability to think critically. In the military, your bosses have tremendous influence over your day-to-day satisfaction. They write recommendation reports. They play a large role in whether or not you’re promoted. So it’s very easy to try to mind-meld with the boss. It’s also the case that when you work very closely with people day to day, especially in highly stressful occupations where you put your lives at risk, you tend to cohere very closely—unit cohesion does, in effect, matter a lot. And when you care about unit cohesion and getting the job done, you are very unlikely to challenge the person you sit next to every single day for 14 or 16 hours a day, locked away in some SCIF. And this is a huge problem in the operational planning world.
We know more and more that people who write campaign plans don’t just fall in love with the plan; they sort of become the plan. And they are unable to rigorously test and evaluate it. You need an outside set of eyes that can review the courses of action and try to determine whether the mission can be accomplished. So one of the things they’ve done—both in the Marine Corps in Virginia and in the Army at Fort Leavenworth, where I’ve been seven times and took the two-week course—is to create people with a red team additional skill identifier. They get specific eight- or sixteen-week training in red teaming. They then embed themselves in a Marine Expeditionary Force planning cell, or in an Army brigade combat team—either in strategy or operations, or as a command element in the executive office of the commander. Their whole job is not to help with the day-to-day plans and operations; their job is to provide value by challenging, testing, and evaluating what everybody else is doing.
And they don’t do this, again, to embarrass and humiliate. And they don’t do what’s called the seagull: you don’t fly in at the last minute, quote, “crap on the plan,” and fly away. You’re there to help everybody with what they’re doing and to make it better. But I can tell you, the Army and the Marine Corps worry about this tremendously, because they realize that Army and Marine Corps leaders tend to think very much alike. They tend to be very action-oriented. They are tremendously mission-focused. But everybody sort of thinks, well, I think critically; I don’t need somebody else to come in and think critically for me. Trust me, they do. Just because you’re in the Army or the Marine Corps doesn’t necessarily mean that you can think critically or independently better than anyone else. So I would say those are the two services really at the forefront of creating these additional-skill-identifier red teamers, who come in and serve the role of a red team.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from Fordham University.
Q: My question is, how do you feel about hack backs? Should we use the red team skillset to go on the offensive against an enemy of some sort, or against an intrusion into a company?
ZENKO: Well, that assumes that you have perfect attribution and that you’re willing to violate the law, because, as you know, hacking back—which normally refers to causing specific damage or disruption to somebody else’s network—is unlawful. And it also assumes that you have perfect attribution of the source. Attribution is better than it was a year ago, and a year ago it was better than it was two years before that; we’re getting better at it. But I would be very skeptical about doing that if I were the IT community. Furthermore, the red team does not monitor the use of detection software. Their job is not to—(inaudible)—that is the blue team, or the core IT staff. So the red team can help identify vulnerabilities by assuming the role of a plausible adversary, but on their own they should probably not go back on the offensive.
I always say the red team should not supplant or take the place of the normal operational staff, right? If they find problems with a procedure or a strategy or a plan, their job is not to write a new strategy or plan. Their job is to provide some concrete recommendations that can be acted upon to mitigate the vulnerabilities and blind spots they uncovered. But it’s unlikely that they, on their own, should become their own operational element.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from St. Edward’s University.
Q: Hi there. This is David Maury (ph) from St. Edward’s University in Austin, Texas.
I was wondering if you could address a question I have about the composition of red teams. I’ll use the Red Cell as an example. Do you see it as more effective to have constant staffing of a red cell, or is it more effective to have an ad hoc arrangement where individuals are pulled from their positions to participate in a red cell for a little while and then go back to their normal jobs? And secondly, what can students do now to develop the skills that would be valuable to an employer—and how would you present your skills as a red teamer, or would you even do that?
ZENKO: So I always say that nobody should red team forever. The goal is that you should be exposed to red team methods, values, and principles. You should be taught how to think critically and to understand cognitive biases. You should be trained in metacognition. But nobody should red team forever. In fact, in the cyber world, most people, when they’re young, try to break into systems—they try to hack. Then, when they get older, they move over to a more senior position, where they assume the role of the blue team, which tries to make things run day to day.
And similarly, with the CIA Red Cell, analysts come from all over the intelligence community to serve on it. It holds about twelve or thirteen people, and on average they are only on the Red Cell for two years. You come from some analytical unit—say you’re the China person or the environment person—you go on the Red Cell for two years, you work on very different things, and then you go back to your normal job. So I generally think people shouldn’t do this forever, but it’s very useful for people to be exposed to it.
I would say it’s very hard to know how to get into this profession, because most of the people who do it come to it ad hoc. I always say really good red teamers are both old enough that they don’t care anymore and young enough that they don’t know any better. The truth of the matter is that most people can’t red team, because they care too much about the impressions they make and about their careers. Most people don’t challenge, and they’re not dissenting voices in institutions. That’s really hard. That’s hard to do day after day.
We all like to believe that we’re critical thinkers. We all like to think that we think outside the box. The truth of the matter is that we aren’t; very few of us can be that for any sustained period of time. In fact, the people who run red teams and hire for them often give applicants very interesting tests, skill challenges, and exams. And I can tell you, in the hacking world—both in the cyber world and in physical penetration testing, people who try to break into buildings for a living—some people just can’t do it. Some will say, try to get into this construction site, get to the top, take a picture with your GoPro camera, and come back without getting caught. Others will say, if you needed to disrupt some computer network and you had $100 and 24 hours, how would you do it? Or, if you had to break into some secure system, how would you do it? And, you know, I’ve heard people say, well, what I would do is get a job that night delivering pizzas, assume the role of the pizza delivery person, do surveillance, understand the security, and then try to come back under another guise the next day, or something like that.
So it’s hard to get into this field. It’s mostly in the homeland security world, because they are hiring more and more people to try to break into facilities. For example, there are 400 nuclear reactors in the United States, and every year all of them have to go through one of these sorts of tests. Hospitals—people try to break into hospitals all the time now. Manufacturing centers, too: if you have high-end technology manufacturing or additive technology, it has such value in the marketplace that people try to steal it.
One of the best hackers in the world is a guy named Chris Nickerson. He runs a company called Lares Consulting, and he’s a really, really smart red teamer who does red teaming for Fortune 10 companies. The way he put it to me: you’re the number-one company in the world at something; why wouldn’t the number-two company try to beat you by stealing your stuff? And they do. So companies hire people like Chris to try to steal it before the number-two company does.
So, I mean, I would just learn about the field. Look at the blog post I mentioned, “Red Team Reading List,” because it has a lot of the first-step readings for this field. I would also try to have lots of different experiences and backgrounds. People tend to like candidates who have traveled, who have read or studied a lot of history, who have diverse professional experiences. They like people who have experienced failure—experiencing organizational failure directly tends to matter a lot to the people who compose red teams. So, yeah.
FASKIANOS: Thank you, Micah. Let’s take the next question.
OPERATOR: Thank you. Our next question will come from San Diego State University.
Q: Hi, Dr. Zenko. I have kind of an abstract question, not so much a bureaucratic or concrete one. In your interactions with red teams, is there a sort of ethics to red teaming in terms of how far they’re willing to go during a simulation to achieve results? Are there any rules about what the red team can and can’t do in a simulation? Is there a line that shouldn’t be crossed, even if crossing it might yield more valuable results or more useful information? What would you say to that?
ZENKO: So most red team engagements are very much constrained by the demands of the employer. Say I have a medium-sized company of about 500 employees, with an IT staff of 12 people and a security staff of 12 to 15 people, and I hire a red team. Well, first you have to understand what it is you hold valuable. The red team will work with senior leaders to ask: what is it that you value most? Because most leaders don’t even know what they value most. Is it your market share? Is it your reputation? Is it your customers’ data? Do you care most about profits? You’d be surprised—I’ve talked to senior business leaders who don’t really understand what they value. So, first, a red team helps you understand what you should care about protecting and securing most. Then, without tipping off the IT staff or the security team, they try to get access to it.
Now, the problem is that many senior leaders don’t want the red team to succeed, so they will constrain it. If I’m a red teamer, I look at any company and see a wide-open attack surface. I can break in—cyber, physical, social engineering—in many, many different ways. So they will say, OK, you can try to break in, but you can’t do it during normal business hours because we don’t want to disrupt our company. And then they’ll say, you can’t come in through our vendors, because our vendors don’t have great IT systems or much cybersecurity. And you can only come in through these two IP addresses. And you can only really come in at night, between midnight and five a.m. So now you’ve constrained the red team—whereas, if I’m a motivated adversary, I’m going to break in whenever. I don’t care if I disrupt; in fact, the point is to disrupt.
So that’s one really big problem: red teams are artificially constrained to begin with, to a large extent. Yet even with those constraints, basically every red team still breaks in. They demonstrate vulnerabilities. They get access to what you value, almost every single time, and they’ll do it without you detecting or knowing it. So that’s one thing to think about.
The other problem is that, as I always say, a red team should not engage in fratricide against its own organization. One of the examples I have in my book involves a Fortune 10 financial company. These guys were doing a weeklong engagement, trying to break in using various pieces of pretty good malware and scanning for vulnerabilities, and it was Friday afternoon. The guy who ran it told me they just wanted to wrap it up for the week, so they escalated the attack quite quickly—and they shut down this large financial company’s computer networks for about 25 minutes, just doing the red team engagement. You should never commit an act of fratricide like that. And knowing how to pitch an engagement—neither too weak nor too strong—is something that takes a lot of practice.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from Ridgeway Center at the University of Pittsburgh.
Q: Hi, Micah. We really have enjoyed your presentation to this point. We just wondered if you had any thoughts or advice for us as we try to incorporate the concept of red team analysis into our existing curriculum at the Ridgeway Center.
ZENKO: That’s a great question. I always tell people, you know, red teaming can help you think differently about how you operate just about anywhere. And I’ve been surprised at the number of people who have come to me since I wrote the book—including nonprofits, grant-giving institutions, people who have never red teamed before—who recognize that they are generally unsatisfied with how things are going or believe things could be made better.
So, you know, one of the things I always recommend—and it’s one of my favorite readings on that red team reading list blog post I mentioned—is something they have created out at Fort Leavenworth, now in its seventh version, called the Applied Critical Thinking Handbook. The Applied Critical Thinking Handbook is basically what the Army uses to introduce people to red teaming. And it has all of these liberating structures, which are ways to elicit ideas and to do some really rigorous ideational exercises that you cannot do on your own, right.
So one of the things they teach you at the Army school at Leavenworth—it’s called the University of Foreign Military and Cultural Studies, but its nickname now is Red Team University—is that the group you work with every day cannot suddenly go on some retreat, or sit around a table, and think hard and think critically. As they always say, trying to do that is like riding a stationary bike—you won’t suddenly get anywhere. I mean, you might work on it at the margins, but you just can’t do it, right.
And what liberating structures provide is very good exercises, led by a facilitator or a moderator, that force you to question, challenge assumptions, and identify blind spots. They deal a lot with the problems of hierarchy and with the organizational pathologies and biases that have crept into your field.
So, I mean, I would recommend just looking at some of those and thinking about how you could apply them to how you operate—I’ve actually led some of these now, and they’re really quite powerful. One of my favorites is called weighted anonymous feedback. One of the things we know is that organizations defer to senior leaders: if the most senior person in the room, or the person who makes the most money, speaks first, they will set the tone for how everybody thinks and for the ideas that emerge in any group discussion.
And one way to get around that is to ask individuals to throw out ideas anonymously—to brainstorm different approaches; in your case, to challenge curricula, propose new readings, new courses, different approaches to teaching—whatever it is specifically that people are unsatisfied about but might not be willing to say because of groupthink pressures. Then you compile the ideas, ask people to rank them on four-by-six cards, one through five, and tally up the results.
And one of the more fascinating things you find when you do these exercises is that, almost every time, the youngest and least senior person in the room will have the best ideas. But without doing the exercise, you will never uncover those ideas, because that person will likely be unwilling to raise them—especially if they challenge conventional wisdom within the institution.
So there are lots of these liberating structures, and you can apply them to lots of different problem sets and scenarios. But as I always say, the red team is never determinative. The red team provides new insights and information to decision-makers, but nothing the red team produces will radically change the organization. It is not a silver bullet. It is a way to bring serious critical thinking to problems that you recognize but that, on your own, you cannot find new approaches or solutions to. So that’s—(inaudible). Don’t expect a silver bullet, but expect a new way of thinking about problems.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from George Bush School of Government.
Q: Hi. Thank you for your time. My question is, seeing as red teams are brought in to think differently than the blue teams, how do you ensure the diversity of your red team addresses problems that you don’t know you have?
ZENKO: Right. That’s the hardest thing. Even in the business war-gaming world I mentioned, the people who war-game for pharmaceutical companies tend to do pharmaceutical companies exclusively, because the field is so technical—you have to have so much knowledge about biology, manufacturing, patents, et cetera—that you just do that field forever.
So you want a red team that has some understanding of your organization, its limits, its resources, and the problems it faces. However, if they are too aware of what the blue team faces, they become captured, right—captured by, again, the organizational pathologies and biases—and they can’t see the problem differently. So that is a really difficult thing to balance.
I will tell you that the CIA Red Cell was formed two days after 9/11, when George Tenet called in senior leaders and basically told them: I want you to find people who think differently and who will, quote, “piss off seniors.” And that’s what they were instructed to do. And they intentionally found people who knew nothing about terrorism.
None of the original members of the first Red Cell—I think there were 10, formed right after 9/11—were terrorism experts, because they recognized the problem of what the intelligence world calls the tyranny of expertise. We know that experts know their issue down to the ground level. They know the ins and outs of it. They know the debates. But experts are the least likely to see discontinuities and wrinkles. They’re the least likely to see an issue from a truly alternative perspective.
So what you point out is a good thing to balance, and one of the ways you can try to mitigate it is to build some diversity into the red team. For example, in Washington right now there’s an effort to red team China’s reaction to the U.S. pivot to East Asia. Initially, they basically formed the red team by getting mostly China experts, or even Chinese nationals who would be willing to do this.
And I sort of said, why would you want people who only know China, right? I want people who know history. I want people who know about great-power transitions. I want comparativists—political scientists who can look at different cases of great-power rise or hegemonic transitions, if you know the IR literature. I want people who know about the limits of authoritarian governments. I want people who know about, like, Albania, right, which was very similar to Maoist China in the ’60s and ’70s.
And they took some of that advice and got a very, very diverse group. I think it’s had some impact, because if you only get people who are specialists in a very technical field, they’re going to go to their usual technical techniques to poke holes in the problem or to hack it in a certain way. They’re not going to see it in a broader, holistic way. And that’s why, again, you need that diversity in composition. That’s a great point.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from Norwich University.
Q: Hi, Doctor. I have a quick comment first—it ties into red teaming; it’s a tale of how a general didn’t listen to criticism. I did 30 years in the Marine Corps, by the way, and I was charged with trying to implement a new system. The general in charge of implementing it asked what would be the right way of going about it. And my answer to him was clearly: until you get buy-in from the generals who run the colonels who take the system out into the field, nothing is going to happen, because once we get it in theater, they’re going to revert to what is comfortable and what they actually know.
So my question for you, with all this great knowledge and information that you have—it’s really interesting—is: who currently in our government and administration, in DOD, is listening to you and taking in all this information with a plan to implement it?
ZENKO: Yes, sir. That’s a great question. I will say that in the Army and the Marine Corps they get it and they care, and they’re trying to expose as many people to it as possible—majors who go through Command and General Staff College, lieutenant colonels down at Quantico at Marine Corps University. And at Leavenworth, the number of people who have gone through the course has gone through the roof.
Now, there are little bits of red teaming in the Pentagon. The chairman of the Joint Chiefs has something called the Chairman’s Action Group. They do some red teaming, but mostly they get captured by the day-to-day requirements of their boss, and they don’t really get to carve out time to think critically. One of the biggest obstacles to red teaming in government and in the military is the tyranny of the inbox: today’s problems push out tomorrow’s thinking. That’s a problem over and over again. So if you want a real red team, they have to have the space, the time, and the empowerment so that they’re not retaliated against when they deliver bad news.
I can also tell you that within OSD—the Office of the Secretary of Defense—in the policy unit, there is a group that collects best practices of red teaming, and they actually use some of it to test strategies and some larger force-structure development issues. So there’s a bit of that. Frankly, there’s not a lot of this going on in places like the White House and the State Department, although the State Department does have an Office of Policy Planning where some people try to do some of this.
But to get back to your bigger point: where the command climate does not allow red teaming, it will not succeed or flourish. And lots of commanders, in the military and outside of it, try to stop it very early on, because they perceive it as a threat to their command and their authority.
And if you get a chance, in my book I have an entire chapter dedicated to the Marine Corps under General Jim Amos, the commandant, who in 2010 tried to implement red teaming in the Marine Corps. A lot of senior leaders tried to resist it because they didn’t want to commit the billets to it. They didn’t want people to go away from their commands to go to school to learn this stuff. They didn’t quite understand it. It took a lot of time to get people to care. The successive commandants have gotten a little better buy-in, but there is often a lot of cultural resistance from seniors. That’s absolutely true.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from Patterson School, University of Kentucky.
Q: Hi. I was reading some of your writing on the evolution of the CIA’s Red Cell. Do you think it’s headed toward more authoritative analysis of specific issues, or do you think it will remain a kind of strategic arm that looks at creative issues as well?
ZENKO: That’s a great question. And the reason it’s an important question is that, as you know, last year the director of the CIA, John Brennan, basically decided to do away with the sort of holy wall that had stood between the analytical arm of the CIA and the operations arm. There was already some blending in the counterterrorism world and some in nonproliferation. But now the point is to basically smoosh them together.
Now, the reason you had that wall was that the people engaged in operations should not be the ones evaluating their effectiveness. Again, you cannot grade your own homework. So a lot of people are frankly worried that the analytical side will become captured by the drive to achieve mission effectiveness, to support operations. That is a real worry.
I can tell you, though, that the Red Cell is being protected under the reorganization plan that’s going on right now. In fact, the woman who ran the Red Cell told me recently that the Red Cell is going to, quote, “get redder.” It’s going to do less support work for the analytical arms. As I point out in that case study, one of the things the Red Cell came to do over the years was this: when, for example, the East Asia or Near East arm of the CIA had some big upcoming problem set, they would ask the Red Cell for help, or ask the Red Cell to provide an independent set of eyes on their own analysis.
Now, the problem with that is that you’re back-filling the normal, day-to-day authoritative analytical arms rather than doing truly independent, self-tasked analysis. But I’ve been told it’s going to be protected, and I can tell you it’s still perceived as a very valuable place to be employed. When openings come up, tons of senior analysts apply to work there. And their products—they still turn out, you know, 50 or 60 a year—are tremendously valued by senior leaders. Basically, everyone who can reads every single thing they create, because by definition it’s different from what they read day to day.
FASKIANOS: Thank you. Next question.
OPERATOR: Thank you. Our next question will come from University of Notre Dame.
Q: Hi. My question is one that Philip Mudd, the former director for analysis at the Counterterrorist Center, asked: what do you want people to do with the Red Cell reports after they’re delivered? What’s the plan after those reports come out?
ZENKO: Right. As I pointed out earlier, the red team is never determinative. The red team never provides information that, having read it, senior leaders—or policymakers, in the case of the CIA—go, aha, now I know exactly what to do, right. That’s not what red teams do. But the point is that their work is shared with the leaders who can do something with that information, especially when they face a strategic decision or a new emerging challenge.
The other thing red teams are really useful for is what are called stuck policy problems. These are problems that policymakers have faced for a long time and have gotten stuck on a path-dependent approach to. A good example is that people right now are trying to red team terrorist use of the Internet. This is something we’ve been talking about for 14 years—that terrorists are really good with the Internet, really good at messaging. Why are we so bad at it?
And it’s quite fascinating, because people are literally trying the same things that were tried eight or 10 years ago. I remember all of these countering-violent-extremism ideas, and they’re literally trying the same things over and over again. So there’s really no institutional learning, and there’s very little strategic change in what’s going on. This is really, really ripe for red teaming. It’s really ripe for an independent set of eyes with no vested interest in the outcome. And that’s where a red team can make some difference.
So, again, the key is that the senior leader has to buy in at the beginning, or else people won’t read what’s produced. That’s generally what happens: once it’s produced, they buy in, and then they decide whether or not to act upon it. But a red team analysis is no better than any authoritative analysis in the sense that it does not, again, determine any policy outcomes. The point is that it provides a perspective on a problem set that the policymaker had not received before.
FASKIANOS: Well, Micah, unfortunately I think we are just about out of time. If you want to leave us with one final thought, I invite you to do so.
ZENKO: Yeah. Thank you. Again, thanks very much. As I always tell people, I’m very, very easy to find on Twitter and over email. If you have additional questions, please feel free to be in touch. I would love to get your thoughts and feedback if you get a chance to buy and read the book, because I didn’t write it to not get feedback. I didn’t write it to hide the information. I like hearing about how other people are exposed to it and use some of its findings. So please consider this just the start of a longer conversation if you’re interested. Thank you.
FASKIANOS: Thank you, Micah Zenko. We really appreciate it.
I commend the book to all of you, and I encourage you to follow Micah on Twitter—as I said at the outset, @MicahZenko. And also tune in to his blog on CFR.org as well as his column on ForeignPolicy.com.
And thanks to all of you for your great questions and your preparedness for this discussion.
Our next call will be on Wednesday, February 24th, from 12 to 1 p.m. Alyssa Ayres, senior fellow for India, Pakistan, and South Asia at CFR, will discuss U.S.-India relations.
And I also encourage you to follow us on Twitter @CFR_Academic for information on new CFR resources and upcoming events.
So thank you all for your participation. And we hope that your semester is off to a great start, and we look forward to your tuning in for the next call.