As the details of the hacking of the U.S. Office of Personnel Management (OPM) became public in 2014 and 2015, the refrain from the press, Congress, and the general public was: how could this happen? How could hackers, probably from China, have stolen what one former official called "crown jewels material … a gold mine for a foreign intelligence service"—the personal data of 18 million individuals, including sensitive information on federal employees? After reading Red Team: How to Succeed by Thinking Like the Enemy, the excellent new book by my colleague Micah Zenko, you are likely to ask instead: why doesn’t it happen more often, and is there anything to be done to make sure it does not happen again?
There were, of course, large problems with cybersecurity at OPM. The agency did not have a professional information technology security staff until 2013, lacked mechanisms to detect intrusions, and did not deploy two-factor authentication or encryption. The Inspector General warned Congress of "persistent deficiencies in OPM’s information system security program," including "incomplete security authorization packages, weaknesses in testing of information security controls, and inaccurate plans of action and milestones."
While Zenko does not talk about the OPM hack, his subject is institutional myopia and how to prevent it. The failures of OPM were extreme, but not unique. Almost everyone thinks they are an independent thinker who clearly sees strengths and weaknesses and speaks truth to power. Almost every boss says they foster an environment that rewards creative solutions, risk taking, and questioning. In reality, complacency, conformity, and "yes men" prevail. Cognitive and organizational biases combine so that most fail to identify the faults and blind spots that can cause calamitous failure.
One possible solution is to deploy red teams—a "structured process that seeks to better understand the interests, intentions, and capabilities of an institution—or a potential competitor—through simulations, vulnerability probes, and alternative analyses." Zenko provides a useful, and entertaining, history of red teams in the military and private sector. The cast of characters includes CIA directors, military commanders, hackers, social engineers, New York police, Federal Aviation Administration investigators, and others.
Brian Krebs just reported on a little-known Department of Homeland Security program that offers penetration tests to critical infrastructure industries so they "can shore up their computer and network defenses against real-world adversaries." The National Cybersecurity Assessment and Technical Services scans a "target’s operating systems, databases, and Web applications for known vulnerabilities, and then tests to see if any of the weaknesses found can be used to successfully compromise the target’s systems." These are exactly the types of programs described in Red Team, but Krebs also identifies some of the potential pitfalls of the red teaming process. The DHS digital intrusion service covers only a limited set of vulnerabilities, and so could in the end provide cover for doing nothing. Krebs quotes Alan Paller, director of research at the SANS Institute: "The problem is that it measures only a very limited subset of the vulnerability space but comes with a gold plated get out of jail free card: ’The US government came and checked us.’" Zenko is sensitive to Paller’s concerns, and in the book offers several examples of red teaming failures and guidance on how they should, and should not, be deployed.
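The scan-then-exploit workflow Krebs describes begins by matching exposed service versions against a catalog of known vulnerabilities. A minimal sketch of that first matching step is below; all hostnames, service banners, and vulnerability entries are hypothetical illustrations, not real CVE data or anything from the DHS tooling itself:

```python
# Toy sketch of one step in a vulnerability scan: flagging services
# whose version banners appear in a list of known-vulnerable versions.
# The catalog and banner data are hypothetical, for illustration only.

KNOWN_VULNERABLE = {
    "OpenSSH_6.2": "outdated SSH daemon with known flaws (hypothetical entry)",
    "Apache/2.2.15": "outdated web server with known flaws (hypothetical entry)",
}

def flag_vulnerable_services(banners):
    """Given (host, port, banner) tuples, return (host, port, finding)
    for each banner that matches a known-vulnerable version string."""
    findings = []
    for host, port, banner in banners:
        for version, description in KNOWN_VULNERABLE.items():
            if version in banner:
                findings.append((host, port, description))
    return findings

if __name__ == "__main__":
    observed = [
        ("10.0.0.5", 22, "SSH-2.0-OpenSSH_6.2"),    # matches catalog
        ("10.0.0.7", 80, "Server: Apache/2.4.41"),  # no match
    ]
    for host, port, desc in flag_vulnerable_services(observed):
        print(f"{host}:{port} -> {desc}")
```

A real assessment would follow this with active exploitation attempts against each finding; Paller's point is that a clean run of a scanner like this checks only the catalog it knows about, not the full vulnerability space.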
Perhaps one of the greatest limitations of red teaming is that there seems to be no accepted metric for measuring its impact. As Zenko notes several times, red teams struggle to quantify what their efforts saved the targeted institutions in dollars and personnel costs. Still, another OPM-like hack is bound to happen, and Red Team is essential reading for those who want to understand how thinking like the attackers can help improve cybersecurity defenses.