From Net Politics and the Digital and Cyberspace Policy Program

Q&A With Peiter Zatko (aka Mudge): Setting Up the Cyber Independent Testing Laboratory

December 18, 2015

Blog Post


In June 2015, noted cybersecurity researcher Peiter Zatko, better known as Mudge, announced he was leaving Google’s Advanced Technology and Projects group to start what he then dubbed a cyber underwriter’s laboratory at the behest of the White House. In this Q&A, I catch up with Mudge and discuss the origins of the project, its goals, and what he has been up to since launching the effort.

Question: Last spring you caused a lot of excitement among the cyber twitterverse when you tweeted that you’d been asked to start a cyber underwriter’s laboratory. Can you give us the backstory on that and tell us what you have done since?

The government had been looking into how it could better understand the risks and strengths in software and systems. They were trying to figure out what an entity or organization that provided such a capability would look like and how it would be structured. It seemed that everyone they talked to kept pointing them to me for various reasons, from my consumer advocacy work with the L0pht up to my experiments in government contracting and research structures with programs like Cyber Fast Track.

So, I received a call from the White House. They asked if I would be interested in being the head of this new organization if it were to be created within the government. I think I surprised them when I said, “No, not if it is designed to be a government entity.” I explained the challenges I saw if the organization were to be created within the government and why I felt that doing this as a commercial entity had an incorrect incentive structure that would ultimately undermine the whole effort.

We were already starting to set up a similar organization as a non-profit, a fact that I shared with the White House. I said I would go raise the money and get government funding (where appropriate) and that I would keep them in the loop and share the progress and results with them. I don’t think they believed me, so I think they were a bit surprised when we pulled it off. We incorporated the Cyber Independent Testing Laboratory (Cyber-ITL) this fall and are in the process of filing for 501(c)(3) status. The laboratory has received funding from DARPA to conduct a feasibility study.

Q: Stepping back for a minute, what’s the problem that this effort is trying to solve?

In the computer security realm, we have been trying for decades to get the general public to care about security. Now they do care, but they have no way of differentiating good security products from bad ones. In fact, some of the most insecure software on the market can be the very security software that is supposed to protect you.

Some adversaries have processes and procedures to determine which software is easiest to exploit. Our organization tries to quantify the resilience of software against future exploitation.

Q: What’s the ultimate outcome you are trying to achieve? What does success look like?

There are many tiers of success. Four goals we hope to achieve are:

  • Consumers gaining the ability to comparatively distinguish safe products from unsafe, secure from insecure;
  • Pressure on developers to harden their products and adopt defensive development practices;
  • The ability to quantify risk; and
  • Removal of low-hanging fruit, such as the most insecure product development practices, thereby beginning to devalue parts of the exploit market.

Q: You pitched this as a private non-profit receiving government support. Why is that the right model? Why shouldn’t this be a government function?

A project like this needs significant transparency to ensure the trust of the public. It needs to be non-partisan, with commercial money out of the picture, to ensure there are no perverse incentive structures that would work against the goal of publicly disseminating impartial information about commercial (and open source) products.

Consumers Union (Consumer Reports) is the closest example in this space, and I have modeled the Cyber-ITL along those lines.

Q: Conversely, why should the government be involved at all?

The government is a major consumer of commercial software. They have the same challenge of needing to quantify the risk in their software and systems. Like other consumers they are currently missing information that would allow them to make more informed purchasing decisions. Since they have a vested interest in seeing this effort succeed, it’s reasonable for them to fund some of the scientific research elements. We can get funding from charitable organizations and other non-profits, but it isn’t reasonable for non-governmental organizations to foot the entire bill when this will be a useful tool for government.

If the Cyber-ITL efforts provide the value we hope they will, this gives the government a stick with which to hold suppliers and vendors to account.

Q: There are a number of private, for-profit companies that work on aspects of this problem. I’d mention two: Veracode, which does automated code analysis, and NSS Labs, which rates the effectiveness of security software. Do you see this new entity as competing with or complementing these efforts?

I see the Cyber-ITL solving a different problem, though its output will likely be used by the companies you mention to augment the services and deliverables they provide their customers.

Remember that the Cyber-ITL is quantifying the security hygiene of a piece of software (without requiring source code). You can think of this as somewhat akin to the nutritional label you find on food. The job of these labels is to provide the information that enables intelligent and informed decisions, irrespective of what type of consumer you are.
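To make the idea of assessing hygiene "without requiring source code" concrete, here is a minimal illustrative sketch, not the Cyber-ITL's actual methodology: it inspects a compiled 64-bit little-endian ELF binary for two well-known hardening features, PIE (position-independent executable, which enables address randomization) and a non-executable stack. The function name and report format are my own invention for illustration.

```python
import struct

PT_GNU_STACK = 0x6474E551  # program header marking the stack's permissions
PF_X = 0x1                 # execute permission flag
ET_DYN = 3                 # ELF type used by position-independent executables

def hardening_report(data: bytes) -> dict:
    """Report two hardening features of a 64-bit little-endian ELF binary:
    PIE and NX (non-executable) stack. Raises ValueError on other inputs."""
    if data[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    if data[4] != 2 or data[5] != 1:
        raise ValueError("only 64-bit little-endian ELF handled in this sketch")
    e_type = struct.unpack_from("<H", data, 16)[0]           # object file type
    e_phoff = struct.unpack_from("<Q", data, 32)[0]          # program header offset
    e_phentsize, e_phnum = struct.unpack_from("<HH", data, 54)
    nx = False
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, p_flags = struct.unpack_from("<II", data, off)
        if p_type == PT_GNU_STACK:
            # stack is non-executable only if the execute bit is absent
            nx = not (p_flags & PF_X)
    return {"pie": e_type == ET_DYN, "nx": nx}
```

A real evaluation would cover many more signals (stack canaries, RELRO, fortified functions, code complexity), but even this toy version shows how binary-level facts can be read out of a product with no cooperation from its vendor.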

By comparison, code analysis companies point out very specific implementation issues so developers can choose whether to spend the effort to fix the problem, assuming they agree with the code analysis company’s analysis. The consumer does not know which of the various products, which may or may not have gone through code analysis, has better or worse security hygiene. There are also often non-disclosure agreements signed between the code analysis companies and the product developer that prevent the sharing of information showing which software is more securely designed.

In the case of NSS, they evaluate whether a piece of security software does what it claims. For instance, does intrusion detection product A actually identify the threat signatures it claims to? While this is valuable, it does not tell the consumer how much risk they take on by deploying intrusion detection product A. There have been instances where security software was some of the most insecure software deployed on a network. As a result, we’re now seeing a more visible trend in the exploit market where adversaries actively target the security solutions themselves as their means to compromise systems.
