The expansion of the digital economy has brought sustained increases in productivity, but also new risks and vulnerabilities. A common concern in such a dynamic, increasingly "smart" environment involves baseline cybersecurity and privacy standards. Exactly what constitutes "reasonable" cybersecurity has long vexed both businesses and policymakers. After all, even some of the most sophisticated operators have fallen victim to cyberattacks. For example, in December 2020, FireEye, a leading cybersecurity firm that serves a who's who of clients around the world, was breached by Russia's Cozy Bear group. Weeks later, analysts determined that the attackers had gained access to the cybersecurity supply chain through the vendor SolarWinds and had compromised nine U.S. government agencies and more than one hundred firms, in addition to FireEye.
More recently, a slew of cyberattacks, including the Log4j exploit and a new round of wiper attacks on Ukrainian critical infrastructure, has showcased how vulnerable systems remain, owing both to an expanding attack surface and to the continued rise of sophisticated nation-state actors, such as Russia, sponsoring cyberattacks.
All these incidents underscore that any organization can be breached, no matter how cutting-edge its array of cybersecurity best practices or how much it spends. Although cyber risk can never be eliminated, it can be better managed by incentivizing, and even requiring, technical and organizational cybersecurity best practices. However, there is little, if any, agreement over what constitutes cybersecurity best practices, especially in the absence of Congressional guidance.
In our recent working paper, we investigate the concept of "reasonable" cybersecurity and present results from a new survey of a representative set of companies in Indiana. While there is no formal definition of "reasonable" cybersecurity in either the judicial system or the marketplace, we review the efforts that states have taken to fill the vacuum left by federal policymakers.
In particular, many states have passed laws encouraging or requiring companies operating within their jurisdictions to improve cybersecurity practices. For example, some states, like California, have mandated more stringent standards for manufacturers of Internet-connected devices. Other states, like Ohio, have elected instead to provide safe harbors, minimizing liability for companies in the aftermath of a data breach, as long as those companies invest in a pre-determined list of recognized cybersecurity standards and frameworks, such as the National Institute of Standards and Technology (NIST) Cybersecurity Framework.
Admittedly, all these approaches, along with others beyond the ones we have noted, have their own costs and benefits that merit deeper analysis. In our paper, we argue that both the public and private sectors would benefit from a standard of care that sets a minimum threshold for "reasonable" cybersecurity practices, allowing organizations and states to exceed it if they so choose.
We also present results from a survey of 336 companies that we fielded in partnership with the Indiana Executive Cybersecurity Council and the Indiana Business Research Center. We find that 66 percent of respondents work at an organization in a critical infrastructure area. Furthermore, 44 percent are from organizations with 10 or fewer employees, 15 percent with 11-50 employees, 21 percent with 51-250 employees, and 20 percent with more than 250 employees. Because we oversample organizations with 51 or more employees, which tend to have stronger practices, our results likely overestimate adherence to cybersecurity best practices.
Interestingly, respondents articulate significant concern about cyberattacks: on a scale from 0 (very unlikely) to 100 (very likely), the average respondent estimates the likelihood of a cyber incident at 50 and the expected harm from such an incident at 59. To contextualize the expected harm from a cyberattack, we also asked respondents to rank the likelihood that their organization would face other types of challenges (such as a fire, a workplace injury lawsuit, or insider theft), and the harm they would face from each. We find that the expected harm from a cyberattack is roughly comparable to the expected harm from a natural disaster, which the average respondent ranked at 62.
Overall, 82 percent of respondents say that their organization has taken steps to prevent cyber incidents, but this varies by size: 94 percent of respondents from medium and large organizations (more than 50 employees) report that their organization took prevention steps, compared with 74 percent of respondents from small organizations (50 or fewer employees).
A similar pattern can be seen in planning to mitigate the effects of a cyber incident should one occur: 87 percent of respondents from medium and large organizations report cyber mitigation planning, compared with 56 percent of respondents from small organizations.
When asked about their organization's cybersecurity practices, respondents most frequently indicated that their organization engaged in automated updating, remote backups, and multi-factor authentication. Although externally developed cybersecurity decision-making frameworks, particularly the NIST Cybersecurity Framework, have been described as having the potential to set standards for cybersecurity practices, only 30 percent of respondents indicated that their organization used such a framework. About half of respondents said that their organization had cyber risk insurance, with coverage concentrated among medium and large organizations (70 percent) rather than small ones (36 percent).
This persistent variation in cybersecurity practices between small and medium/large organizations suggests that policymakers should define reasonableness in the context of an organization's size and sophistication, while setting certain universal best practices and guidelines that all organizations must follow at a minimum.
Scott Shackelford is the Chair of Indiana University's Cybersecurity Risk Management Program, Executive Director of the Ostrom Workshop, an Associate Professor at Indiana University, and a former term member of the Council on Foreign Relations. He holds a Ph.D. in politics and international studies from the University of Cambridge and a J.D. from Stanford Law School.
Annie Boustead is an assistant professor at the School of Government & Public Policy at the University of Arizona, where her research focuses on surveillance, cybersecurity behavior, and drug policy. She holds a Ph.D. in policy analysis from the Pardee RAND Graduate School and a J.D. from Fordham University School of Law.
Christos A. Makridis is a research affiliate at Columbia Business School’s Chazen Institute and Stanford University’s Digital Economy Lab. He holds doctorates in economics and management science & engineering from Stanford University.