Lucas Ashbaugh is an intern with the Digital and Cyberspace Policy program at the Council on Foreign Relations. Alex Grigsby, assistant director of the Digital and Cyberspace Policy program, contributed to this post.
Every quarter, Net Politics publishes Report Watch, which distills the most relevant digital and cyber scholarship to bring you the highlights. In this edition: the effects of internet censorship in China, the malicious uses of artificial intelligence, and U.S. Cyber Command's strategy to achieve domain superiority.
“The Impact of Media Censorship: Evidence from a Field Experiment in China” by Yuyu Chen and David Y. Yang.
Chen and Yang set out to measure the effects of providing an uncensored internet to Chinese students. They gave 1,800 university students free access to a tool that allowed them to circumvent the Great Firewall. Chen and Yang then monitored the students’ behavior to see whether they would access politically sensitive information, and whether that would affect their political views and behavior. A subset of participants was modestly urged to access this information through newsletters and quizzes covering sensitive content from the New York Times.
Chen and Yang found that simply providing participants with privacy and circumvention tools did not significantly change their browsing behavior, largely because the Chinese government “fosters an environment in which citizens do not demand such information in the first place.” However, when prodded by newsletters and quizzes, participants were dramatically more likely to access politically sensitive information, spending 435 percent more time on foreign sites. Further, not only did these students change their views and behaviors, they were more likely to spread their knowledge to their peers.
Additional research would be necessary to determine whether Chen and Yang’s findings are unique to China or whether their conclusions apply to other countries with strong internet censorship, like Ethiopia, Saudi Arabia, and Uzbekistan.
"The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" by Miles Brundage et. al.
A group of experts from the University of Oxford, the University of Cambridge, the Center for a New American Security, the Electronic Frontier Foundation, and others, led by Miles Brundage, explores the malicious uses of artificial intelligence (AI) and machine learning (ML), and what can be done to mitigate them. The authors identify three buckets of threats.
First, they argue that AI and ML will likely make it cheaper for malicious actors to "complete tasks that would ordinarily require human labor, intelligence, and expertise." For example, if a military wanted to conduct a cyberattack or use a lethal autonomous weapon, AI could help identify targets or exploit vulnerabilities at a much faster rate than humans. Similarly, if a troll farm wanted to disseminate political messages online, AI tools could help create those messages without the need for a large staff with language training or extensive in-country expertise.
Second, AI will introduce new threats, largely because it could be used to carry out attacks that humans previously deemed unfeasible. Successfully forging someone's voice or image has traditionally been difficult, but new technologies are now making that possible (e.g., deepfakes). Absent new means of authentication, it may be impossible to determine what is genuine and what is fake. Another example of a new threat the authors envision is malware that can "think" for itself without requiring human commands, which could be useful for compromising air-gapped networks.
Third, the authors argue that the nature of threats will change thanks to AI's ability to scale, in turn making attacks more effective, finely targeted, and difficult to attribute. For example, successful spear phishing emails require some form of reconnaissance where an attacker identifies the target's social network and the sort of content he or she is most likely to click on. AI could help automate this process, allowing an attacker to conduct highly targeted attacks, at scale.
To mitigate these risks, the authors propose four recommendations: close collaboration between policymakers and researchers to prevent the malicious use of AI; having AI researchers consider the dual-use nature of their work and raise the alarm when "harmful applications are foreseeable"; using best practices from other fields with dual-use applications, like computer security, for AI research; and including more actors in the discussion.
The Dialogue Continues Over Shaping the U.S. Cybersecurity Strategy
"United States Cyber Command’s New Vision: What It Entails and Why It Matters" by Richard Harknett and "Triggering the New Forever War in Cyberspace" by Jason Healey
United States Cyber Command recently released a strategy to “Achieve and Maintain Cyberspace Superiority,” sparking a debate among cyber policy experts about offensive operations, deterrence, and strategic stability. The strategy recognizes that the majority of cyber operations purposefully remain below the threshold of an ‘armed attack’ and describes cyberspace as a domain of persistent contest. Deterrence has failed, and the strategy is skeptical that it will ever work in cyberspace. As a result, U.S. Cyber Command will prioritize offensive activity to contest an adversary’s capability. “Defending forward as close as possible to the origin of adversary activity extends our reach to expose adversaries’ weaknesses, learn their intentions and capabilities, and counter attacks close to their origins,” says the strategy. “Continuous engagement imposes tactical friction and strategic costs on our adversaries, compelling them to shift resources to defense and reduce attacks.”
Writing in Lawfare, Richard Harknett, who was a visiting scholar at Cyber Command and has written in the past about the failure of deterrence in cyberspace, sees the strategy as a “significant evolution in cyber operations and strategic thinking.” Harknett praises the vision for its advocacy of defending forward, matching aggression by blunting adversarial actions before they reach their targets in the United States. He explains that “it is not contradictory to assume that in an environment of constant action it will take counter action to moderate behavior effectively.”
In the Cipher Brief, Jason Healey views Harknett’s endorsement and the new Cyber Command strategy with skepticism. Healey argues that the strategy fails to consider a number of risks, namely that matching adversaries’ aggression with more aggression is inherently escalatory, and that the strategy plays into arguments that the United States is “militarizing cyberspace.” Though he agrees that persistent engagement exists in cyberspace, he contends that accelerating that engagement is bound to lead to mistakes and miscalculations that could escalate conflict. Instead of persistent engagement, Healey supports a focus on cost-effective and scalable defensive solutions, a tactic he refers to as ‘leverage’. In addition to stressing defense, Healey argues that diplomatic efforts to ensure stability in cyberspace—confidence building measures, norms, greater transparency—are preferable to seeking superiority.