After the terrorist attacks in Manchester and London, the United Kingdom and other governments ratcheted up their rhetoric about terrorist exploitation of information and communication technologies (ICT). This week the UK Home Secretary meets with tech company leaders in Silicon Valley about this problem. In response, governmental, private-sector, and civil society actors have promised more action that, this time, will be effective. This pattern has become all too familiar, and its reappearance in 2017 underscores longstanding questions about strategies against terrorist exploitation of cyberspace. More bluntly, are the latest developments in this area likely to produce better results?
Since emerging in the 1990s, terrorist use of the internet has morphed with the nature of extremism and terrorist activities. The most prominent change of the past year has been the self-declared Islamic State group’s retreat in the face of military operations by armies and irregular forces. As the Islamic State lost territory, cyberspace became more important to it. Developments on real-world battlefields raised the stakes in the virtual world, made progress against terrorism online harder, and intensified controversies among counterterrorism, cybersecurity, and human rights imperatives.
Generally, combating ICT terrorism involves “counter-content” and “counter-narrative” activities. However, American, British, and Australian acknowledgment in 2016 of offensive cyber operations against the Islamic State added a “counter-capability” prong to the strategy. Despite actions across this “counter” triad, the director of the U.S. National Counterterrorism Center argued in May that the Islamic State’s global reach “is largely intact” and that the group continues “to publish thousands of pieces of official propaganda and to use online apps to organize its supporters and inspire attacks.”
Although unprecedented, the U.S. offensive cyber campaign against the Islamic State has proved frustrating. The former director for counterterrorism at the National Security Council observed that U.S. officials were disappointed the campaign did not “land a major blow against ISIS,” was “much harder in practice,” and was not producing “jaw-dropping stuff.” The campaign also generated difficult, unresolved questions concerning the sovereignty of countries through which the Islamic State conducts its cyber activities. In short, counter-capability operations have not proved a “game changer” for combating ICT terrorism.
Government reactions after the latest Islamic State-inspired attacks criticized social media companies for failing to remove terrorist content and proposed regulation. Other companies began suspending social media advertising because ads were appearing next to terrorist content. These reactions dismissed past tech company efforts as inadequate, even though those companies had repeatedly expanded counter-content measures in response to pressure after earlier terrorist attacks.
In another turn of this cycle, leading social media companies created the Global Internet Forum to Counter Terrorism in June. The forum will build on previous counter-content initiatives, such as the Shared Industry Hash Database announced in December 2016. The forum will also focus on artificial intelligence as a counter-content capability, which responds to calls to use machine learning for this purpose and follows Facebook’s announcement earlier in June that it would use artificial intelligence to identify and remove terrorist content.
The embrace of artificial intelligence represents the most important change to emerge from the latest recriminations against social media companies. However, relying on machine learning will exacerbate concerns that expanding counter-content measures harms freedom of expression without helping counterterrorism. As Rebecca MacKinnon argued, counter-content actions produce “collateral damage to free speech rights” when “there is scant evidence that social media crackdowns will actually prevent terror attacks[.]” Similarly, the UN special rapporteur on freedom of expression warned in March that “threats to digital expression and internet freedom are more pronounced than ever,” threats that include companies engaging in online censorship under government pressure.
Recent terrorist attacks have also renewed interest in alternative narratives to those that extremists spread online. The Global Internet Forum to Counter Terrorism will foster knowledge sharing among existing counter-speech efforts. The UN Security Council approved in May a comprehensive international framework to counter terrorist narratives developed by its Counter-Terrorism Committee. In July, YouTube announced an initiative that redirects people searching for extremist videos to “videos debunking violent extremist recruiting narratives.” Doubling down on counter-narrative activities does not, however, overcome sustained doubts about the effectiveness of this approach. As the Counter-Terrorism Committee delicately put it, “[t]here is no doubt that developing effective counter-narratives is challenging.”
In addition, counter-content and counter-narrative campaigns scarcely address the Islamic State's increasing use of encryption and of the dark web and deep web. Political calls for regulating encryption following the Manchester and London attacks reignited intense but unresolved disputes among national security officials, cybersecurity experts, and human rights advocates.
Finally, efforts to counter ICT terrorism face deteriorating cybersecurity conditions around the world. The lack of U.S. leadership, fears about Russian cyber-meddling in elections, global ransomware attacks, the proliferation of government-sponsored hacking operations, and the disintegration of consensus on international law’s application in cyberspace make collective action difficult. In this context, the internet might become the new Raqqa for violent extremism.