Influence Campaigns and Disinformation

  • Media
    A World Under the Influence
    Podcast
    With the rise of social media, influencers around the world have increasingly taken on the role of newscaster without a traditional media organization behind them. Some say it has democratized journalism, but with the rise of misinformation, influencers who capture massive audiences online also run the risk of spreading false or even harmful information. How much have influencers altered the media landscape?
  • Election 2024
    Meddling in U.S. Elections, Florida Reels from Twin Storms, Nobel Peace Laureate to Be Named, and More
    Podcast
    Intelligence officials warn of foreign meddling in the U.S. presidential and congressional elections; Hurricane Milton marks the second straight weather blow to the U.S. southeast; this year’s Nobel Peace Prize winner is announced in Norway; and Slovakian Prime Minister Robert Fico vows to block Ukraine’s bid to join the North Atlantic Treaty Organization (NATO).
  • Russia
    Russian Aggression Beyond Ukraine
    Panelists discuss Russia's broader strategy beyond Ukraine, including efforts to expand the conflict through cyberattacks and arson across Europe, as well as possible election interference in the United States.  
  • Ukraine
    Ukraine Pushes Into Russia, 2024 DNC Begins, Foreign Hacking Targets Trump and Harris, and More
    Podcast
    Ukraine’s surprise incursion into Russia’s Kursk region captures territory and stuns the Kremlin; the Democratic National Convention kicks off in Chicago amid concerns over divisions in the party about support for Israel in its war in the Gaza Strip; U.S. intelligence is on high alert after foreign hacking attempts on both former President Donald Trump’s and Vice President Kamala Harris’s presidential campaigns; and Mexico turns down Ukraine’s request to uphold the International Criminal Court’s arrest warrant for Russian President Vladimir Putin.
  • Russia
    How the U.S. Can Counter Disinformation From Russia and China
    Attempts by Russia, China, and other U.S. adversaries to spread dangerous false narratives need to be countered before they take root. 
  • Cybersecurity
    Russia’s Becoming More Digitally Isolated—and Dependent on China
    Russia's growing focus on domestic digital development and its deepening reliance on Chinese technology have major impacts on human rights and cybersecurity in Russia, while creating new vulnerabilities.
  • Censorship and Freedom of Expression
    The Supreme Court Was Right on Murthy v. Missouri
    In a fight over social media, misinformation, free speech, and the role of government, this ruling isn’t about censorship; it’s about facts.
  • Qatar
    Why All the Criticism of Qatar?
    Qatari-owned "news" media like Al Jazeera and Al-Quds Al-Arabi are not independent news sources, and continue to show deep hostility to the United States in their biased reporting.
  • NATO (North Atlantic Treaty Organization)
    Reverberations From Ukraine
    The war in Ukraine marks a new era of instability in Europe. Countering Russia’s efforts will require a stronger, more coordinated NATO.
  • United States
    Digital Diplomacy's New Dawn—Decoding Foreign Disinformation and Fostering Resilience
    Liz Allen, U.S. Undersecretary of State for Public Diplomacy and Public Affairs, discusses her role in countering disinformation, combating foreign malign influence, and fostering a resilient global information space.
  • Election 2024
    European Tech Law Faces Test to Address Interference, Threats, and Disinformation in 2024 Elections
    The European Union (EU) began implementing the Digital Services Act (DSA) this year, just in time to combat online disinformation and other electoral interference in the dozens of elections taking place across its twenty-seven member countries and in the European Parliament elections running June 6 through June 9. To prepare, the EU conducted a stress test of the DSA’s mechanisms for addressing elections targeted by false and manipulated information, incitement, and attempts to suppress voices. The EU has also opened investigations under the DSA against Meta, TikTok, and X out of concern that they are not doing enough to prevent these scenarios.

The DSA is a landmark piece of legislation not only because it is the most comprehensive regulatory effort to date to address digital threats and affects the roughly 450 million people living in the EU; its implementation will also inform other countries’ efforts to provide a secure and safe internet space. Even without additional legislation, the European law may induce the largest technology companies to voluntarily apply the same standards globally, as was the case with the EU’s General Data Protection Regulation, which led many platforms to routinely seek user permission for data collection and retention.

Tech companies’ responses to the DSA during the EU elections will be watched closely in the United States, where disinformation and electoral interference could roil the already contentious November elections. Despite years of debate, no U.S. guardrails have been implemented. Concerns over government censorship and free speech have stalled dozens of legislative proposals to require tech companies to address various threats in the digital space and risks arising from powerful new artificial intelligence (AI).
The free speech argument overlooks the speech of those who are being doxed, threatened, attacked, and driven out of the public arena by vicious online actors, including women, who are far and away the most frequent targets of these attacks. Legislative action has also been impeded by concerns that overly burdensome regulation will inhibit tech companies amid a worldwide race to gain a competitive edge through generative artificial intelligence and other innovations.

The DSA is a useful model constructed around three principles: due diligence requirements for tech companies, mandated transparency via public reporting of their compliance with those requirements, and the threat of hefty fines to ensure compliance and accountability. The EU market is large enough that, as with the General Data Protection Regulation (GDPR), some tech companies may be incentivized to comply with the law’s provisions even without sanctions. The DSA’s strictest provisions apply to the world’s largest online platforms and search engines (those with more than 45 million users). These companies are required to routinely assess activity on their platforms and services for “systemic risks” involving elections, illegal content, human rights, gender-based violence, protection of minors, and public health and safety.

Companies have delivered initial assessments, which are publicly available, along with information about the actions they have taken to comply. The EU website hosts a massive and growing archive of hundreds of thousands of content moderation decisions made by the companies. In early enforcement actions, the EU has asked Meta and other companies to take down false ads and has sought more information about their safety practices. For example, the EU queried X about its decision to cut its content moderation team by 20 percent since last October.
Reduced content moderation on one of the world’s largest platforms is a serious concern given the number of elections at stake and the aggressive disinformation and interference campaigns by Russia and its proxies, which seek to boost the fortunes of rising right-wing populists, Euroskeptic parties, and pro-Russian and anti-Ukraine candidates, as recently occurred in Slovakia and other Central and Eastern European countries.

Thus far, the EU has not levied fines under the DSA, but the threat alone of stiff penalties of up to 6 percent of gross revenues has led most companies to provide the required information. This trove of information about how tech companies police their own platforms is itself valuable; it enables governments and researchers to gauge the effectiveness of measures employed by companies’ trust and safety divisions, some of which embrace the goal of a safe internet.

The EU law explicitly seeks to guard free speech as well as innovation by companies, but the experience of implementation will inform lingering concerns about free speech, direct government decision-making, and censorship of content, including whether an authoritarian government could use digital policing and firewalls to control its population. Those concerns color the current negotiations at the United Nations over a Global Digital Compact, which is to be announced as part of the Summit of the Future in September.

The essence of the DSA is not to make content decisions directly but to set standards for due diligence and to require companies to demonstrate that they are monitoring and mitigating risks via their own codes of conduct. Voluntary standards may vary, but the sharpest debates revolve around defining what constitutes illegal content. The EU has taken additional measures to harmonize laws on illegal content across its member states, a difficult and contentious undertaking.
The United Kingdom (UK) went through a similar multiyear debate over concerns about curtailing free speech before passing its Online Safety Act late last year. The UK law adopted some features of the DSA, including the due diligence reporting requirement and fines of up to 10 percent of gross revenue. It defines the scope of risks more narrowly than the DSA, although it does criminalize “extreme” pornography and may criminalize the creation of deepfake porn. Enforcement of the UK law awaits finalization of codes of conduct by year’s end.

National laws also differ widely in what they prohibit. Germany’s Network Enforcement Act, passed in 2017, is one of the world’s stiffest hate speech laws, aimed at stemming rising neo-Nazi hate speech. The far-right Alternative für Deutschland party has surged in state elections and exceeded the popularity of the leading Social Democrats in national polls.

The process of making the internet safer is iterative; several countries have revised their laws based on the experience of implementing them as well as evolving circumstances. For example, Germany amended its law in 2021 to stiffen the requirement that companies take down “clearly illegal” content within twenty-four hours. Australia has revised its online safety law twice since its initial passage in 2015, requiring faster takedown of material deemed illegal and greatly expanding the law’s original focus on stopping child sexual abuse and exploitation and terrorist material. Speed of response is a critical factor in countering mis- and disinformation: delayed action by tech companies has allowed material to propagate virally unhindered, as occurred in January when deepfake porn of pop star Taylor Swift, which originated on the notorious 4chan message board, spread to 47 million viewers shortly after it was uploaded.
That highly publicized episode drew attention to the disproportionate targeting of women and girls, as well as minorities, by internet violence, especially women in public life such as politicians, journalists, and human rights activists. The chilling effect on political participation has also been documented. The UK parliament rushed to act on deepfake porn after a number of women candidates were targeted this spring. Growing attention to the magnitude of the effects on women spurred the Biden administration to form a fourteen-country global partnership for action on online harassment and abuse. And last month, the EU concluded years-long negotiations to issue a directive on online gender-based violence and threats, including nonconsensual sharing of intimate images, deepfake porn, and other forms of attack. Member states are required to pass implementing laws within two years.

The 2024 elections will serve as an initial test case for the DSA’s ability to rein in this wide variety of election interference, threats, and disinformation. Given the nascent regulatory architecture and companies’ varied compliance records, further scrutiny and modification will certainly be needed. Big tech companies will be required to provide public after-action reviews of the effectiveness of their measures to label AI-generated content, moderate discourse, identify foreign interference, and meet other guidelines for each country’s elections. These much-needed first steps will help light the way for others.

This publication is part of the Diamonstein-Spielvogel Project on the Future of Democracy.
  • Artificial Intelligence (AI)
    Artificial Intelligence in Journalism
    Amy Webb, Joan Donovan, and Mehtab Khan discuss implications for artificial intelligence in journalism, the risk of its spreading of disinformation and malign influence, and advice for using artificial intelligence in newsrooms with Carla Anne Robbins as part of the 2024 CFR Local Journalists Workshop.