- Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.
Maya Villasenor is a former Digital and Cyberspace Policy program intern at the Council on Foreign Relations and an engineering student at Columbia University.
2020 was a plague year, and not only from an epidemiological standpoint: disinformation went mainstream, and conspiracy theories that previously would have been relegated to dark corners of the internet gained traction with unimaginable speed and tenacity.
During a fraught election cycle, dubious theories of fraud sought to undermine the legitimacy of the vote and exposed cracks in the foundations of our democracy. Inspired in part by President Trump’s initial reluctance to acknowledge COVID-19, a worrying fraction of internet users believes that the pandemic does not exist. Some allege that masks are deadly, and U.S. leaders perpetuated the falsehood that hydroxychloroquine could both prevent and treat COVID-19. Multiple studies have since linked COVID-19 misinformation to increased rates of illness and death.
QAnon and its sub-theories no longer remain at the peripheries of the internet. Catalyzed in part by a pandemic that drove the world online, QAnon communities that likely include millions of Americans have emerged on mainstream platforms such as Reddit, YouTube, Instagram, Facebook, and Twitter. Believers frequented presidential campaign rallies and protests sporting QAnon flags, hats, and shirts. Recognizing the growing real-world dangers of disinformation and misinformation disseminated by QAnon supporters, the FBI warned in late May 2019 that QAnon constituted a domestic terrorism threat.
Nonetheless, two newly elected congresswomen, Marjorie Taylor Greene (R-GA) and Lauren Boebert (R-CO), have endorsed QAnon, and myriad other politicians have echoed QAnon rhetoric without mentioning the group by name. President Trump and his allies have also repeatedly flirted with the conspiracy, and during a December meeting about keeping control of the Senate, President Trump reportedly described QAnon followers as people who “basically believe in good government.”
QAnon has now spread far beyond American borders. As of August, QAnon maintained a presence in at least seventy-one countries. In Germany, QAnon has been adopted by anti-vaxxers, anti-lockdown protesters, and the extreme-right Reichsbürger movement known for espousing antisemitic conspiracy theories. Supporters in the United Kingdom, often hailing from right-wing groups, have asserted that COVID-19 is part of a UN depopulation plan or “new world order.” Frustrated by strict lockdowns, members of the French Yellow Vest and Italian anti-vaxx communities have fueled local conspiracy theories and anti-government views with borrowed QAnon rhetoric. Unlike in the United States, however, no major European politicians have thus far endorsed the conspiracy.
Nonetheless, disinformation and misinformation in Europe are thriving. In addition to Russian campaigns aiming to sow anti-refugee sentiment and undermine NATO, state-controlled, pro-government media in Hungary has aligned with Russia’s troll farms and fake news factories to disseminate disinformation. The European Union has repeatedly acknowledged shortcomings in reining in disinformation, and in addition to creating a (rather ineffective) Code of Practice for online platforms such as TikTok and Google, recently announced anti-disinformation provisions as part of the Digital Services Act and the European Democracy Action Plan.
Before the rise of social media, the distribution of information was highly asymmetrical, giving a small number of government officials and (often government-controlled) media entities enormous control over public discourse. Early hopes that social media would herald an era of free and fair information have proven misplaced. Filter bubbles (which are quickly morphing into filter networks) create and bolster political, social, and cultural divisions, and studies have repeatedly demonstrated that falsehoods often spread much more quickly than truths.
Social media companies have promised to flag disinformation and misinformation with warning banners, but enforcement has proven complicated and elusive. Many consider the warnings to be too little, too late. Addressing the highly contested labels that accompanied dozens of President Trump’s tweets regarding the election, Arisha Hatch, vice president of the Color Of Change civil rights group, noted, “the addition of disclaimers to several of Trump’s posts on Facebook and Twitter is the lowest bar possible for these companies.” More stringent actions, such as the limits and bans facing QAnon and its subgroups, foundered due to a combination of poor enforcement and a growing portfolio of strategies to dodge content moderators.
The past year has taught us that disinformation will be one of the most significant and pervasive challenges of the digital era. Perhaps a critical error was our faith in the willingness and capacity of social media companies to mitigate, as opposed to exacerbate, disinformation. Social media platforms, whose algorithms are largely responsible for circulating, promoting, and amplifying falsehoods, derive profits from engagement and have little incentive to curb potentially viral and controversial content. Among the critical questions to answer in 2021 and beyond: Should social media simply amplify discourse, or actively shape it?