- Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.
Maya Villasenor is an intern in the Digital and Cyberspace Policy program at the Council on Foreign Relations.
Section 230 of the Communications Decency Act, the 1996 U.S. statute that insulates internet companies from liability for most content posted by their users, has been repeatedly criticized by politicians from both sides of the aisle. Citing internet companies’ perceived lack of responsibility and accountability for the spread of known falsehoods, President-elect Biden said in January 2020 that “Section 230 should be revoked, immediately should be revoked, number one.” Republicans, including President Trump, have also attacked the law, often arguing that social media companies, shielded by Section 230’s “Good Samaritan” protections, unfairly silence conservative voices.
Although Section 230 also has bipartisan defenders, it has become clear that new regulation addressing content moderation will be under consideration in the White House and on Capitol Hill in the coming years. Largely lost in the recent dialogue, however, has been the potential impact of such changes on the global internet environment.
While Section 230 is a U.S. statute, much of the world’s social media content is hosted by U.S. companies such as Facebook, Twitter, and Google. The policies, systems, and software that these companies implement in response to any new U.S. regulations regarding content moderation will create a foundation and precedent with implications well beyond the United States.
Under Section 230, U.S. social media companies moderate user content at their discretion (with a few exceptions [PDF]) and to protect their bottom lines. Governments in France, India, Brazil, Saudi Arabia, Turkey, and elsewhere have attempted to saddle social media companies with increased obligations regarding user content that allegedly encourages extremism or threatens national security, but courts and the European Union have thus far avoided requiring sweeping, proactive content moderation.
Amendments to weaken Section 230 would compel Twitter, Facebook, and their peers to hire scores of new content moderators and to develop new automated content moderation tools crafted to accommodate U.S. regulations; those tools would likely be deployed globally. The existence of more active content filtering in the United States would signal to other countries that social media companies are technologically able (and willing) to abide by local content moderation laws.
Foreign governments, especially autocratic regimes, would be emboldened to follow the United States’ lead, though with more onerous demands. In 2014, Saudi Arabia introduced “terrorism” regulations criminalizing broad categories of speech, including on social media networks. The Saudi government has already silenced dissidents through a smattering of one-off approaches, including threats against tech companies, troll farms, arrests, and murder. Even so, Twitter has thus far refused over half of the Saudi government’s information requests and has actively worked to quell propaganda campaigns designed to stifle dissidents. Those efforts would be undermined if the Saudi government could demand that social media companies apply augmented moderation tools, developed for compliance with U.S. regulations, to remove “unlawful” content.
Turkey, another country with a record of attempting to assert state control over social media, could also look to leverage tightened U.S. content moderation laws. In October, Facebook refused to comply with new Turkish legislation aimed at giving the government more control over social media content. Revisions to Section 230 (or, even absent any revisions, more burdensome interpretations of the law) would make it harder for companies to justify future refusals of similar requests.
Although early theories of the internet envisioned a borderless network unconstrained by governments, internet companies have already exhibited their ability to customize services to local rules. The European Union enforces “right to be forgotten” regulations, which prohibit firms from displaying search results that include “inadequate, irrelevant or no longer relevant, or excessive” personal data. Since 2014, Google has been asked to delist more than 3.8 million URLs. Over 46 percent of these requests have been granted, yet Google removes the URLs only for users within the European Union’s twenty-seven-nation bloc, demonstrating that content can be moderated as needed to placate foreign governments. While new Section 230 legislation would require moderation that is far more complex and multidimensional than simply limiting search results, social media companies would—within limits—almost certainly be able to retool their systems to comply.
Another concerning possibility is that the U.S. government could attempt to assert its newfound content moderation authority beyond U.S. borders. Existing carve-outs from Section 230 already allow the Department of Justice to prosecute social media companies for mere knowledge of violations, such as sex trafficking under FOSTA-SESTA, even if the associated content originated abroad and never passed through servers in the United States. Social media companies, aiming to comply with U.S. law, therefore block potential violations globally and are often overly inclusive, barring content that has nothing to do with sex trafficking. A weakened or amended Section 230, tailored to U.S. politicians’ perceptions of social media’s form and function, could further allow the U.S. government to influence content posted in other countries. Foreign governments would in turn expect greater extraterritorial control of online content, a demand the European Union has already made, though with little impact.
Thus, revising Section 230 would have consequences that reach far beyond our borders. It is undeniable that social media companies have not met the challenge of keeping their platforms free of posts that propagate falsehoods and hatred, and it is also fair to question whether a law originally enacted in the mid-1990s—a time before smartphones and social media networks—needs updating. However, in reconsidering Section 230, it is also important to be mindful of the risk of overcorrections and unintended consequences.