Since the September revelations that fake accounts linked to Russia bought $150,000 of political ads on Facebook during the 2016 campaign, discoveries of Russian activity on social media have emerged almost daily. The Russian purchases escaped notice amidst over $1.4 billion spent on political ads, much of it to promote paid advocacy posing as independent news or posts by outraged individuals. Disinformation was so prevalent during the campaign that, as Alexis Madrigal points out in The Atlantic, even the Pope talked about fake news.
The internet was supposed to make political ads more, not less, transparent. In fact, the Supreme Court majority invoked the internet in its Citizens United decision removing limits on corporate and union spending on political advertising, writing:
With the advent of the internet, prompt disclosure of expenditures can provide shareholders and citizens with the information needed to hold corporations and elected officials accountable for their positions and supporters … citizens can see whether elected officials are “in the pocket” of so-called moneyed interests.
The Court’s vision of an internet able to inoculate democratic debate against the influx of money was always unrealistic. It was rendered impossible even on TV by the Internal Revenue Service and the Federal Election Commission (FEC) allowing big donors to set up “ghost corporations”—entities that take advantage of a corporate structure permitting donors to remain anonymous.
But in a cruel irony, the internet itself became a means for undermining existing disclosure laws. Rules developed over decades required that all political ads disclose who paid for the ad and that TV stations make information on sponsors publicly available. But the FEC allowed big donors to evade these rules if they advertised on the internet. Candidates and sponsors also started using “dark ads” targeting subsets of voters with anonymous, negative, and false claims—some carrying contradictory messages to different voters—to further evade accountability.
And the unique features of the internet allowed political content to be disguised so that it often isn’t recognizable as paid advocacy at all. TV and radio stations are required to disclose who pays for content. These rules date back to the payola scandals, when music producers paid off disc jockeys to play their artists. But online, front groups could create personas such as @TEN_GOP, a Twitter account pretending to represent Tennessee Republicans that was in fact controlled by Russian operatives. Those seeking to influence political discussion purchase robotic accounts, or “bots,” that join ads in promoting content so that it appears to enjoy organic human support. Sites that pretend to be independent news organizations—but with none of the practices of traditional independent media—run inflammatory advocacy stories geared to provoke likes and shares to rise on the lists of trending stories. A recent Oxford Internet Institute study found that during the 2016 U.S. election, “Twitter users got more misinformation, polarizing and conspiratorial content than professionally produced news.” Twitter disputes these results.
The ease with which Russia exploited these weaknesses leaves no doubt that, at a minimum, more disclosure is needed. Senators Amy Klobuchar, Mark Warner, and John McCain introduced a bill translating TV political advertising rules to the digital realm by requiring tech platforms to include disclaimers in ads identifying the buyer and to prevent foreign nationals from purchasing political ads. Furthermore, the bill would require online platforms to make copies of the ads available to the public as well as disclose their price and target audience. Facebook and Twitter each responded with commitments to voluntarily make changes along these lines. Although these commitments in theory could be enforced by the Federal Trade Commission and state attorneys general, Senator Warner argues that legislation is needed to ensure other companies follow suit.
To make these steps more effective, current law requiring that political advertisers disclose the names of their donors should be updated and enforced. Online platforms and the FEC should require additional, standardized identifying information on expenditures along the lines of what legal scholars Jennifer Heerwig and Katherine Shaw have proposed to enable regulators and watchdogs to aggregate, sort, and search disclosure data. And the various transparency measures should address issue ads—not just ads about candidates—as Twitter has suggested.
Revelations of Russian attempts to sow division, even after the election, highlight the power of fake accounts, pages, and news to drive seemingly organic debate in ways that can harm our national security. Facebook has committed to using machine learning to take down fake accounts and pages. Other platforms will need to expose and take down fake accounts. Facebook also says it will provide more context for news sites and reduce monetization opportunities for fake news. The vice president of fact-checker Snopes, Vinny Green, has suggested that internet companies might explore how better to ensure that sites posing as credible outlets actually follow the editorial standards and principles traditionally employed by reputable journalists and organizations.
Disclosure measures of these kinds are critical, but more inquiry and research are necessary. Thorny issues for discussion include the effect of dark issue ads on democratic debate and how to counter efforts that promote extremism on algorithmically driven platforms.
As with other abusive behavior on the internet, the challenges will not remain static. Well-funded organizations, including state actors, are trying to shape online discourse and will adapt to efforts aimed at curbing their activities. Social media platforms can enlist the help of academics and researchers by providing them with anonymized data to study the effectiveness of countermeasures. Ongoing innovation will be needed to enhance disclosure and democratic debate on the internet.