Unpacking France’s “Mission Civilisatrice” To Tame Disinformation on Facebook
Nahema Marchal is a doctoral student at the Oxford Internet Institute and a researcher at the Project on Computational Propaganda. You can follow her @nahema_marchal
“I profoundly believe that we must regulate,” French President Emmanuel Macron told the annual session of the Internet Governance Forum in Paris last week, appropriately titled “The Internet of Trust.” “This is the sine qua non condition for a free, open and secure Internet, as envisioned by its founding fathers.”
What might have sounded like an oxymoron to most people was, in reality, a simple statement of fact: the era of social media’s voluntary self-regulation is over. Macron’s remarks followed the announcement of a new, “experimental” partnership between the French government and Facebook. Under the cooperation, slated to take place in the first half of 2019, a delegation of French investigators will work closely with the social network to monitor how it tackles online hate. It is the first time the normally wary technology company has opened its doors to regulators in such a way.
This unprecedented move marks Facebook’s latest effort to stave off criticism of how it combats abuse and the spread of hate and disinformation on its platform. For the social media giant, it also signals a tactical shift away from touting self-regulation as the only way to police the industry and toward fighting for a seat at the regulatory table.
In recent years, European lawmakers have ramped up legislative efforts to rein in big tech. France is not only embedding itself inside social media companies but is also pursuing new “fake news” laws that would have courts rule on the accuracy of media reports during elections. If passed, the legislation would also authorize the French national broadcasting agency to take off the air any foreign TV station suspected of spreading “false information” to alter the course of an election. And on January 1, 2018, Germany’s controversial Network Enforcement Law, or NetzDG, entered into force, requiring sites to remove “illegal” content, including defamation, incitement to violence, and hate speech, within twenty-four hours of it being reported, or face fines of up to fifty million euros.
Since the global outcry over data breaches and foreign election meddling, Facebook and others have taken non-negligible steps toward addressing these issues, including shutting down accounts linked to Kremlin-backed trolls and increasing transparency around digital advertising. Research shows that some of these initiatives have paid off in stemming the wave of nefarious content on the social network. In September, an alliance of tech and advertising companies that includes Google, Facebook, and Mozilla also pledged to abide by a code of conduct and work together to fight online disinformation. The group committed, for instance, to restricting advertising services to legitimate individuals and parties, prioritizing truthful information in feeds and search results, and demoting false content.
But these strategies are far from foolproof. The Oxford Internet Institute recently found that 25 percent of content shared on Twitter ahead of the U.S. midterm elections came from “junk news” sites (polarizing, misleading, or conspiratorial sources that try to pass as professional news), up from 20 percent in 2016. Moreover, larger audiences than before interacted with junk content on Facebook. Rhetoric once embraced only by the more extreme fringes of the political spectrum has seemingly gone mainstream.
So what does this portend for the future? Under mounting pressure from policymakers worldwide, tech companies have been pushed to enforce murky and, at times, inconsistent content moderation policies on their platforms, often at the expense of legitimate political expression. Importantly, in the absence of a broader framework of public oversight of their decision-making processes and practices, social networks have an incentive to remove content in an ad hoc fashion to avoid legal, reputational, or financial consequences. In that sense, a trend toward the French model of “cooperative” regulation, involving formal consultations with government and advocacy groups, should be welcomed.
Still, while social media executives and elected leaders continue to quarrel over carrots and sticks, propagandists and misinformation merchants are refining their tactics with impunity. If awkwardly phrased calls to action and dodgy social media profiles could easily give away a Russian troll a year ago, many of those telltale signs are now obsolete. The viral spread of online propaganda has already moved from text to visuals, a reality that legislators have yet to fully grasp. Digital tricksters aiming to sow discord and manipulate public opinion are also developing ever more sophisticated tools to avoid scrutiny: infiltrating private messaging services like WhatsApp, masquerading as real activists, and purchasing ads in foreign currencies to bypass existing restrictions. This is the state of information warfare today: the virus is spreading much, much faster than the vaccine.
If democratic leaders are serious about curbing online abuse and disinformation, they, too, must adapt.