Germany Wants Greater Algorithmic Transparency to Fight Disinformation, But Its Approach Is Half-Baked
Alexander Pirang is a non-resident Fellow with the Global Public Policy Institute (GPPi) in Berlin.
Europe, with its zealous regulators at both the national and EU levels, has proven to be difficult terrain for U.S. tech companies. This trend will continue when the European Union's General Data Protection Regulation (GDPR) comes into effect in May. The GDPR, among other provisions aimed at improving Europe-wide data protection, subjects social media platforms to stricter rules on obtaining user consent before processing personal data.
In light of the Cambridge Analytica scandal engulfing Facebook, the GDPR comes at an opportune time, and more regulatory action will undoubtedly be required to protect democracies from fake news and disinformation.
The German government's latest proposal to curb online disinformation, however, is likely to run into significant hurdles. After it was reported that Cambridge Analytica had harvested the Facebook profiles of tens of millions of users for "psychographic profiling," Germany's Justice Minister Katarina Barley set her sights on Facebook's most closely guarded secret: "we also need more transparency regarding the algorithm," she said after a meeting with senior Facebook officials on March 26, echoing a statement Chancellor Angela Merkel made in 2016. Last week, Barley added that users "need to know how they are being profiled" based on their data, suggesting that Facebook should open up to European Union regulators.
Unsurprisingly, Facebook's reaction to these demands was unenthusiastic. The company only promised to "consider the matter" and to "continue the conversation." This back-and-forth recalls earlier meetings between the German government and Facebook. Barley's predecessor Heiko Maas, now head of Germany's Foreign Office, held similar meetings with major social media companies on the issue of online hate speech. When it became clear that his calls for stricter compliance in removing illegal content had gone largely unanswered, the Justice Ministry responded last year by rushing the Network Enforcement Act (NetzDG) through parliament, a bill that has been sharply criticized as ill-conceived and hasty.
Now, the German government and Facebook risk repeating past mistakes.
While German politicians have reason to be concerned about the secrecy of algorithmic filters and the creation of filter bubbles harmful to democracy, political chest-puffing to placate a domestic audience risks squandering a window of opportunity. Going after Facebook's trade secrets will likely be met with fierce resistance in the courts and result in a protracted legal fight. Political capital would be better spent on more feasible fixes, such as mandatory labels for political ads and disclosure rules for the financial backers behind ad campaigns.
Moreover, forcing intermediaries like Facebook to become more transparent about the inner workings of their black boxes does not directly tackle the challenges associated with targeted political advertising and disinformation campaigns. On the contrary, it might even do more harm than good.
First, it is questionable whether complete algorithmic transparency would actually make Facebook users better informed about why certain information appears in their news feeds. Facebook already offers some high-level descriptions of how its ranking works. Revealing the actual criteria on which automated decisions are based, along with their relative weighting, would likely be too technical to make sense to the average user. The use of AI and machine learning further complicates matters, since the algorithms responsible for personalized content are constantly in flux.
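To see why, consider a deliberately simplified sketch of what a disclosed ranking formula might look like. The feature names and weights below are illustrative assumptions, not Facebook's actual criteria; real systems rely on thousands of machine-learned features rather than a handful of hand-set weights.

```python
# Hypothetical feed-ranking score: a weighted sum of normalized features.
# All names and weights are invented for illustration only.

WEIGHTS = {
    "affinity_to_author": 0.31,    # how often the user interacts with the poster
    "predicted_engagement": 0.42,  # a model's estimate that the user will react
    "content_recency": 0.18,       # newer posts score higher
    "media_richness": 0.09,        # photos/videos vs. plain text
}

def rank_score(features: dict) -> float:
    """Return the weighted sum of feature values (each assumed in [0, 1])."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

post = {
    "affinity_to_author": 0.8,
    "predicted_engagement": 0.55,
    "content_recency": 0.9,
    "media_richness": 1.0,
}
print(round(rank_score(post), 3))  # prints 0.731
```

Even in this toy form, a table of weights says little to a lay reader about why a particular post appeared; in a production system where the features and weights are continuously relearned, raw disclosure would be even less illuminating.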
Second, there is a risk that increased transparency might even be conducive to the spread of disinformation. Learning more about the algorithm's inner workings would allow malevolent actors to fine-tune their already sophisticated tools and target their audiences ever more precisely. A similar dynamic can be seen in search engine optimization: an entire industry has cropped up since Google first appeared, in which marketers try to game Google's search algorithm to increase the chances that their content appears first in search results.
Although forcing Facebook to open up its black box might not be as successful as German politicians would like, the company needs to be more forthcoming about the worrisome implications of its business model for the public square. The full-page ads Mark Zuckerberg took out in German newspapers last week, pledging to better protect user information, rang hollow after similar symbolic overtures in the past. The ruthless way Facebook tracks user data across the internet dispels any doubt that the company is primarily running a business, not a social club or community organization.
If Mark Zuckerberg really wants to make good on his promise to “build a better service in the long-term,” then Facebook should aid the academic community in closing important research gaps on the effects of exposure to online disinformation. According to one expert report prepared for the European Commission, closing these gaps will require “improved access to data to a wide range of legitimate third parties, ideally independently governed, while complying with privacy and data protection requirements.”
More research would allow lawmakers to craft necessary policies on an informed basis. It would also provide insight into how Facebook's algorithms work, holding the company accountable without the confusion and potential for abuse that full public transparency would bring. Such research could also strengthen the argument that users should be given more autonomy to choose the content they are exposed to on social media.
Absent better research on the effects of disinformation and the role of social media platforms, Facebook and other intermediaries risk heavy-handed regulation that will do little to fix the attention economy’s problems.