This blog post was coauthored by Catherine Powell, adjunct senior fellow at the Council on Foreign Relations in the Women and Foreign Policy program and the Digital and Cyberspace Policy program. Additionally, she is a visiting scholar at the Center on Human Rights and Global Justice at NYU School of Law, where she also serves as a non-resident fellow at the Reiss Center on Law & Security. The post was also coauthored by Haydn Welch, program coordinator in the Women and Foreign Policy program at the Council on Foreign Relations.
For women in the public eye, cultivating an online presence is often necessary and far too often dangerous. Prominent Indian Muslim journalist Rana Ayyub has faced online harassment, disinformation campaigns, and a slew of rape and death threats for her reporting on far-right Hindu nationalism. In 2018, a deepfake pornographic video that superimposed Ayyub’s face onto another woman’s body went viral; Ayyub was later doxxed. As the online abuse against Ayyub persisted, United Nations experts released a statement earlier this year condemning not only the attacks but also the Indian government’s inaction on the disinformation campaigns and its own legal harassment of Ayyub.
Ayyub is not the only high-profile woman experiencing gendered disinformation campaigns. And the Indian government is not the only government to face criticism for how it addresses—or refuses to address—the vulnerability of women to online abuse. On June 23, I chaired a roundtable to delve into the questions of gendered disinformation, free speech, and power with Dr. Mary Anne Franks, professor of law and Michael R. Klein Distinguished Scholar Chair at the University of Miami School of Law. Dr. Franks is an expert on women’s experiences of online harassment. In our conversation, I explored with her (along with our stellar audience) whether and how the digital space creates new threats of abuse and harassment for women.
Dr. Franks explained that technology aggregates, amplifies, and anonymizes abuse, turning online abuse into a sort of spectacle. Because the tech sector’s business model relies on algorithms to drive engagement, posts that receive more views and comments are elevated—and posts that incite anger certainly drive engagement. This incentive structure has been criticized by various governments, and the U.S. government is no exception.
On June 16, the White House Gender Policy Council launched a task force to address online harassment and abuse, which Dr. Franks has been advising (and discussed in a recent interview on CNN). The interagency task force, which consists of a number of cabinet secretaries and other officials in the executive branch, has three main goals, as identified by Dr. Franks: to gather data and measure impact; to learn from and support existing efforts combating online abuse; and to identify gaps in policy, legislation, and funding. Dr. Franks also said that the task force will track the link between online misogyny and offline violent extremism, including mass shootings.
In addition to the White House, Dr. Franks contends that Congress also has a responsibility to do more to protect women and minority communities from online harassment and abuse. Potential reform to Section 230 of the Communications Decency Act, passed in 1996, is a hotly contested issue that Dr. Franks says is necessary to combat online abuse. In essence, and subject to a few exceptions, Section 230 says that tech companies cannot be held liable for content posted on their platforms by third parties. Joining in from the audience, Rebecca MacKinnon, vice president of global advocacy at Wikimedia, noted that Section 230 also protects tech companies from lawsuits when the companies remove third-party content containing, for example, hate speech, disinformation, and harassment.
Dr. Franks observed that while many claim a growing consensus across the political spectrum concerning greater regulation of the tech sector, this consensus is actually illusory. Dr. Franks said that broadly speaking, progressives and conservatives are interested in very different kinds of Section 230 reform. Progressives tend to focus on the part of Section 230 that grants tech companies immunity for third-party content they leave up. Conservatives, by contrast, tend to focus on the part of Section 230 that provides procedural protections against liability for third-party content that tech companies take down. Allowing people to sue tech companies for removing content from tech platforms would inhibit tech companies’ content moderation (which some conservatives claim, without evidence, is disproportionately targeted at their ideological views). Dr. Franks said that efforts to hold companies liable for the content they remove are suspect and likely violate the First Amendment, as tech companies themselves have a right to both freedom of speech and freedom of association. On the other hand, according to Dr. Franks, efforts to hold companies liable for the content they leave up are designed to protect people like Rana Ayyub and other targets of extreme online abuse.
To reform Section 230, Dr. Franks has advocated for two major changes. First, where Section 230 says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” Dr. Franks has called for the word “information” to be replaced with “speech.” This change would expressly limit tech companies’ immunity to the expressive content provided by third parties, not the entire universe of third-party behavior. Second, Dr. Franks has called for introducing a “deliberate indifference” standard to Section 230, which would allow tech companies to be held liable for injurious third-party content on their platforms if they had knowledge of the content, the harm was foreseeable and preventable, and the companies still chose to do nothing.
Section 230 reform raises important questions relating to the freedom of speech. A common response to efforts addressing online harassment and abuse is that these efforts amount to censorship. From Dr. Franks’s perspective, this view fundamentally misunderstands free speech. Dr. Franks said that critics who characterize Section 230 reform as government censorship prioritize the speech of abusers and harassers over the speech of women and minorities. When harassment and abuse targeting women and minorities are allowed to flourish online, the speech of the targeted individuals is drowned out, Dr. Franks said. She added that those targeted by online harassment campaigns are often forced to leave the platform or otherwise withdraw from public discourse. In other words, their speech is suppressed. In contrast to censorship tactics such as book bans or prohibitions against the teaching of certain subjects, allowing tech companies to be held responsible for online abuse expands opportunities for the speech of women, minorities, and other targeted groups.
From an international perspective, Dr. Franks argued that Americans should be willing to take lessons from other countries regarding how they govern free speech. Rebecca MacKinnon also discussed how American law governing free speech differs from global standards. The Universal Declaration of Human Rights, for example, contains a more nuanced understanding of free speech, MacKinnon said. MacKinnon also pointed out that European law requires large corporations, including tech companies, to conduct human rights due diligence.
In fact, tech experts working with at least some U.S.-based tech companies are looking to international standards. For example, the Facebook Oversight Board uses international human rights law to guide its decisions, since Facebook’s operations are global. However, the power and independence of the Facebook Oversight Board relative to Facebook, the company, remain up for debate. So far, it is unclear whether self-enforcement by tech companies is sufficient to address the problem of online harassment and abuse.