The TikTok Trap
from Women Around the World and Women and Foreign Policy Program

TikTok is an easy scapegoat, but the lack of tech regulation and data protection is the underlying cause of our collective anxiety in the digital age.
TikTok Chief Executive Shou Zi Chew testifies before a House Energy and Commerce Committee hearing entitled "TikTok: How Congress Can Safeguard American Data Privacy and Protect Children From Online Harms," in Washington, DC, on March 23, 2023. Evelyn Hockstein/REUTERS

This post was originally published on CFR’s Net Politics blog.

In March of this year, the U.S. House of Representatives' Energy and Commerce Committee questioned TikTok CEO Shou Zi Chew in an arduous five-hour hearing. The hearing came amid nationwide panic about the dangers of TikTok for young people and the fear that American data in the hands of a Chinese company could endanger our national security. The tenor of some of the lawmakers' questions aside (e.g., "does TikTok connect to home Wi-Fi?"), the hearing itself captured two important trends surrounding the future of technology and the potential for regulation.

First, there seems to be a pervasive state of anxiety about the future of technology in the United States, and without any promise of widespread, substantive data protections, this anxiety has been channeled into a wider moral panic over the state of "youth." Simultaneously, an obsession with the perceived geopolitical threat of China distracts from, and even undermines, the larger effort to protect our data and privacy.

The response to TikTok has become a prime example of these trends at work. There is no doubt that TikTok poses real risks to young people's mental health, and that the app collects large amounts of personal user data that could threaten national security. However, experts on technology and privacy, including Julia Angwin, whom I hosted for a roundtable recently, have flagged again and again that TikTok is not unique among social media and tech giants in posing these risks. In May, the U.S. Surgeon General issued an advisory warning about the risks of social media use to youth mental health, urging tech companies to better enforce policies for adolescents and encouraging lawmakers to "strengthen protections to ensure greater safety for children interacting with all social media platforms." But absent any substantial response at the national level, TikTok has become a convenient scapegoat: panic surrounding the app has incited a hodgepodge of state, local, and even university-level bans and restrictions that many argue miss the mark in addressing mental health and privacy concerns and could violate First Amendment rights.

In her remarks at CFR—and in a recent piece in the New York Times—Angwin also notes that the focus on TikTok's threat to national security obscures broader data privacy concerns. "Banning TikTok won't keep us safe," Angwin writes. After all, "[i]f China wants to obtain data about U.S. residents, it can still buy it from one of the many unregulated data brokers that sell granular information about all of us." In our conversation, she pointed out that there is far more documented evidence of algorithmic manipulation and amplification on platforms like Facebook, in addition to high-profile examples of employees at U.S. tech companies—such as Google, Twitter, and Microsoft—misusing user data or spying on dissidents and others, the very charges leveled against TikTok. Instead, as she laid out in another recent piece for the Times, Angwin has called for a broader set of reforms, such as "algorithmic choice," in which users play a greater role in curating their social media feeds.

Artificial intelligence's entrance into the public discourse makes clear that we have not learned from these mistakes. While early conversations on the emergent technology focused on its potential risks to jobs and its capacity to spread mis- and disinformation, regulatory efforts have devolved into debates over plagiarism on college campuses, "the death of the college essay," and the "new arms race" between the United States and China. Again, while these impacts are certainly worthy of our concern, and of their own regulatory frameworks, the focus on them sidelines the far-reaching employment and misinformation risks that could affect the larger population.

The ongoing Writers Guild of America (WGA) strike, in which screenwriters' demands have focused in part on protections from the use of generative AI, should be a warning sign of the economic risks of continuing to kick this problem down the road. The strike is already predicted to cost California's economy over $3 billion. More broadly, with a recent Pew Research Center survey finding that nearly one-fifth of U.S. workers have "high-exposure" jobs, it is not difficult to see how, without more regulatory focus on the risks of generative AI to employment, strikes and shutdowns could spread across the labor force.

This all comes as the European Union (EU) takes a markedly different approach to regulation, recently adopting the Digital Services Act (DSA), which is designed to hold internet platforms more accountable for their content and to mitigate "systemic risks." The law requires large platforms to file transparency reports, mandates access for external scrutiny, and restricts certain types of targeted advertising. Several of the designated Very Large Online Platforms (VLOPs), which are subject to additional scrutiny under the rules that took effect August 25, have already struggled in simulated "stress tests" of their compliance. Reportedly, a number of VLOPs, including Facebook and TikTok, received warnings that their DSA compliance policies needed "more work" following these tests.

Whether the United States follows the EU's lead or develops other regulatory approaches, these issues remain on Congress's agenda. In late July, the Senate Commerce Committee advanced both the Children and Teens' Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act amid pushback from civil liberties groups and privacy advocates. It remains unclear whether there is enough momentum for such regulatory efforts to extend to the broader population, but as European Commissioner for Internal Market Thierry Breton argued while in Silicon Valley in July, "Technology has been 'stress testing' our society, it is now time to turn the tables."

Alexandra Dent, research associate at the Council on Foreign Relations, contributed to the development of this blog post.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.