This post was originally published on CFR’s Net Politics blog.
In April 2023, following much speculation, President Biden officially launched his re-election campaign via video announcement. On the very same day, the Republican National Committee (RNC) responded with its own thirty-second advertisement, which envisioned four more years under President Biden with greater crime, open borders, war with China, and economic collapse. At first glance it seems like a run-of-the-mill political attack ad, but it is in fact the first national campaign advertisement composed entirely of images generated by artificial intelligence (AI). And while the RNC has been transparent about its use of AI, it has nonetheless dragged the electorate into a new era of political advertising, with few guardrails and serious potential implications for mis- and disinformation.
In their 2018 Foreign Affairs article, “Deepfakes and the New Disinformation War,” Robert Chesney and Danielle Citron predicted that the “information cascade” of social media, declining trust in traditional media, and the increasing believability of deepfakes would create a perfect storm for spreading mis- and disinformation. Their forecasts have already begun to play out. In January, a deepfake video circulated on Twitter that appeared to show President Biden announcing that he had re-introduced the draft and would be sending Americans to fight in Ukraine. The clip initially displayed a caption describing it as an AI “imagination,” but it quickly lost the disclaimer through circulation, showing just how easily even transparently disclosed AI use can turn into misinformation.
Though Chesney and Citron focused on the geopolitical threats of deepfakes and large language models (in the hands of Russia or terrorist organizations), it is not difficult to imagine how these same elements might go off the rails in political advertising. Even without AI-generated imagery, there has been something of a race to the bottom to produce the most provocative campaign ads. Nor is this the first use of digitally enhanced images in campaign ads. In 2015, researchers found that the McCain campaign had used images of then-candidate Barack Obama in attack ads that “appear to have been manipulated and/or selected in a way that produces a darker complexion for Obama.”
As we have discussed in previous articles, these emerging technologies are likely to be used most effectively against vulnerable populations, such as women, people of color, and members of the LGBTQI+ community running for office. In a study of the 2020 congressional election cycle, the Center for Democracy and Technology found that women of color candidates were twice as likely to be targets of online mis- and disinformation campaigns. In India, deepfake technology has been weaponized against female politicians and journalists, many of whom report that their photos have been placed onto pornographic images and videos and circulated on the internet. AI-generated images and deepfakes in political advertisements could easily be used to sexualize female politicians, opinion makers, and other leaders, which research has shown can undermine women's credibility in campaigns.
There is also the risk of what Citron and Chesney call the “liar's dividend.” Increasingly realistic fake video, audio, and photos could allow politicians to avert accountability for any problematic soundbite or clip by claiming that it should have been obvious to viewers all along that such material was AI-generated or a deepfake. In an era in which politicians can already evade accountability due to negative partisanship, the addition of the liar's dividend could provide the ultimate “get out of jail free” card.
Social media platforms have begun to roll out new policies to address AI-generated content and deepfakes but have struggled to integrate these rules with existing policies on political content. Meta has banned deepfakes on its platforms yet remains steadfast in its policy of not fact-checking politicians. TikTok has banned deepfakes of all private figures, but it only bans deepfakes of public figures that endorse products or violate the app's other terms (such as promoting hate speech); deepfakes of public figures created for “artistic or educational content,” though, are permitted.
In response to the RNC ad, Representative Yvette Clarke of New York introduced the “REAL Political Advertisements Act,” which would require disclosures for any use of AI-generated content in political advertisements. For its part, the Biden administration hosted tech CEOs at the White House earlier this month and released an action plan to “promote responsible AI innovation.” Last week, the Senate Judiciary Privacy, Technology, and the Law Subcommittee held a hearing on potential oversight of AI technology. Many have lamented that the government has not done more to regulate the potential threats of AI more broadly, but with another election cycle already beginning and AI making its foray into politicians' own backyard, the moment could light a necessary fire.
Alexandra Dent, research associate at the Council on Foreign Relations, contributed to the development of this blog post.