To Protect Democracy in the Deepfake Era, We Need to Bring in the Voices of Those Defending It at the Frontlines
Artificial intelligence is increasingly used to alter and generate content online. As AI development continues, societies and policymakers need to ensure that the technology incorporates fundamental human rights.

By experts and staff
The Universal Declaration of Human Rights celebrates its 75th anniversary at a time of significant transformation in the field of artificial intelligence, especially in generative AI. The urgency of integrating human rights into the DNA of emerging technologies has never been more pressing. Through my role at WITNESS, an organization leveraging video technology for human rights, I’ve observed firsthand the profound impact of generative AI across societies and, most importantly, on those defending democracy at the frontlines.
The recent elections in Argentina were marked by the widespread use of AI in campaigning material. Generative AI has also been used to target candidates with embarrassing content (increasingly of a sexual nature), to generate political ads, and to support candidates’ campaigns and outreach activities in India, the United States, Poland, Zambia, and Bangladesh (to name a few). Absent strong frameworks for the use of synthetic media in political settings, the overall result has been a climate of mistrust in what we see and hear.
Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or about wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be uninterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.
As two billion people around the world head to polling stations next year in fifty countries, a crucial question arises: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle but a fight to uphold democracy.
From conversations with journalists, activists, technologists, and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.
As we navigate AI’s impact on democracy and human rights, our approach to these challenges should be multifaceted. We must draw on a blend of strategies—ones that address the immediate ‘new’ realities of AI, respond to the ‘old’ but persistent challenges of inequality, and incorporate ‘borrowed’ wisdom from our past experiences.
First, we must ensure that new AI regulations and companies’ policies are steeped in human rights law and principles, such as those enshrined in the Universal Declaration of Human Rights. In the coming years, one of the most important areas of socio-technical expertise will be the ability to translate human rights protections into AI policies and legislation.