- Blog Post
- Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.
It’s a great time to be a politician, and an even better one to be an engineer who specializes in machine learning and works for one.
Throughout history, candidates for office have had limited tools to hone their messages and secure votes (outright bribery aside). Anecdotal evidence is not representative, and surveys are imperfect. More often than not, politicians relied on instinct rather than insight when campaigning.
Today, campaigns have the ability to comb large swaths of data to micro-target specific categories of voters, and develop messaging that will resonate most with them. These efforts began in earnest with the 2008 U.S. presidential election and were honed in 2012 and 2016.
What is good for a politician and their engineers, however, is not necessarily good for a democracy. There is evidence to suggest that artificial intelligence-powered technologies have been systematically misused to manipulate citizens in recent elections.
One example of such manipulation is the use of political bots to spread right-wing propaganda and fake news on social media. Bots are autonomous accounts programmed to aggressively spread one-sided political messages to manufacture the illusion of public support. This increasingly widespread tactic attempts to shape public discourse and distort political sentiment.
Typically disguised as ordinary human accounts, bots have been responsible for spreading misinformation and contributing to an acrimonious political climate on sites like Twitter, Facebook, and Reddit. They are very effective at attacking voters from the opposing camp and even discouraging them from going to the voting booth.
For example, pro-Trump bots regularly infiltrated the online spaces used by pro-Clinton campaigners to spread highly automated content, generating one quarter of Twitter traffic about the 2016 election.
Bots were also largely responsible for popularizing #MacronLeaks on social media just days before the French presidential election. They swarmed Facebook and Twitter with leaked information that was mixed with falsified reports, to build a narrative that Emmanuel Macron was a fraud and hypocrite—a common tactic used by bots to push trending topics and dominate social feeds while giving the impression that the messages promoted are from genuine people.
In addition to shaping online debate, AI can also be used to target and manipulate individual voters: during the U.S. presidential election, the data science firm Cambridge Analytica rolled out an extensive advertising campaign that targeted persuadable voters based on their individual psychology. This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people’s emotions.
The problem with using AI in political campaigns is not the technology itself, but rather the covert nature of its use and the targeted messages that prey on individuals’ psychological vulnerabilities. Different voters received different messages based on predictions about their susceptibility to different arguments. The paranoid received ads with messages that were mostly fear-based. People with a conservative predisposition received ads with arguments based on tradition and community.
The micro-targeting was possible thanks to voter data available from social media and data miners, which can often include lifestyle attributes, consumption patterns, and social relationships. Every click online generates signals that can be readily accessed and analyzed to build unique behavioral and psychographic profiles.
A presidential candidate with flexible campaign promises like Donald Trump was, of course, particularly well-suited for this tactic. Every voter could receive a tailored message that emphasized a different side of the argument. There was a different Trump for every voter—the Trump campaign just needed to find the right emotional triggers for each person to drive them to action.
This is a disquieting trend. A representative democracy depends on free and fair elections in which citizens can vote their conscience, free of manipulation. Yet AI and related technologies threaten to undermine fair elections if they continue to be methodically used to manipulate voters and promote extreme narratives.
All is not lost. AI itself is not harmful. The same algorithmic tools used to mislead, misinform, and confuse can be re-purposed to support democracy and increase civic engagement. An ethical approach to AI can work to inform and serve an electorate. New AI startups like Factmata and Avantgarde Analytics are already providing these technological solutions. For example, political bots can be programmed to spread information debunking known falsehoods, like the infamous WTOE 5 News article that falsely claimed the pope had endorsed Donald Trump. Similarly, micro-targeting campaigns can educate voters on a variety of political issues to help them make up their own minds. Most importantly, AI can be used to listen to the electorate and ensure that elected representatives hear their concerns.
The use of AI techniques in politics is not going away anytime soon—it is simply too valuable to politicians and their campaigns. However, they should commit to using AI as ethically and judiciously as possible to ensure that their attempts to sway voters do not undermine democracy as a whole.