AI in Context: Indonesian Elections Challenge GenAI Policies
This week (and throughout this year), tech companies will announce efforts and partnerships to support democratic electoral processes, “responsible” AI, and information integrity, among other laudable goals. These announcements demonstrate internal, high-level commitments, and such commitments are valuable. Generally speaking, we should commend companies that have the courage to declare their values, and the wisdom to invest in efforts that will improve their long-term value by mitigating the societal risks their products could create in the near term. We should also remain vigilant, and in this—the “year of elections” and the “year of AI”—consistently question whether companies that have publicly committed to a global policy can in fact enforce that policy equitably across the globe.
Historically, that has not been the case, and we need only look to Indonesia to see how this can play out. The world’s third-largest democracy will go to the polls within hours of this post, on February 14. In the United States, we’ll be celebrating Valentine’s Day with cute and cuddly teddy bears. In Indonesia, we’ll see a different celebration of the cute and cuddly: the gemoy (“adorable”) rebranding of Prabowo Subianto, the seventy-two-year-old Defense Minister and alleged human rights abuser, as a “cuddly grandpa” whose charming AI avatar may help him dance his way to victory this week, after two previous presidential campaigns failed.
The avatar is just one product of the many generative AI tools, powered by U.S. companies, used throughout the Indonesian election. The affable figure was created using the well-known AI image generator Midjourney. Meanwhile, nine senior campaign staffers in Indonesia told Reuters that “[m]any of the AI tools used in Indonesia's election are powered by OpenAI… That includes Prabowo's platform, according to his digital team's coordinator.” Those tools are being used “to create campaign art, track social media sentiment, build interactive chatbots, and target voters.”
Midjourney bans the use of its tools for political campaigning. Specifically, its terms of service state that “[y]ou may not use the [Midjourney] Services to generate images for political campaigns, or to try to influence the outcome of an election.” OpenAI announced a similar ban in January, stating, “We’re still working to understand how effective our tools might be for personalized persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying.”
This leads to a question that has plagued digital platforms for decades and will increasingly challenge AI companies. Which is better: to establish a policy you can’t equitably or meaningfully enforce? Or to avoid establishing a policy—and the expectations that come with it—because you know you can’t implement it? Historically, many companies have landed in the middle, which is where—fittingly—Midjourney seems to have found itself in Indonesia. From a policy perspective, Midjourney is, in fact, mid-journey. So too, it would seem, is OpenAI.
A policy like Midjourney’s—which prohibits the use of its tool “to influence the outcome of an election”—doesn’t establish a clear and enforceable rule so much as claim an organizational value. It reflects the beginning of an endless journey—one that will require resource allocations, an evolving lexicon, enforcement tools and protocols, shifts in product designs, organizational restructures, new hires, communications strategies, and a legion of other signposts along the way.
Midjourney is not unique in this regard. No company at this moment can truly prevent its AI tools from being used in contravention of stated policies. In part, that’s because no company can foresee how millions of humans—be they creative, hilarious, nefarious, or power-hungry—will harness the new opportunities AI tools create. In addition, these companies need to build sustainable revenue models, and their stated intentions may not align with their market fit or investors’ priorities. Finally, no digital platform has ever demonstrated the capacity to enforce its policies equitably across the globe.
Let’s start with the very messy reality of humans humaning at scale, especially with generative AI tools in a moment as sensitive as an election. Is it problematic to make a cartoon version of a political candidate using an inexpensive American technology platform, as Prabowo’s campaign did? Many would argue it’s not. Would it seem more problematic if the AI avatar wasn’t a cartoon, but looked and sounded exactly like a political candidate, as Imran Khan’s campaign produced in Pakistan? Might your answer depend on whether the AI use was disclosed? Or on who was creating and disseminating the avatar? Or why they were doing it? I could poll ten people in my friend group and hear divergent answers to these questions, which illuminates the near impossibility of establishing a global policy for such a widespread and emerging technology. Your parody can easily be my disinformation. My heartfelt repost of an inflammatory claim can also be the successful implementation of your foreign influence campaign. We won’t be coming to collective agreements any time soon about where to draw the line.
Now, let’s consider that newer AI companies in particular are racing to dominate markets, build revenue streams, and clarify product fit. When the top Google search results for “Midjourney and political campaigns” return an array of advertisements for services that will help you use Midjourney to conduct political campaigns, it’s a pretty clear indication that your product fit and your stated policy intent may be at odds. We’re in what I’ve called a post-market, pre-norms environment for AI; that’s a fancy way of saying that an explosion of high-power, low-cost tools is rolling out across the globe with little to no collective understanding of how they’ll be used, and (per the last point) even less agreement as to how they should be used. Take that lack of norms, add a stack of different app developers and licenses and use cases across the globe, and you end up in a policy monitoring/enforcement/liability maze that would confound M.C. Escher.
Finally, let’s think about how broad the resulting user base is for AI companies. In response to Reuters’ inquiries about use in Indonesia, OpenAI said it was investigating the tools that Reuters had identified, but that an initial review found “no evidence” of its tools being used in the election. It’s not just possible but probable that OpenAI’s visibility into how its models are being used around the globe does not match how those tools are actually being used, and that its mechanisms for clarifying such use will require constant iteration as use cases and scale evolve. For example, an Indonesian political consultant who developed one of the apps used by Prabowo’s campaign told Reuters he had built the app on OpenAI’s GPT-4 and GPT-3.5 models—and then sold the app’s services to 700 legislative candidates. Although his app “pulls together demographic data and crawls social media and news websites, allowing it to generate speeches, slogans, and social media content tailored to a constituency,” the consultant claimed that his app does not support “the creation of political campaigns” but instead “support[s] the decision-making process of candidates.” If you were a decision maker at OpenAI, would you deem his app to be built for “political campaigning and lobbying”? How would you respond once you heard the same developer say—as he did—that he viewed his app deployment in Indonesia as a beta test for use in India’s elections later this year?
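The line-drawing problem is easier to see in code. The sketch below is purely hypothetical: it is not the consultant’s app, and the function name, prompt, and data fields are invented for illustration. It assumes only OpenAI’s public Python SDK, and it shows how the same few API calls could plausibly be described as either “decision support” or “campaign content generation.”

```python
# Hypothetical sketch only: not the consultant's app. Field names and
# prompts are invented for illustration; assumes OpenAI's Python SDK (v1+).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def draft_constituency_brief(profile: dict, headlines: list[str]) -> str:
    """Ask a GPT model for talking points tailored to one constituency."""
    prompt = (
        f"Constituency profile: {profile}\n"
        f"Recent local headlines: {headlines}\n"
        "Draft three short talking points a legislative candidate "
        "could use with these voters."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # the consultant reportedly used GPT-4 and GPT-3.5
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Nothing in those calls declares an intent, and the model provider sees only the prompts, not the candidate’s purpose. Whether this is banned “political campaigning” or permitted “decision support” is a judgment about use, not about code, which is precisely why enforcing a usage-based policy at the API layer is so hard.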
Voluntary principles, policies, and self-enforcement play a critical role in helping shape understanding and consensus in a space where innovation has historically outpaced governance. In a year when many such commitments will be made, it’s also critical to remember the limitations inherent in those commitments, and the broader work necessary to make them meaningful. This includes independent, in-depth reporting such as Reuters’ in Indonesia, which can help clarify how AI use is evolving across different electoral environments. It also includes supporting local journalists, content moderators, civil society experts, and electoral authorities, who can contextualize the role and impact of new tools within their own communities and electoral contexts. Finally, it requires support for independent research and accountability measures that clarify whether companies are living up to their stated commitments or, alternatively, investing in learning and adapting as quickly as they can. For AI, every election this year is a beta test—not only for how AI tools will be used, but also for how we’ll collectively respond. Let’s continue to commend legitimate efforts to engage responsibly—but let’s also challenge each other to invest in learning quickly and responding equitably.
This publication is part of the Diamonstein-Spielvogel Project on the Future of Democracy.