Technology companies agree to combat AI-generated election deception

Several prominent technology companies signed an agreement on Friday to voluntarily adopt "reasonable precautions" to prevent the use of artificial intelligence tools to disrupt democratic elections worldwide.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok gathered at the Munich Security Conference to announce a new framework for responding to deepfakes (AI-generated images, audio, and video) intentionally used to deceive voters. Twelve other companies, including Elon Musk's X, are also signing on to the agreement.

"Everyone recognizes that no tech company, no government, and no civil society organization alone can confront the advent of this technology and its potential nefarious use," said Nick Clegg, Meta's president of global affairs, in an interview before the conference.

The largely symbolic agreement focuses on increasingly realistic AI-generated images, audio, and video that "misleadingly alter or falsify the appearance, voice, or actions of political candidates, election officials, and other crucial stakeholders in a democratic election, or provide false information to voters about when, where, and how they can legally vote."

The companies do not commit to banning or deleting deepfakes. Instead, the agreement outlines the methods they will use to try to detect and label deceptive AI-generated content when created or disseminated on their platforms. The companies will share best practices and respond "promptly and proportionally" when such content begins to spread.

The vagueness of the commitments and the absence of any binding requirements likely helped attract a wide range of companies, but disappointed some advocates who had hoped for firmer assurances.

"The language is not as strong as one might have hoped," said Rachel Orey, senior deputy director of the Election Project at the Bipartisan Policy Center, a Washington, D.C.-based research center. "I think we should give credit where it's due, and acknowledge that companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we'll be watching to see if they follow through."

Clegg said each company "has, rightly, its own set of content policies."

"This is not trying to impose a straitjacket on everyone," he commented. "And in any case, no one in the industry believes that it can address a wholly new technological paradigm by turning a blind eye and trying to solve problems that arise over and over and finding everything it thinks might deceive someone."

Several political leaders from Europe and the United States also joined Friday's announcement. European Commission Vice President Vera Jourova said that while such an agreement cannot be comprehensive, "it contains impactful and very positive elements." She also urged her political counterparts to take responsibility for not using AI tools deceptively, warning that AI-fueled misinformation could bring about "the end of democracy, not only in EU member states."

The agreement, reached at the annual security conference in Munich, comes as more than 50 countries are set to hold national elections in 2024. Bangladesh, Taiwan, Pakistan, and most recently Indonesia have already done so.

Attempts to interfere in elections using AI-generated content have already occurred. For example, robocalls mimicking the voice of U.S. President Joe Biden tried to discourage people from voting in New Hampshire's primary election last month.

Just days before Slovakia's elections in September, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers rushed to label the recordings as false as they spread across social media.

Politicians have also experimented with technology, from using AI chatbots to communicate with voters to incorporating AI-generated images into advertisements.

The agreement urges platforms to "pay attention to context and, in particular, to safeguard educational, documentary, artistic, satirical, and political expression."

The companies will focus on transparency in their content policies for users and work to educate the public on how to avoid being deceived by AI-generated false images.

Most companies have already stated that they are implementing safeguards for their own generative AI tools capable of manipulating images and sounds. They are also working to identify and label AI-generated content so that social media users know whether what they are seeing is real. However, most proposed solutions have not yet been implemented, and companies are under pressure to take further action.

That pressure is particularly acute in the United States, where Congress has yet to pass laws regulating the use of AI in politics, leaving companies largely to govern themselves.

The Federal Communications Commission recently confirmed that AI-generated voice clips in robocalls are illegal, but the ruling does not cover audio deepfakes circulated on social media or in campaign ads.

Many social media companies already have policies to discourage the posting of misleading messages about elections, whether AI-generated or not. Meta stated that it removes false information about "dates, places, times, and methods for voting, registration, or participating in the census," as well as other false posts intended to interfere with someone's civic participation.

Jeff Allen, a former Facebook data scientist and co-founder of the Integrity Institute, a nonprofit that works to improve the social internet, said the agreement seems like a "positive step," but that he would like to see social media companies take further measures against misinformation, such as building content recommendation systems that do not prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, said Friday that the agreement "is not enough" and that AI companies should "hold back" technologies such as hyper-realistic text-to-video generators "until there are substantial and adequate safeguards" in place to help avoid many potential problems.

In addition to the companies that helped negotiate Friday's agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice cloning startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and Trend Micro; and Stability AI, known for creating the Stable Diffusion image generator.

The absence of another popular AI image generator, Midjourney, was notable. The San Francisco-based company did not immediately respond to a request for comment on Friday.

The inclusion of X, which was not mentioned in an earlier announcement about the pending agreement, was one of Friday's surprises. Musk drastically cut moderation teams after taking over Twitter and has described himself as a "free speech absolutist."

In a statement issued on Friday, X CEO Linda Yaccarino said that "every citizen and company has a responsibility to ensure free and fair elections."

"X is dedicated to playing its role, collaborating with its peers to fight AI threats while