December 24, 2024

Tech firms commit to preventing AI-driven election chaos


Google, Meta, Microsoft, OpenAI, and TikTok detail strategies for detecting and labeling deceptive AI content

Major technology companies agreed on Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Leaders from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, and TikTok came together at the Munich Security Conference to introduce a new framework for addressing AI-generated deepfakes designed to deceive voters. Twelve other companies, including Elon Musk’s X, are also signing on to the agreement.

Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, stated in an interview before the summit, “Everyone acknowledges that no single tech company, government, or civil society organization can address the emergence of this technology and its potential malicious use alone.”

The agreement is largely symbolic, focusing on increasingly realistic AI-generated images, audio, and video that deceptively manipulate the appearance, voice, or actions of political candidates, election officials, and other key figures in a democratic election. It also targets false information provided to voters about the timing, location, and procedures for lawful voting.

The companies are not committing to banning or removing deepfakes. Instead, the agreement outlines the methods they will use to detect and label deceptive AI content when it is created or distributed on their platforms. It states that the companies will share best practices and provide “prompt and proportionate responses” when such content begins to spread.

The general nature of the commitments and the absence of any mandatory requirements likely persuaded a wide range of companies to join, but they left advocates wanting more definitive assurances.

“The wording isn’t as robust as some might have hoped,” stated Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “We should recognize that the companies have a vested interest in ensuring their tools are not misused to undermine democratic elections. However, it’s voluntary, and we’ll monitor their actions to see if they fulfill their promises.”

Clegg said it was appropriate that each company has its own set of content policies.

“This is not an attempt to impose uniformity,” he said. “Moreover, no one in the industry believes that a new technological paradigm can be addressed by sweeping issues under the rug, or by trying to chase down everything that could potentially mislead someone.”

Several political leaders from Europe and the US also participated in Friday’s announcement. Věra Jourová, the European Commission vice-president, said that while such an agreement cannot cover everything, “it includes very significant and positive aspects.” She also called on fellow politicians to refrain from using AI tools deceptively and cautioned that AI-driven disinformation could lead to “the demise of democracy, not only in EU member states.”

The agreement was reached at the annual security meeting in the German city as more than 50 countries prepare to hold national elections in 2024. Bangladesh, Taiwan, Pakistan, and most recently Indonesia have already held theirs.

Incidents of AI-generated election interference have already been reported. For instance, AI robocalls imitating the voice of US President Joe Biden attempted to dissuade people from voting in New Hampshire’s primary election last month.

Just before Slovakia’s elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and manipulate the election. Fact-checkers scrambled to debunk the recordings as they spread rapidly across social media.

Politicians have also dabbled in the technology, using AI chatbots to engage with voters and incorporating AI-generated images into advertisements.

The agreement urges platforms to “consider context, especially regarding the protection of educational, documentary, artistic, satirical, and political expressions.”

It states that companies will emphasize transparency to users regarding their policies and will work to educate the public on how to avoid being deceived by AI-generated content.

Most of the companies have previously said they are adding safeguards to their own generative AI tools that can manipulate images and sound. They are also working to identify and label AI-generated content so that social media users can distinguish what is real from what is manipulated. Most of those proposed measures have yet to roll out, however, and the companies have faced pressure to do more.
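Labeling of this kind generally relies on provenance signals embedded when content is created, such as the C2PA “Content Credentials” standard backed by Adobe and others. As a rough illustration only (the accord does not prescribe any implementation, and the metadata key names below are hypothetical stand-ins), a platform-side check in Python might look for such a marker before deciding whether to attach a label:

```python
# Minimal sketch of provenance-based labeling. Assumes an AI generator
# embedded a marker in the image's metadata; the key names below are
# hypothetical examples, not any company's actual schema.
from PIL import Image

# Hypothetical metadata keys an AI generator might write at creation time.
AI_MARKER_KEYS = {"ai_generated", "digital_source_type", "c2pa_manifest"}

def needs_ai_label(path: str) -> bool:
    """Return True if the image carries any recognized AI-provenance marker."""
    with Image.open(path) as img:
        # PNG text chunks and similar metadata are exposed via img.info.
        metadata_keys = {str(k).lower() for k in img.info}
    return bool(metadata_keys & AI_MARKER_KEYS)

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        verdict = "label as AI-generated" if needs_ai_label(image_path) else "no marker found"
        print(f"{image_path}: {verdict}")
```

In practice, bare metadata like this is easy to strip or forge, which is why standards such as C2PA rely on cryptographically signed manifests, and why the accord speaks of detecting deceptive content rather than merely labeling it.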

This pressure is particularly strong in the US, where Congress has not yet passed laws regulating AI in politics, leaving companies to largely self-regulate.

Although the Federal Communications Commission recently clarified that AI-generated voices in robocalls are illegal, that ruling does not cover audio deepfakes circulated on social media or in campaign advertisements.

Many social media companies already have policies in place to discourage deceptive posts about electoral processes, whether AI-generated or not. Meta, for example, states that it removes misinformation related to “voting dates, locations, times, methods, voter registration, or census participation,” as well as other false posts intended to disrupt civic participation.

Jeff Allen, co-founder of the Integrity Institute and a former Facebook data scientist, views the accord as a “positive step.” However, he believes social media companies should take additional actions to combat misinformation, such as developing content recommendation systems that do not prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued that the accord falls short and that AI companies should withhold technology, such as hyper-realistic text-to-video generators, “until there are substantial and adequate safeguards in place to help us prevent many potential problems.”

In addition to the companies involved in brokering the agreement, other signatories include chatbot developers Anthropic and Inflection AI, voice-clone startup ElevenLabs, chip designer Arm Holdings, security companies McAfee and Trend Micro, and Stability AI, maker of the image generator Stable Diffusion.

Notably absent is Midjourney, another popular AI image generator. The San Francisco-based startup did not immediately respond to a request for comment on Friday.

The inclusion of X, which was not mentioned in an earlier announcement about the pending accord, was one of Friday’s surprises. Musk sharply cut content moderation teams after acquiring the former Twitter and has described himself as a “free-speech absolutist.”

In a statement on Friday, X CEO Linda Yaccarino stated, “Every citizen and company has a responsibility to safeguard free and fair elections.” She added, “X is committed to doing its part, collaborating with peers to counter AI threats while also protecting free speech and maximizing transparency.”
