Photo: Sam Altman, CEO of OpenAI, attends the 54th annual meeting of the World Economic Forum in Davos, Switzerland, on Jan. 18, 2024. (Denis Balibouse | Reuters)
A group of 20 leading tech companies on Friday announced a joint commitment to combat AI misinformation in 2024 elections.
The industry is specifically targeting deepfakes, which use deceptive audio, video and images to mimic key stakeholders in democratic elections or to provide false voting information.
Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm all signed the accord. Artificial intelligence startups OpenAI, Anthropic and Stability AI also joined the group, alongside social media companies such as Snap, TikTok and X.
Tech platforms are preparing for a huge year of elections around the world that affect upward of four billion people in more than 40 countries. The rise of AI-generated content has led to serious election-related misinformation concerns, with the number of deepfakes created rising 900% year over year, according to data from Clarity, a machine learning firm.
Meanwhile, the detection and watermarking technologies used for identifying deepfakes haven’t advanced quickly enough to keep up.
News of the accord comes a day after ChatGPT creator OpenAI announced Sora, its new model for AI-generated video. Sora works similarly to OpenAI’s image-generation AI tool, DALL-E. A user types out a desired scene and Sora will return a high-definition video clip. Sora can also generate video clips inspired by still images, and extend existing videos or fill in missing frames.
Participating companies in the accord agreed to eight high-level commitments, including assessing model risks, “seeking to detect” and address the distribution of such content on their platforms and providing transparency on those processes to the public. As with most voluntary commitments in the tech industry and beyond, the release specified that the commitments apply only “where they are relevant for services each company provides.”
“Democracy rests on safe and secure elections,” Kent Walker, Google’s president of global affairs, said in a release. The accord reflects the industry’s effort to take on “AI-generated election misinformation that erodes trust,” he said.
Christina Montgomery, IBM’s chief privacy and trust officer, said in the release that in this key election year, “concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content.”