Tech, Government Officials Brace for the Impact of AI on the 2024 Election


If fake news has been a factor in the last two presidential elections, just wait until we get into the thick of things in 2024.

Since the last showdown for the White House, artificial intelligence has burst onto the scene, and it’s already introducing a dose of chaos. So as November draws closer, authorities and AI companies are collaborating to mitigate the problems the technology can create.

Earlier this month, the top AI companies signed an “accord,” agreeing to develop technology that will identify and control AI-generated images, videos and audio that attempt to trick or coerce voters. OpenAI, Google, Microsoft, Meta, X (formerly Twitter) and more were among the signatories.

The agreement calls for them to label suspected AI content and educate the public about the dangers of these systems. The companies, however, did not agree to an outright ban on deceptive AI-generated political content, which leaves the door open to additional risks.

It was just last month that New Hampshire election officials were made aware of a robocall that used AI to mimic President Joe Biden, telling voters not to go to the polls during the primary election and to “save your vote for the November election.”

State officials quickly labeled the calls as election interference.

Generative AI deepfakes are also appearing in campaign ads. The Republican National Committee, for instance, released an AI-generated ad last April that showed a vision of the future of the U.S. if Biden were reelected. (The ad did disclose, in small print, that it was made with AI.) And as the technology gets better and more convincing, it could be used to sway people’s votes: candidates could appear to back policies they do not, or footage of events that never occurred could look very real.

Several states, such as Michigan, have passed laws requiring campaigns to clearly disclose which ads were created using AI. Michigan also prohibits the use of deepfakes within 90 days of an election unless the content is clearly labeled as manipulated.

California, Minnesota, Texas and Washington have also passed laws to regulate deepfakes in political ads.

Even the Federal Communications Commission (FCC) is trying to get ahead of problems. Earlier this month, it announced a ruling that makes sending unsolicited robocalls using AI-generated voices unlawful. The ruling classifies AI-generated voices as “artificial” under the Telephone Consumer Protection Act (TCPA), which makes them illegal under the existing law and gives state attorneys general the ability to pursue legal action against the companies behind them.

Last year saw roughly 55 billion robocalls in the U.S., a bit below the 2019 peak of 58.5 billion, according to estimates from YouMail, a robocall-blocking service.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” said FCC Chairwoman Jessica Rosenworcel. “We’re putting the fraudsters behind these robocalls on notice.”

The biggest risk of misinformation spreading, of course, comes through social media. To guard against this, Meta in November began requiring advertisers to disclose when they use AI to alter images and videos in political ads. Google has similar policies in effect as well.



