California is racing to combat deepfakes ahead of the election

Manipulated videos and photos are a top concern ahead of the U.S. presidential election between former President Trump and Vice President Kamala Harris.
(Jakub Porzycki / NurPhoto via Getty Images)

Days after Vice President Kamala Harris launched her presidential bid, a video — created with the help of artificial intelligence — went viral.

“I ... am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” a voice that sounded like Harris’ said in the fake audio track used to alter one of her campaign ads. “I was selected because I am the ultimate diversity hire.”

Billionaire Elon Musk — who has endorsed Harris’ Republican opponent, former President Trump — shared the video on X, then clarified two days later that it was actually meant as a parody. His initial tweet had 136 million views. The follow-up calling the video a parody garnered 26 million views.

To Democrats, including California Gov. Gavin Newsom, the incident was no laughing matter, fueling calls for more regulation to combat AI-generated videos with political messages and a fresh debate over the appropriate role for government in trying to contain emerging technology.

On Friday, California lawmakers gave final approval to a bill that would prohibit the distribution of deceptive campaign ads or “election communication” within 120 days of an election. Assembly Bill 2839 targets manipulated content that would harm a candidate’s reputation or electoral prospects, or undermine confidence in an election’s outcome. It’s meant to address videos like the one Musk shared of Harris, though it includes an exception for parody and satire.

“We’re looking at California entering its first-ever election during which disinformation that’s powered by generative AI is going to pollute our information ecosystems like never before and millions of voters are not going to know what images, audio or video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “So we have to do something.”

Newsom has signaled he will sign the bill, which would take effect immediately, in time for the November election.

The legislation updates a California law that bars people from distributing, within 60 days of an election, deceptive audio or visual media intended to harm a candidate’s reputation or deceive a voter. State lawmakers say the law needs to be strengthened during an election cycle in which people are already flooding social media with digitally altered videos and photos known as deepfakes.

The use of deepfakes to spread misinformation has concerned lawmakers and regulators during previous election cycles. Those fears have intensified since the release of new AI-powered tools, such as chatbots that can rapidly generate images and videos. From fake robocalls to bogus celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers.

Under AB 2839, a candidate, election committee or elections official could seek a court order to get deepfakes pulled down. They could also sue the person who distributed or republished the deceptive material for damages.

The legislation also applies to deceptive media posted up to 60 days after the election, including content that falsely portrays a voting machine, ballot, voting site or other election-related property in a way that is likely to undermine confidence in the outcome of the election.

It doesn’t apply to satire or parody that’s labeled as such, or to broadcast stations if they inform viewers that what is depicted doesn’t accurately represent a speech or event.

Tech industry groups oppose AB 2839, along with other bills that would hold online platforms accountable for failing to moderate deceptive election content or label AI-generated content.

“It will result in the chilling and blocking of constitutionally protected free speech,” said Carl Szabo, vice president and general counsel for NetChoice. The group’s members include Google, X and Snap as well as Facebook’s parent company, Meta, and other tech giants.

Online platforms have their own rules about manipulated media and political ads, but their policies can differ.

Unlike Meta and X, TikTok doesn’t allow political ads and says it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity “when used for political or commercial endorsements.” Truth Social, a platform created by Trump, doesn’t address manipulated media in its rules about what’s not allowed.

Federal and state regulators are already cracking down on AI-generated content.

The Federal Communications Commission in May proposed a $6-million fine against Steve Kramer, a Democratic political consultant behind a robocall that used AI to impersonate President Biden’s voice. The fake call discouraged participation in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he planned the call to bring attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and misdemeanor impersonation of a candidate.

Szabo said current laws are enough to address concerns about election deepfakes. NetChoice has sued various states to stop some laws aimed at protecting children on social media, alleging they violate free speech protections under the 1st Amendment.

“Just creating a new law doesn’t do anything to stop the bad behavior, you actually need to enforce laws,” Szabo said.

More than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are working on legislation to regulate deepfakes, according to the consumer advocacy nonprofit Public Citizen.

In 2019, California enacted a law aimed at combating manipulated media after a video that made House Speaker Nancy Pelosi appear drunk went viral on social media. Enforcing that law has been a challenge.

“We did have to water it down,” said Assemblymember Marc Berman (D-Menlo Park), who authored the bill. “It attracted a lot of attention to the potential risks of this technology, but I was worried that it really, at the end of the day, didn’t do a lot.”

Rather than take legal action, said Danielle Citron, a professor at the University of Virginia School of Law, political candidates might choose to debunk a deepfake or even ignore it to limit its spread. By the time they could go through the court system, the content might already have gone viral.

“These laws are important because of the message they send. They teach us something,” she said, adding that they inform people who share deepfakes that there are costs.

This year, lawmakers worked with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, on several bills to address political deepfakes.

Some target online platforms that have been shielded under federal law from being held liable for content posted by users.

Berman introduced a bill that would require an online platform with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. The platforms would have to act no later than 72 hours after a user reports the post. Under AB 2655, which passed the Legislature Wednesday, the platforms would also need procedures for identifying, removing and labeling fake content. The bill doesn’t apply to parody or satire, or to news outlets that meet certain requirements.

Another bill, co-authored by Assemblymember Buffy Wicks (D-Oakland), would require online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, oppose the bill, ChatGPT maker OpenAI is supporting AB 3211, Reuters reported.

The two bills, though, wouldn’t take effect until after the election, underscoring the difficulty of passing new laws as technology advances rapidly.

“Part of my hope with introducing the bill is the attention that it creates, and hopefully the pressure that it puts on the social media platforms to behave right now,” Berman said.
