A bipartisan group of senators introduced new legislation on Tuesday that would ban the use of artificial intelligence to generate deceptive content intended to influence Federal elections.

Sen. Amy Klobuchar, D-Minn., announced the new legislation – called the Protect Elections from Deceptive AI Act – during a Sept. 12 hearing held by the Senate Judiciary Committee’s Privacy, Technology, and the Law Subcommittee.

Sen. Klobuchar said the legislation was developed alongside subcommittee Ranking Member Sen. Josh Hawley, R-Mo., and Sens. Chris Coons, D-Del., and Susan Collins, R-Maine.

“Hot off the presses, Sen. Hawley and I have introduced our bill today with Sen. Collins … and Sen. Coons to ban the use of deceptive AI-generated content in elections,” Sen. Klobuchar said during the hearing.

The bill aims to identify and ban “deep fakes” – which use a form of AI called deep learning to create images or videos of fake events – depicting Federal candidates in political ads.

Sen. Klobuchar said the bill would work hand in hand with a watermarking tool that can indicate whether images have been generated with AI.

“So, this would work in concert with some watermark system, but when you get into the deception where it is fraudulent AI-generated content pretending to be the elected official or the candidate when it is not,” Sen. Klobuchar explained. “We’ve seen this used against people on both sides of the aisle, which is why it was so important that we be bipartisan in this work. And I want to thank [Sen. Hawley] for his leadership.”

Woodrow Hartzog, a professor at Boston University School of Law, agreed that the Federal government should look to implement a ban on these deceptive ads.

“I do think that bright line rules and prohibitions around such deceptive ads are critical, because we know that procedural walkthroughs … often get the veneer of protection without actually protecting us,” Hartzog told the senators. “So, to outright prohibit these practices, I think is really important.”

According to a press release issued after the hearing, the bill would amend the Federal Election Campaign Act of 1971 (FECA) to prohibit the distribution of “materially deceptive AI-generated audio, images, or video relating to Federal candidates in political ads or certain issue ads to influence a Federal election or fundraise.”

It would also allow Federal candidates targeted by AI-generated deceptive content to have the content taken down and would enable them to seek damages in Federal court.

However, consistent with the First Amendment, the bill has exceptions for parody, satire, and the use of AI-generated content in news broadcasts.

“Right now, we’re seeing AI used as a tool to influence our democracy. We need rules of the road in place to stop the use of fraudulent AI-generated content in campaign ads. Voters deserve nothing less than full transparency,” Sen. Klobuchar said in the press release.

Subcommittee Chairman Richard Blumenthal, D-Conn., thanked his colleagues for “taking the first step toward addressing the harms that may result from deep fakes,” but noted that outright bans raise First Amendment questions, which is why disclosure is often the preferred remedy.

For his part, Brad Smith, vice chair and president of Microsoft, pointed out that “2024 is a critical year for elections” and that the issue of deep fakes must be addressed quickly.

“I think we have two broad alternatives. One is we take it down. And the other is we relabel it. If we do the first, then we’re acting as censors, and I do think that makes me nervous. But relabeling to ensure accuracy, I think that is probably a reasonable path,” Smith said. “But really, what this highlights is the discussion still to be had, and I think the urgency for that conversation to take place.”

As for identifying deep fakes, William Dally, chief scientist and senior vice president of research at NVIDIA, offered a solution. He explained to senators that one of “the best measures against deep fakes” is the use of provenance to authenticate an image or voice at its source.

Under this approach, the device that captures content – whether a camera recording video or an audio recorder capturing speech – would digitally sign it as genuine at the point of capture. Anyone could then verify which device the content originated from and confirm that it is not a product of AI.

“That’s sort of the flip side of watermarks, which would require that anything that is synthetically generated be identified as such, and those two technologies in combination can really help people sort out – along with a certain amount of public education – to make sure people understand what the technology is capable of and are on guard for that,” Dally said. “It can help them sort out what is real from what is fake.”
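Dally’s description amounts to a standard digital-signature workflow: the capture device signs the bytes it records with a private key, and anyone holding the device’s public key can later confirm the content came from that device unmodified. Below is a minimal sketch of that idea in Python, assuming a hypothetical device key generated on the fly and using Ed25519 signatures from the open-source cryptography package; it illustrates the general technique, not any vendor’s actual provenance system.

# Minimal sketch of device-side content provenance, assuming a hypothetical
# camera that holds an Ed25519 private key and publishes its public key.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- On the capture device (e.g., a camera) ---
device_key = Ed25519PrivateKey.generate()    # in practice, provisioned at manufacture
device_pub = device_key.public_key()         # published so anyone can verify

video_bytes = b"raw recording bytes"         # stand-in for the captured content
signature = device_key.sign(video_bytes)     # attests "this device captured these bytes"

# --- On the verifier's side (e.g., a newsroom or a platform) ---
def is_authentic(pub: Ed25519PublicKey, content: bytes, sig: bytes) -> bool:
    # Returns True only if the content is byte-for-byte what the device signed.
    try:
        pub.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(device_pub, video_bytes, signature))        # True
print(is_authentic(device_pub, b"tampered bytes", signature))  # False

Any edit to the signed bytes, including an AI-generated substitution, breaks the signature – which is what lets provenance complement watermarking: one marks content as authentic at the source, while the other marks content as synthetic.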

The new bill comes days after Sens. Hawley and Blumenthal introduced a one-page legislative framework for regulating AI. The framework calls on AI system providers to watermark or otherwise provide technical disclosures of AI-generated deep fakes.
