A group of 30 House Democrats is pressing private sector developers and users of artificial intelligence (AI) technologies about the use of AI to create "deepfake" content, and asking what those firms plan to do to identify such content and push back against the risks it poses.

The group of House members, led by Rep. Derek Kilmer, D-Wash., stated their concerns in a Nov. 8 letter to top executives of 12 private companies, including OpenAI, Anthropic, Google, TikTok, Amazon, and Microsoft.

“We write to express our concerns about the rise of synthetic media that is designed to manipulate or deceive online users, as well as to encourage collaborative efforts to develop solutions that would address its associated risks,” the House Democrats said.

“Following developments in generative artificial intelligence applications that have dramatically increased the ability of users to create and share synthetic media, significant concerns have been raised about the ability of bad actors to use these services for deceptive means, including sharing deceptive content on widely used platforms,” the lawmakers stated.

“The urgency of these concerns is amplified by the risks of misinformation and disinformation, particularly during an age when Americans increasingly receive their news primarily through online sources using social media platforms to reach users and audiences,” they said.

The letter draws attention to the coming 2024 presidential election, where false content, combined with a politically polarized society, could create the "perfect storm for the proliferation of disinformation and misinformation by bad actors, which could further undermine faith in U.S. democratic institutions," the members of Congress said.

The lawmakers asked for a response from the companies by Dec. 8 “about your efforts to identify, monitor, and disclose this content; the extent to which you have identified findings or trends regarding deceptive synthetic media; and how you have acted, or intend to act, to combat its associated risks.”

The letter came the same day that expert witnesses described the use of deepfake technology to target vulnerable people during a Nov. 8 House Oversight and Accountability Committee hearing.

“Existing harms are exacerbated by deepfake technologies. Women already face widespread threats from non-consensual sexual images or release of intimate partner images that do not require high-quality or complex production to be harmful,” stated Sam Gregory, executive director at WITNESS.

“Non-consensual sexual deepfake images and videos are currently used to target private citizens and public figures, particularly women,” Gregory said at the hearing.
