
Sen. Mark Warner, D-Va., sent letters to major social media and artificial intelligence (AI) companies urging them to act against manipulated media, including deepfakes, ahead of the 2026 midterm elections.
Warner sent letters on March 16 to OpenAI, Anthropic, xAI, Meta, Adobe, ElevenLabs, Cohere, Microsoft, Midjourney, Canva, Snap, Google, Synthesia, TikTok US, Bluesky, Pinterest, and Reddit.
In his letters, Warner pointed to media manipulation techniques used by Russia-based actors during the 2024 U.S. elections. While he noted those efforts didn’t make a noticeable difference in election outcomes, generative AI capabilities “have grown tremendously in the intervening years,” raising concerns about both foreign and domestic misuse.
Ahead of the 2024 elections, federal intelligence officials warned that advances in AI could enable more realistic and scalable deepfake campaigns targeting candidates. In one case, a fake robocall using former President Joe Biden’s voice discouraged voting in New Hampshire’s primary.
Warner called on AI companies to adopt additional safeguards against misuse ahead of the 2026 cycle, particularly impersonation and misinformation. His recommendations include embedding content credentials, metadata, and visible watermarks in AI-generated media; requiring downstream partners to preserve them; sharing detection tools with trusted groups; and creating rapid-response verification channels. He also emphasized the need for coordinated action across sectors.
“Particularly against the backdrop of an abrupt pullback in federal resources, an effective multi-stakeholder approach is needed to ensure that industry, state and local governments, and civil society adequately anticipate – and counteract – media manipulation techniques that cause harm to vulnerable communities, public trust, and democratic institutions,” Warner wrote.
Warner suggested offering clear reporting paths for victims while proactively tracking impersonation campaigns.
He also pressed social media platforms and content distributors to enforce stricter standards for manipulated media. That includes setting clear rules, screening uploads for authenticity signals, deploying detection systems for unlabeled synthetic content, and working with journalists, civil society, and election officials to improve verification and public awareness.
“Policymakers have on a bipartisan basis begun the process of developing measures to ensure that generative AI technologies … serve the public interest,” Warner wrote. “But the private sector can – particularly in collaboration with civil society and state and local election officials – dramatically shape the usage and wider impact of these technologies through proactive measures in coming months.”
Since the 2024 elections, no comprehensive U.S. law targeting AI-generated political deepfakes has been enacted.