A group of 20 leading technology companies signed a pact at the Munich Security Conference Friday to help combat the use of harmful AI-generated content, such as deepfakes, meant to deceive voters in the 2024 elections.

Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X all signed the joint commitment on Feb. 16, titled the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections.”

The companies are specifically targeting deepfakes, which use a form of AI called deep learning to create audio, images, and videos of fake events. Bad actors can create convincing deepfakes of candidates to try to disrupt elections, such as the recent robocalls imitating President Biden’s voice that voters received ahead of the New Hampshire primary.

“With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content,” said Nick Clegg, president of global affairs at Meta. “This work is bigger than any one company and will require a huge effort across industry, government, and civil society. Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge.”

Through the accord, the companies pledge to work collaboratively to detect and address deepfake content, drive educational campaigns, and provide increased transparency.

Specifically, the companies agreed to eight commitments:

  • Developing and implementing technology to mitigate risks related to deceptive AI election content, including open-source tools where appropriate;
  • Assessing models to understand the risks they may present regarding deceptive AI election content;
  • Seeking to detect the distribution of this content on their platforms;
  • Seeking to appropriately address this content detected on their platforms;
  • Fostering cross-industry resilience to deceptive AI election content;
  • Providing transparency to the public regarding how each company addresses this content;
  • Continuing to engage with a diverse set of global civil society organizations and academics; and
  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience.

Brad Smith, Microsoft’s vice chair and president, explained in a blog post that the accord is a “vital step” to help protect elections, bringing together the companies that create AI services and those that run hosted consumer services where deepfakes can spread.

However, he said that the accord is just “one of the many vital steps we’ll need to take to protect elections.”

“In part, this is because the challenge is formidable,” Smith said. “The initiative requires new steps from a wide array of companies. Bad actors likely will innovate themselves, and the underlying technology is continuing to change quickly.”

“We need to be hugely ambitious but also realistic. We’ll need to continue to learn, innovate, and adapt,” he continued, adding, “As a company and an industry, Microsoft and the tech sector will need to build upon today’s step and continue to invest in getting better.”

Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.