A group of big-tech industry leaders warned today that artificial intelligence (AI) may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear war.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” a one-sentence statement released by the Center for AI Safety (CAIS) on May 30 reads.

The statement has been signed by more than 350 executives, researchers, and engineers working in AI – including Sam Altman, CEO of OpenAI, the leading AI company behind ChatGPT; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and executives from Microsoft.

“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” the non-profit said. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks.”

CAIS said the statement signed by hundreds today aims to overcome this obstacle, open up discussion, and create common knowledge of the growing number of experts and public figures who take “some of advanced AI’s most severe risks” seriously.

The non-profit, whose mission is to reduce societal-scale risks from AI, lists eight examples of “catastrophic or existential risks” – including weaponization, misinformation, and power-seeking behavior.

The statement comes at a time of growing concern about the potential harms of AI.

In March, tens of thousands of technologists and researchers signed a different open letter calling for a six-month pause on the development of the largest AI models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”

That letter, which was organized by another AI-focused nonprofit – the Future of Life Institute – was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading AI labs.

This month, Altman, Hassabis, and Amodei met with President Joe Biden and Vice President Kamala Harris to talk about AI regulation and committed to continue engaging with the administration to ensure the country benefits from AI innovation.

“In order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security,” the White House said. “These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values.”

The Biden administration also launched new AI initiatives this month that aim to promote responsible innovation of the technology while protecting Americans’ rights and safety – including creating policies for AI use in the Federal government and taking the first steps toward a National AI Strategy.

The 22-word statement released and signed by hundreds today offered no specifics on how AI could cause human extinction. However, Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly warned last month that the United States needs to quickly determine the regulatory landscape for the development of AI technologies – which she said have the potential to become the most consequential, and perhaps dangerous, technologies of the 21st century.

“I think this is the biggest issue that we’re going to deal with this century,” she said. “The most powerful weapons of the last century were nuclear weapons,” Easterly continued. “They were controlled by governments and there was no incentive to use them. It was a disincentive to use them.”

“These are the most powerful technology capabilities and maybe weapons in this century,” she said. “And we do not have the legal regimes … or the regulatory regimes to be able to implement them safely and effectively.”

The statement came on the same day as the fourth meeting of the EU-U.S. Trade and Technology Council, which convened U.S. Secretary of State Antony Blinken and other senior global officials in Luleå, Sweden, to discuss the biggest common challenges in technology – like AI.

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.