The Biden-Harris administration this week announced the creation of the AI Safety Institute Consortium (AISIC), which will unite more than 200 AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy AI.

The AISIC – which was established in November following President Biden’s AI executive order (EO) – will be housed within the Commerce Department’s National Institute of Standards and Technology (NIST).

The AISIC announced its executive leadership on Feb. 7 and aims to focus on priority actions from the AI EO, including developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.

“President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do,” said Commerce Secretary Gina Raimondo in a statement.

“Through President Biden’s landmark Executive Order, we will ensure America is at the front of the pack – and by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly,” she said.

The department noted that the consortium represents the largest collection of test and evaluation teams established to date and will focus on establishing the foundations for a new measurement science in AI safety. The AISIC also includes state and local governments, as well as non-profits, and will work with organizations from like-minded nations that have a key role to play in developing interoperable and effective tools for safety around the world.

The AISIC’s inaugural cohort has more than 200 members, including big tech company names like Microsoft, Google, and OpenAI.

“Understanding that adopting AI in a safe and secure manner is a challenge for public sector agencies due to evolving guidance, standards for risk, and a shortage of resources, it’s of the utmost importance to offer proven solutions to the federal government to accelerate use of AI capabilities in a safe and secure manner,” stackArmor CEO Gaurav Pal said in a Feb. 8 statement. “stackArmor is honored to participate in NIST’s AI Safety Institute Consortium to help move its mission forward to better serve the federal government and the public.”

“ITI is pleased to join the U.S. AI Safety Institute’s Consortium. We are supportive of this public-private partnership to drive the research, development, and innovation necessary to advance the standards that support safe and trustworthy AI,” said ITI President and CEO Jason Oxman. “The Consortium’s work will be critical to furthering AI innovation in the United States and globally. We look forward to collaborating with NIST on this important effort.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.