The Commerce Department is rebranding the U.S. Artificial Intelligence Safety Institute (AISI) that it oversees as the Center for AI Standards and Innovation (CAISI), a new name that notably drops any mention of safety.

The rebranding news came on June 3 from Commerce Secretary Howard Lutnick, who indicated that the change won’t abandon safety concerns but will place greater emphasis on AI innovation.

“For far too long, censorship and regulations have been used under the guise of national security,” Lutnick said, while pledging that “innovators will no longer be limited by these standards.”

“CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards,” the secretary said.

The agency explained that the rebrand will “ensure Commerce uses its vast scientific and industrial expertise to evaluate and understand the capabilities of these rapidly developing systems and identify vulnerabilities and threats within systems developed in the U.S. and abroad.”

AISI was created in 2023 by President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, with a mission to lead U.S. efforts on AI safety and trust, including evaluating advanced AI models.

The Commerce Department said that CAISI will continue to operate within its National Institute of Standards and Technology (NIST) component, and will “serve as industry’s primary point of contact within the U.S. Government to facilitate testing and collaborative research related to harnessing and securing the potential of commercial AI systems.”

The Commerce Department said CAISI’s to-do list includes:

  • Developing “guidelines and best practices to measure and improve the security of AI systems,” and working with “the NIST Information Technology Laboratory and other NIST organizations to assist industry to develop voluntary standards”;
  • Establishing “voluntary agreements with private sector AI developers and evaluators, and lead[ing] unclassified evaluations of AI capabilities that may pose risks to national security,” with a focus on “demonstrable risks, such as cybersecurity, biosecurity, and chemical weapons”;
  • Leading “evaluations and assessments of capabilities of U.S. and adversary AI systems, the adoption of foreign AI systems, and the state of international AI competition”;
  • Leading “evaluations and assessments of potential security vulnerabilities and malign foreign influence arising from use of adversaries’ AI systems, including the possibility of backdoors and other covert, malicious behavior”;
  • Coordinating “with other federal agencies and entities, including the Department of Defense, the Department of Energy, the Department of Homeland Security, the Office of Science and Technology Policy, and the Intelligence Community, to develop evaluation methods, as well as conduct evaluations and assessments”; and
  • Representing “U.S. interests internationally to guard against burdensome and unnecessary regulation of American technologies by foreign governments and collaborate with the NIST Information Technology Laboratory to ensure US dominance of international AI standards.”