Rep. Ted Lieu, D-Calif. – alongside Reps. Zach Nunn, R-Iowa, Don Beyer, D-Va., and Marcus Molinaro, R-N.Y. – this week introduced the Federal AI Risk Management Act, a bipartisan and bicameral bill to require U.S. Federal agencies and vendors to follow the AI risk management guidelines put forth by the National Institute of Standards and Technology (NIST).

Sens. Jerry Moran, R-Kan., and Mark Warner, D-Va., introduced companion legislation in the Senate late last year.

As directed by Congress, NIST released its AI Risk Management Framework (RMF) in January 2023. The AI RMF is a voluntary set of standards that organizations can employ to ensure they use AI systems in a trustworthy manner.

Federal agencies are not currently required to use this framework to manage their use of AI systems, but the new bill aims to change that.

The Federal AI Risk Management Act would require Federal agencies and vendors to incorporate the NIST framework into their AI management efforts to help limit the risks associated with AI technology.

“As AI continues to develop rapidly, we need a coordinated government response to ensure the technology is used responsibly and that individuals are protected,” Rep. Lieu said in a Jan. 10 statement. “The AI Risk Management Framework developed by NIST is a great starting point for agencies and vendors to analyze the risks associated with AI and to mitigate those risks. These guidelines have already been used by a number of public and private sector organizations, and there is no reason why they shouldn’t be applied to the federal government as well.”

The bipartisan bill directs the Office of Management and Budget to establish an initiative to provide AI expertise to agencies. It also directs the Administrator of Federal Procurement Policy and the Federal Acquisition Regulatory Council to ensure that agencies procure AI systems that incorporate NIST's AI framework. Lastly, the bill requires NIST to develop standards for testing and validating AI in Federal acquisitions.

“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” Sen. Warner said when the Senate bill was unveiled on Nov. 2.

“But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly,” he said. “It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”

The bill drew support from prominent players in the private sector and academia, including leaders at Microsoft and Workday.

“Implementing a widely recognized risk management framework by the U.S. Government can harness the power of AI and advance this technology safely,” said Fred Humphries, corporate VP of U.S. government affairs at Microsoft. “We look forward to working with Representatives Lieu, Nunn, Beyer, and Molinaro as they advance this framework.”

Sen. Moran previously tried, without success, to add similar language to the Senate's year-end defense bill over the summer.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.