A bipartisan pair of lawmakers is taking another stab at directing the National Institute of Standards and Technology (NIST) to collaborate with Federal and industry partners to develop guidelines for how third-party evaluators verify the testing and development of artificial intelligence systems. 

Sens. John Hickenlooper, D-Colo., and Shelley Moore Capito, R-W.Va., reintroduced the Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act on August 1.

The bill would require NIST to lead the development of “detailed specifications, guidelines, and recommendations for third-party evaluators to work with AI companies to provide robust independent external assurance and verification” of AI systems.   

The legislation passed out of the Senate Commerce, Science, and Transportation Committee last year before the end of the 118th Congress.  

Guidelines developed under the bill would give evaluators a common framework for verifying how AI systems are developed and tested.

The guidelines also would address data privacy, mitigation of potential harms, dataset quality, and governance throughout the AI development lifecycle. 

“The horse is already out of the barn when it comes to AI. The U.S. should lead in setting sensible guardrails for AI to ensure these innovations are developed responsibly to benefit all Americans as they harness this rapidly growing technology,” said Sen. Hickenlooper, who serves as the ranking member of the Senate Commerce Consumer Protection, Technology, and Data Privacy Subcommittee. 

Sen. Capito added that “the VET AI Act is a commonsense bill that will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them.” 

In addition to developing guidelines, the bill would require NIST to study the AI assurance ecosystem, including current capabilities, needed resources, and market demand. It would also establish an advisory committee to recommend certification criteria for AI assurance providers conducting internal or external assurance of AI systems.

According to a release from Sen. Hickenlooper’s office, the bill responds to claims AI companies make about how they train and red-team their systems without independent verification, and it would also fill a gap in AI guardrails by helping to create “evidence-based benchmarks.”

Weslan Hansen
Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.