Sens. John Thune, R-S.D., and Amy Klobuchar, D-Minn., are spearheading new artificial intelligence legislation that would aim to protect both consumers and entrepreneurs, and task the Commerce Department with new regulatory work. 

The legislation relies on self-certification with risk-based guardrails rather than a licensing regime, with the senators arguing that this approach would create fewer bottlenecks within the federal government and enable more innovation.

The AI Research, Innovation, and Accountability Act of 2023 was officially introduced on Nov. 15 – though it has been in the works since at least this summer. The bill is co-sponsored by Sen. Thune’s fellow Commerce, Science, and Transportation committee members, including Sens. Roger Wicker, R-Miss., John Hickenlooper, D-Colo., Shelley Moore Capito, R-W.Va., and Ben Ray Luján, D-N.M. 

“AI is a revolutionary technology that has the potential to improve health care, agriculture, logistics and supply chains, and countless other industries,” Sen. Thune said in a statement.  

“As this technology continues to evolve, we should identify some basic rules of the road that protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention,” he said. “This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications.” 

According to the 58-page bill, the legislation aims to establish clear identification requirements for AI-generated content and for AI systems more broadly, including those deemed “high impact” and “critical impact.”

The bill calls on the Commerce Department’s National Institute of Standards and Technology (NIST) component to develop recommendations for agencies regarding “high-impact” AI systems. The Office of Management and Budget would then be charged with implementing those recommendations.   

Consistent with the structure of NIST’s AI Risk Management Framework, the bill would require companies deploying critical-impact AI to perform detailed risk assessments. These reports would provide a comprehensive, detailed outline of how the organizations manage, mitigate, and understand risk. Deployers of “high-impact” AI systems would be required to submit transparency reports to the Commerce Department. 

Regarding critical-impact AI systems, the bill calls on tech providers to self-certify compliance with the Commerce Department’s standards. The department would have to follow an outlined five-step certification process, including establishing an advisory committee and submitting to Congress a five-year plan for testing and certifying critical-impact AI.

“Artificial intelligence comes with the potential for great benefits, but also serious risks, and our laws need to keep up,” said Sen. Klobuchar. “This bipartisan legislation is one important step of many needed to address potential harms. It will put in place common sense safeguards for the highest-risk applications of AI – like in our critical infrastructure – and improve transparency for policymakers and consumers.”

The bill also would require large internet platforms to provide notice to users when the platform is using generative AI to create content the user sees. The Department of Commerce would have the authority to enforce this requirement.  

Finally, the bill would require the Commerce Department to establish a working group to provide recommendations for the development of voluntary, industry-led consumer education efforts for AI systems. 

“We’re entering a new era of Artificial Intelligence,” said Sen. Hickenlooper. “Development and innovation will depend on the guardrails we put in place. This is a commonsense framework that protects Americans without stifling our competitive edge in AI.” 

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.