
In Congress’s latest attempt to regulate artificial intelligence, House and Senate lawmakers are reintroducing legislation to promote public-private collaboration on testing financial sector AI systems.
“While exchanges are integrating AI into market surveillance systems to detect manipulation and protect market integrity. In insurance, AI is reshaping how companies assess risk and deliver coverage,” said Mike Rounds, R-S.D., chair of the Senate Banking Committee’s Securities, Insurance, and Investment Subcommittee, during a July 30 hearing.
“When it comes to underwriting businesses, insurers are using machine learning to analyze structured data such as claims histories, industry benchmarks and even real-time operational data to better understand the company’s risk profile,” Rounds said.
That’s why Sen. Rounds is reintroducing the Unleashing AI Innovation in Financial Services Act with Sens. Martin Heinrich, D-N.M., Thom Tillis, R-N.C., and Andy Kim, D-N.J.
The legislation was first introduced in Congress last August by House Financial Services Committee Chairman French Hill (R-Ark.) along with Reps. Ritchie Torres (D-N.Y.), Bryan Steil (R-Wis.), and Josh Gottheimer (D-N.J.), but it failed to garner support.
Today, Sen. Rounds announced that the bill also is being reintroduced in the House by Rep. Hill.
This time around, the legislation aligns with the Trump administration’s AI Action Plan, released last week, which calls for regulatory AI Centers of Excellence to test AI tools.
Sen. Rounds’s office pointed to specific mentions of the Securities and Exchange Commission (SEC) in the plan, which listed the commission as a potential participant in those AI testing centers, and noted that his bill would establish the labs within the SEC.
“This bipartisan bill would create a venue for financial institutions and regulators to work together to test AI projects for deployment,” said Rounds. “By creating a safe space for experimentation, we can help firms innovate and regulators learn, without applying outdated rules that don’t fit today’s technology.”
The senator talked about the Predictive Data Analytics Rule proposed by Gary Gensler, former chair of the SEC, which would have required broker-dealers and investment advisers to identify and address conflicts of interest arising from their use of predictive data analytics and similar technologies.
Firms would also have been required to eliminate or neutralize any conflicts that place their interests ahead of investors’ and to maintain written policies ensuring compliance. Gensler had said the rule would require that, “regardless of the technology used, firms meet their obligations not to place their own interests ahead of investors’ interests.”
The Gensler-proposed rule was “the wrong approach,” according to Rounds, who said the rule would have “imposed sweeping, unclear restrictions on financial firms developing or deploying AI without a workable framework.”
“That rule would have slowed innovation, raised compliance costs and blocked out smaller players,” Rounds added, calling for regulatory frameworks that support both innovation and consumers, echoing the approach the Trump administration has promoted under its new action plan.
The bills being reintroduced in the House and Senate would allow select Federal financial regulatory agencies to experiment with AI tools in an AI “regulatory sandbox,” so long as the tools pose no risk to national security and are consistent with anti-money laundering and countering-the-financing-of-terrorism obligations.
The tools must also provide a public benefit, through improved access to financial services or stronger consumer protection, and enhance efficiency, according to the bill text from the previous Congress’s version of the legislation.