The race to develop regulations for artificial intelligence technologies is on, and members of the House of Representatives are hoping to set up their own AI working group this month to help craft comprehensive regulations.

At a Washington Post Live event today in Washington, D.C., Reps. Don Beyer, D-Va., and Marcus Molinaro, R-N.Y., said they’re hoping a House AI working group will bring some movement to Congress’s many AI proposals.

“The AI caucus was primarily put together to educate the members of Congress and their staff on what AI is and how it’s developing,” Rep. Beyer said. “Kevin McCarthy, when he was still speaker, put together an informal working group – including Marcus and myself – to try to actually bring bills to the floor, to pass bills this year.”

“The new speaker, Mike Johnson, I believe it’s his intention to stand up this bipartisan working group to make things happen,” he said, adding, “We’re hoping that will happen this month.”

Senate Majority Leader Chuck Schumer, D-N.Y., is leading his own AI working group, which has hosted insight forums with top tech CEOs and labor and civil rights leaders to discuss possible regulations for the rapidly evolving technology.

While Rep. Beyer said he doesn’t believe the House risks falling behind the Senate in the AI space, he did say that Congress does “risk falling way behind the American people.”

“It has been a year of unnecessary and unlimited distractions, and I wish that we were a bit more advanced on some of these policies,” added Rep. Molinaro. “Too often, we are too far behind. This last year has really caused us to be even further behind.”

“I think the speaker accepts both the need to get the working group up and functional to move legislation, and he also understands both the potential benefits and risks of AI and establishing the basic framework,” Rep. Molinaro said.

Both congressmen said they are concerned about AI’s impact on jobs, as well as risks the technology poses to election security, privacy, and intellectual property.

“All of that is at risk, and we really need to create the framework and the guidelines to protect ourselves from ourselves,” Rep. Molinaro said. “My concern, ultimately, is that we will be well too late and that lives will be horribly impacted because we didn’t establish those guidelines.”

“Is it too late for 2024? Likely not,” he added, referring to protecting the 2024 elections from AI risks such as deepfakes. “I think that there are regulatory restrictions … but this is a space that we certainly have to come to some formal agreement on because it is about protecting democracy.”

Last month, Reps. Beyer and Molinaro – alongside Reps. Ted Lieu, D-Calif., and Zach Nunn, R-Iowa – introduced the Federal AI Risk Management Act, a bipartisan and bicameral bill to require U.S. Federal agencies and vendors to follow the AI risk management guidelines put forth by the National Institute of Standards and Technology (NIST).

The bill would require Federal agencies and vendors to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with the technology.

Rep. Beyer said that the group of lawmakers believes that NIST has established “the best international standard of what AI should be.” However, he added that NIST “apparently has two and a half full staffers on this,” so “they might need a little more resources, especially with the challenge we’ve given them.”

Those NIST standards, along with President Biden’s recent AI executive order, are “broad and necessary” steps to help develop AI regulations, according to Rep. Molinaro.

White House in Support of AI Regulations

Separately during the event, Anne Neuberger, the White House’s deputy national security advisor for cyber and emerging technologies, agreed that the United States needs to develop AI regulations.

“There are real risks about AI, and as the president has said, we do need regulation to ensure the controls are in place so we can use the technology responsibly,” Neuberger said.

Last year, 15 companies met with the White House to sign voluntary commitments around a set of practices related to managing the risks of AI. That set of voluntary commitments, Neuberger said, “was the first of three steps we see in the overall U.S. governance approach.”

“The first step was the voluntary commitments. The second, the president’s executive order, which goes to the line of what we can do under current law. The third is potential regulation, which the Hill is working on under Leader Schumer – really bipartisan efforts.”

“There have been some initial bills, but bipartisan efforts to determine what are new laws we need for this very new space,” she added. “We have an equal obligation to put the same creative efforts to ensuring we’re deploying [AI] in a safe, responsible way.”
