The Senate Homeland Security and Governmental Affairs Committee on June 14 voted to approve the Transparent Automated Governance Act, which would require Federal agencies to notify people when they are interacting with – or subject to critical decisions made using – artificial intelligence (AI) and other automated systems.
The bill, sponsored by committee Chairman Gary Peters, D-Mich., was approved on a 10-1 vote, sending it to the full Senate for further consideration.
In addition to the notification requirement, the bill would direct Federal agencies to establish an appeals process “that will ensure there is a human review of AI-generated critical decisions that may negatively affect individuals,” Sen. Peters said when he introduced the bill last week. The measure is cosponsored by Sens. James Lankford, R-Okla., and Mike Braun, R-Ind.
The lone vote against the bill was cast by Sen. Rand Paul, R-Ky., who said the measure would “lead to an expansion of the Federal bureaucracy.”
“Artificial intelligence is already transforming how Federal agencies are serving the public, but government must be more transparent with the public about when and how they are using these emerging technologies,” said Sen. Peters last week. “This bipartisan bill will ensure taxpayers know when they are interacting with certain Federal AI systems and establishes a process for people to get answers about why these systems are making certain decisions.”
The bill calls for the Office of Management and Budget to issue guidance to Federal agencies on how to implement the transparency practices related to use of AI and other automated systems.
In support of the legislation, Sen. Peters’ office pointed to a recent study which “found that the Internal Revenue Service used an automated system that was more likely to recommend Black taxpayers than white taxpayers for audits.”
“People who unknowingly interact with AI can often be confused or frustrated by how or why these systems make certain determinations,” his office said.