New Congressional Report on AI Urges More Federal Engagement


Reps. Will Hurd, R-Texas, and Robin Kelly, D-Ill., today released a new white paper on artificial intelligence (AI) that urges the Federal government to increase its engagement with the technology. The paper focuses on four key issue areas: workforce, privacy, biases, and malicious use of AI, and provides concrete recommendations for each area.

“Underlying these recommendations is the recognition the United States cannot maintain its global leadership in AI absent political leadership from Congress and the Executive Branch,” Reps. Hurd and Kelly said in the paper. “Therefore, the Subcommittee recommends increased engagement on AI by Congress and the Administration.”

The white paper follows a three-part hearing series on AI that the House Oversight and Government Reform Subcommittee on Information Technology held earlier this year; Rep. Hurd chairs the subcommittee and Rep. Kelly serves as the ranking member. Through the hearings, Reps. Hurd and Kelly said they “examined a number of challenges facing AI” and used the expert testimony they heard to develop the paper.

Workforce Concerns

A common refrain regarding the technology is that AI-driven automation will lead to the loss of jobs. In the paper, the subcommittee urged Federal, state, and local agencies to “engage more with stakeholders on the development of effective strategies for improving the education, training, and reskilling of American workers to be more competitive in an AI-driven economy.” The subcommittee also called on the Federal government to “lead by example” and invest in education and training programs to help students, as well as the current workforce, gain the skills needed to succeed in AI-related jobs.

Protecting Privacy

Privacy concerns with AI revolve around the use of vast quantities of personal data to power AI’s algorithms. The paper cited conflicting advice the subcommittee received during the hearings, explaining that one witness said “that companies need to adopt more stringent safeguards in the design and development of their AI systems;” while another witness said “that rather than trying to regulate all AI-related privacy issues under one umbrella, regulations should be tailored to individual AI applications.” In the end, the paper recommended that “Federal agencies should review federal privacy laws, regulations, and judicial decisions to determine how they may already apply to AI products within their jurisdiction, and–where necessary–update existing regulations to account for the addition of AI.”

Addressing Biases

“As AI systems rely upon larger and larger quantities of data, the risk increases that the data sets may knowingly or unknowingly contain biases,” the white paper explained. “There are legitimate concerns that if an AI system is trained on biased data, the AI system will produce biased results.” The subcommittee said the key to addressing biases is transparency. In addition to transparency, the white paper said that “Federal, state, and local agencies that use AI-type systems to make consequential decisions about people should ensure the algorithms supporting these systems are accountable and inspectable.” Reps. Hurd and Kelly called on all levels of government to work with academic institutions, non-profit organizations, and the private sector to understand how to identify biases in AI systems, remove biases through technology, and account for biases.

Malicious Use of AI

The United States isn’t the only entity interested in harnessing the power of AI, and adversaries see the potential of using AI in a malicious manner to conduct cyberattacks and exploit cyber vulnerabilities.

On a basic level, the white paper recommended taking more active steps to “consider the ways in which [AI] could be used to harm individuals and society, and prepare for how to mitigate these harms.”

The subcommittee believes that the U.S. government cannot take a passive role in AI; instead, it must be engaged and take an active role in helping advance AI innovation in the United States.

“The government has an essential role to play in securing American leadership in AI,” the paper concluded. “Fulfilling this role will require balancing the creative energy of innovative Americans whose knowledge and entrepreneurial spirit have driven the development of this technology with regulatory frameworks that protect consumers. To ensure the appropriate balance is met, it is vital Congress and the Executive Branch continue to educate themselves about AI, increase the expenditures of R&D funds, help set the agenda for public debate, and, where appropriate, define the role of AI in the future of this nation.”
