The Federal government needs to pay close attention to ethics and trust as it dives further into implementing artificial intelligence (AI) technologies, said Rep. Robin Kelly, D-Ill., at a Nov. 16 event hosted by the Information Technology Industry Council.

The congresswoman, who has long been involved in technology issues and serves on both the House Energy and Commerce Committee and the House Oversight and Reform Committee, said that the focus on trust and ethics is a critical component of the “developing AI conversation.”

She cautioned that if the average person doesn’t trust AI-driven systems, or if systems are deployed recklessly, then AI could cause real harm to people – particularly people of color – while also severely damaging the progression of the technology.

“Data shows that people who look like me are most likely to be negatively impacted by AI,” said Rep. Kelly, who is African-American. “That’s why attention to unintended bias is critically important, so that prejudices in the real world are not transferred to the digital world.”

The first step to ensuring the responsible and ethical progression of AI is clarity, Rep. Kelly explained. She recalled that, in speaking with different experts during congressional committee hearings, she repeatedly heard different terminology for transparency, interoperability, and ethics in AI.

“Getting the terminology right is so important, as we’re still wrestling with definitions and vocabulary around AI, but there is progress in this step. The [National Institute of Standards and Technology] AI Framework has helped,” Rep. Kelly said.

“I hear from companies that want to be good actors, but uncertainty and a lack of clarity make it a difficult field to navigate,” she added.

In addition, Rep. Kelly explained that the responsible and ethical progression of AI is not just an American responsibility, but a global one. However, the U.S. has fallen behind, she said.

“What we don’t want to happen is to create very different sets of laws and definitions … around AI. We tend to work at different speeds, but generally, the U.S. Congress is playing catch up … on digital laws, but there is a desire to get this right,” she said.

Rep. Kelly highlighted the European Union’s (EU) General Data Protection Regulation (GDPR) – a regulation in EU law on data protection and privacy in the EU and the European Economic Area – as an example of the type of legislation the U.S. should be striving to pass.

“We’re still working to pass our version of the GDPR,” Rep. Kelly said. “The federal privacy law we passed out of the House Energy and Commerce Committee takes some of those first concrete steps to put guardrails around AI’s potential harms.”

The American Data Privacy and Protection Act – which the Energy and Commerce Committee passed by a 53-2 vote this summer – would create national standards and safeguards for personal information collected by companies, including protections intended to address potentially discriminatory impacts of algorithms. It would also require certain impact assessments and design evaluations.

“Getting the details right is important … and we want to pass something that will have the desired result, make people safer online, and allow for AI to develop in a responsible manner that is not overly burdensome for companies,” Rep. Kelly said.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.