Witnesses at a House Oversight and Accountability subcommittee hearing on March 8 urged lawmakers to be proactive about artificial intelligence (AI) technologies and establish guidelines and protections so that AI-based systems are developed and deployed responsibly.

The House Subcommittee on Cybersecurity, Information Technology, and Government Innovation held a hearing with academic and industry AI experts to explore how the United States can harness and integrate AI while managing risks inherent with use of the technology.

Chairwoman Nancy Mace, R-S.C., opened the hearing by noting the importance of embracing practical applications of AI while weighing the technology's impacts on society. She also emphasized the important role the Federal government plays in integrating AI technology.

The integration of AI “will require collaboration between government, industry, and academia” to ensure the technology is developed and deployed in a way that is “reliable, trustworthy, and aligned with public policy goals,” Rep. Mace said. She closed by revealing that her opening remarks had been written by ChatGPT – an AI chatbot – making her one of a small group of lawmakers to use the program to produce congressional content.

Witnesses broadly agreed with the chairwoman’s assessment of AI’s benefits and the need for partnerships in deploying the technology. They were also unanimous that guardrails for AI must be in place before widespread deployments occur.

Merve Hickok, chairwoman and research director at the Center for AI and Digital Policy, told lawmakers that the U.S. isn’t ready for the imminent “AI Tech Revolution.”

“We do not have the guardrails in place, the laws that we need, the public education, or the expertise in government to manage the consequences of the rapid changes that are now taking place,” she said. “Internationally, we are losing AI-policy leadership. Domestically, Americans say they’re more concerned than excited about AI.”

Hickok explained that if the U.S. wants AI systems to align with national values and serve everyone, greater accountability and transparency will be needed. Policymakers around the world, she noted, have already made clear the need for fairness, accountability, and transparency in AI systems; the challenge ahead is implementation.

“Both governments and private companies know that public trust is a must-have for further innovation, adoption, and expansion,” Hickok said.

However, Dr. Eric Schmidt, former chief executive officer of Google and now chair of the Special Competitive Studies Project, pointed out that while the advancement of AI may be inevitable, its ultimate destination is not.

“We must define our partnership with AI and shape the reality it will create together. Most importantly, we must shape it with our democratic values,” he said.

Specifically, Schmidt recommended that lawmakers follow three basic principles to ensure AI-based systems and technology are responsibly developed and deployed.

  • First, AI platforms must, at minimum, be able to establish the origin of the content published on them;
  • Second, AI platforms must know who is behind each user or organization profile on the platform; and
  • Third, AI systems must publish, and be held accountable to, the algorithms they use to promote and select content.

Dr. Aleksander Madry, director at the MIT Center for Deployable Machine Learning and Cadence Design Systems Professor of Computing, added that seizing the AI momentum means having an increasingly important conversation about what AI should look like in a democratic society and how to mitigate associated risks.

Specifically, Madry urged lawmakers to identify the risks associated with AI and develop clear and actionable ways to mitigate them.

In addition, Madry explained that it’s “critical that we pay attention to the emerging AI supply chain. This chain engenders several reliability and regulatory challenges. It will also structure the distribution of power in an AI-driven world.”

“We are at an inflection point in terms of what future AI will bring,” Madry said. “Seizing this opportunity requires discussing the role of AI in our society and nation, what we want AI to do – and not do – for us, and how we ensure that it benefits us all. This is bound to be a difficult conversation, but we do need to have it, and have it now.”

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.