The Government Accountability Office (GAO) is looking to publish by the end of June an internal AI Oversight framework that will be a first step in a “trust, but verify” development of AI solutions across the agency.

That was the word from Taka Ariga, Chief Data Scientist and Director of the Innovation Lab at GAO, who spoke today at the FCW Data and Analytics Summit.

“We’re looking to publish our first AI oversight framework, and not necessarily positioned as an end-all be-all, but as a first step in the conversation around trust but verify,” said Ariga. “We’re all, I think, more than happy to sort of trust the development of various AI solutions that are being implemented across the executive agency. But GAO fundamentally is in the business of verification.”

GAO will use the framework to operationalize key principles, serve as a blueprint for oversight entities evaluating AI, and co-evolve as AI technology advances. Ariga said that adopting new technologies itself will help the agency evaluate the technologies being implemented elsewhere across the government.

“You know one of the unique aspects of GAO as an oversight entity is that we do operate in a duality, so much as with any organization, we’re looking to develop an AI solution, we’re looking to develop a blockchain solution, we’re looking to develop [robotic process automation], and any number of emerging capability to help our mission teams and our operation teams work better, more effectively, more efficiently, but as an oversight entity,” said Ariga.

He added that “we also need to figure out how we can audit them and assess them, and the time for us to figure out the ‘how’ is not when we see the congressional requests.”

Jordan Smith is a MeriTalk Staff Reporter covering the intersection of government and technology.