Taking a human-in-the-loop approach is crucial to building trust in AI, Federal experts said today, but they stressed that agencies also need data and AI literacy programs to ensure those humans are adequately trained.


At ATARC’s Artificial Intelligence and Data Analytics Breakfast Summit on April 6, Federal AI and data experts shared how the human-in-the-loop approach is helping to build trust in ethical AI technologies at their agencies.


Scott Beliveau, branch chief of advanced enterprise analytics at the U.S. Patent and Trademark Office, explained that trust is paramount to AI adoption, comparing the technology to the now-common use of navigation apps such as Google Maps or Apple Maps.


At a certain point, Beliveau explained, we learned to trust the app and stopped questioning it when it tells us to turn left rather than right. The question now, he said, is whether AI has reached the point where people will give it that same chance.


“In our agency, we are kind of giving [AI] a chance, and the way we’re doing that – analogously to driving – we’re keeping the person in the loop,” Beliveau said. “The fundamental decision as to whether to go left or right is absolutely in the driver’s seat… and that’s how we’re trying to mitigate that bias as well as build that trust.”


The experts agreed that keeping a human in the loop is helpful, but also stressed the importance of training that human so that they have a deep understanding of AI.


“We definitely need data literacy and AI literacy to be the key to this whole thing,” said Suman Shukla, head of the Data Management Section at the U.S. Copyright Office. “If I’m not aware of where the whole AI process is going or I cannot understand my own data and how to use it, it will make a huge impact.”


“If you have a human in the loop and that human doesn’t know what they’re doing, or worse, knows just a little bit about what they’re doing, that’s functionally not having a human in the loop, or worse, just having human-in-the-loop creating problems,” added Anthony Boese, the interagency programs manager for the Department of Veterans Affairs (VA) National AI Institute.


Boese added that he fears that if the human-in-the-loop approach looks too consistent or rudimentary to an AI system, the system could find ways around it. He therefore encouraged agencies to build systems in which every component is accessible to humans.


“When I build systems, I try to make sure that there’s opportunities for a human to walk around and tap any part of the system at any point of need – almost like barrels sitting in a winery or a distillery getting sampled as they get closer and closer to readiness,” he said. “Complete access by informed individuals is the best way forward.”


To continue to build trust in the human-in-the-loop approach, agencies need best practices, according to Chakib Chraibi, the chief data scientist at the National Technical Information Service in the Department of Commerce.


“The human in the loop is very important, but we need to develop some best practices” beyond the ones that already exist, Chraibi said.


“They provide us with a good baseline, but obviously it’s not enough because we haven’t resolved the issue of the discriminatory effect of applications,” he added. “There has to be a concerted effort and best practices in place to be successful.”

Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.