
Federal officials from the Library of Congress and the Department of Energy (DOE) said their agencies are focusing on deploying artificial intelligence (AI) systems that are low risk but deliver measurable value – emphasizing responsible experimentation and strong data over novelty.
While many federal agencies are expanding AI use cases, the Library of Congress has concentrated its deployment on internal applications that are low risk but “still can deliver high value,” said Natalie Buda Smith, director of digital strategy at the Library of Congress, during the GovAI Summit on Oct. 27.
“It’s not that we’re trying to sell more things or … generate more interactions. We want to ensure that we’re delivering on our mission in a more effective way, so ideally, [AI tools] that are aligned with the goals and are low risk,” said Smith.
Part of deploying AI, Smith explained, is evaluating how the technology can scale, noting that what works well for smaller agencies may not translate to larger ones.
“So, we spent a lot of time thinking through, how do we explore and experiment? [We] understand what is the risk, what is the value?” she added.
Craig Haseler, technical lead for the DOE’s System of Record for Capital Asset Project Performance Information, said the department also prioritizes user feedback to refine its AI models and determine what approaches are most effective.
“We’ve actually developed a system of emails where we’re taking our user – basically like the chat box from people’s engagement with our AI chatbot – running [it] through a bunch of additional LLM (large language model) calls and saying, ‘Did this user get what they wanted out of this conversation?’” Haseler explained. “We can look at that over time and see new capabilities.”
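The pattern Haseler describes, feeding chatbot transcripts back through additional LLM calls that judge whether each user got what they wanted, can be sketched roughly as below. This is a hypothetical illustration, not DOE's actual pipeline: the `Transcript` structure, the judge prompt wording, and the `keyword_judge` stand-in (which a real deployment would replace with an actual LLM API call) are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Transcript:
    """Paired user/bot turns from one chatbot conversation (hypothetical schema)."""
    user_turns: List[str]
    bot_turns: List[str]

# Judge prompt echoing the question Haseler cites; exact wording is invented here.
JUDGE_PROMPT = (
    "Did this user get what they wanted out of this conversation? "
    "Answer YES or NO.\n\nTranscript:\n{transcript}"
)

def render(t: Transcript) -> str:
    """Flatten a transcript into plain text for the judge prompt."""
    lines = []
    for user, bot in zip(t.user_turns, t.bot_turns):
        lines.append(f"User: {user}")
        lines.append(f"Bot: {bot}")
    return "\n".join(lines)

def score_transcripts(transcripts: List[Transcript],
                      judge: Callable[[str], str]) -> float:
    """Run each transcript through a judge call and return the satisfaction rate."""
    satisfied = 0
    for t in transcripts:
        verdict = judge(JUDGE_PROMPT.format(transcript=render(t)))
        if verdict.strip().upper().startswith("YES"):
            satisfied += 1
    return satisfied / len(transcripts)

def keyword_judge(prompt: str) -> str:
    """Placeholder judge: a real system would send the prompt to an LLM instead."""
    return "YES" if "thanks, that answered it" in prompt.lower() else "NO"

convos = [
    Transcript(["Where do I file form X?", "Thanks, that answered it"],
               ["Submit it via the capital assets portal.", "Glad to help."]),
    Transcript(["What is the project budget cap?"],
               ["I'm not sure I can answer that."]),
]
rate = score_transcripts(convos, keyword_judge)  # 1 of 2 judged satisfied -> 0.5
```

Tracking this satisfaction rate over time is what lets a team "look at that over time and see new capabilities," as Haseler put it: a drop or jump in the score flags where the chatbot needs new features or fixes.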
Both officials agreed that data is critical to developing low-risk, high-reward AI systems.
“A lot of times we’ll build an application in two weeks, and it’s wonderful, but it will really showcase the two years of getting the data ready in order to use it in an application,” said Smith. “Data is something that is a long-term project for us.”
Meanwhile, Haseler cautioned against taking data at face value.
“Don’t trust the data. Go have a conversation with a trashy computer … ask about something you are an expert on and think back on anything it gets wrong and then ask about something that you don’t know anything about and be amazed at how it’s perfectly accurate,” he said.
To encourage employee buy-in, Haseler added that leaders should recognize when AI is and isn’t the right solution.
“Fortunately for me, in some ways, I rarely have anyone come to me and say, ‘You must use AI to do this, you must solve this problem.’ There’s a little bit of, ‘Hey, AI exists, you should make sure you’re using this,’ and obviously I want to do that because it makes everything easier when it’s appropriate,” said Haseler.