The White House recently released its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI EO) to help establish standards that protect consumers and workers while encouraging further AI innovation. Federal technology leaders are likewise working to balance the desire for innovation with generative AI tools against the need to maintain public and employee trust. MeriTalk recently sat down with Tifani O’Brien, vice president and AI/machine learning accelerator lead at Leidos, to discuss how agencies can overcome barriers and build trust in AI to reap the benefits the technology can deliver.

MeriTalk: We’ve seen a lot of press recently about the use of generative AI tools like ChatGPT. In July, the Biden administration announced it had secured voluntary commitments from leading technology companies to manage the risks posed by AI, and followed that up with the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (AI EO) to address AI concerns and establish standards. How is public perception impacting generative AI innovation? How about AI adoption across government?

O’Brien: While adoption of generative AI technology has been really fast, the publicity about the risks is definitely having an impact on its use. The new AI EO and industry commitments show that both government and the private sector are taking the public’s concerns seriously. We will still see innovation with AI, but that innovation will include a significant focus on building guardrails around the technology and addressing the concerns around safety, security, transparency, and trustworthiness.

As for government adoption, agencies have been using other forms of AI for a number of years. While they have made some initial strides with generative AI to show that they can use it, they are being cautious right now. They will work through the new guidance before rolling out the next generative AI solutions across their environments.

MeriTalk: With generative AI, government can use deep learning methods to generate new content and synthetic data that can be used to build predictive models that inform decisions. What are some potential real-world use cases for generative AI in government?

O’Brien: Generative AI has immense capabilities to make human workflows more efficient. For example, in the Department of Defense (DoD), humans typically create materials that help people understand DoD doctrines and available training. Generative AI can present these materials in ways that are much more accessible and understandable. It can also create the first draft of reports or forms that pull data from different areas. Once the form is created, a human can review it, and that saves time.

The technology can also be used to create realistic test data and digital twins. This allows teams to test new software solutions or new code more efficiently and with more accurate data so problems can be addressed before moving into production. This not only accelerates delivery time, but also helps ensure what is pushed into production is more secure.
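
As a minimal sketch of that idea (assuming the `openai` Python SDK, an API key in the environment, and an illustrative ticket schema, none of which come from the interview), a test engineer might prompt a model to emit structurally realistic but entirely synthetic records:

```python
# Sketch: using a generative model to produce synthetic test records.
# Assumes the `openai` SDK and an OPENAI_API_KEY in the environment;
# the schema and model name are illustrative, not from the interview.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Generate 5 synthetic service-desk tickets as a JSON array.
Each ticket needs: id, submitted_at (ISO 8601), category, priority (1-4),
and a one-sentence description. Use only fictional names and systems."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
)

# A real pipeline would validate (and repair) the JSON the model returns
# before loading it into a test database.
tickets = json.loads(response.choices[0].message.content)
for ticket in tickets:
    print(ticket["id"], ticket["priority"], ticket["category"])
```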

Another exciting area where agencies can use generative AI is with modernization initiatives. Generative AI can create documentation that describes the code and design of legacy systems that can then be used to build the new modern architecture.
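
The interview doesn't detail the mechanics, but a minimal sketch of that workflow, assuming the same `openai` SDK and a hypothetical COBOL source file, could look like this:

```python
# Sketch: asking a generative model to document a legacy program as a first
# step in modernization. The file path and model name are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
legacy_source = Path("legacy/payroll_batch.cbl").read_text()  # hypothetical file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Describe this program's inputs, outputs, and business rules "
                   "so an engineer can re-implement it in a modern architecture:\n\n"
                   + legacy_source,
    }],
)
print(response.choices[0].message.content)  # draft design doc for human review
```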

Generative AI can also facilitate better human-machine teaming. For example, the DoD could choose to fly a drone into a dangerous area instead of sending a human pilot, and generative AI lets the operator interact with the drone more naturally. The drone can follow the nuances of what the human tells it to do because it understands the language better.

MeriTalk: Many people may not realize that government agencies are already using generative AI tools in their operations or pursuing generative AI implementations. What are some ways that the technology is currently being used in government, and what benefits are those agencies realizing?

O’Brien: Intelligence agencies are using the technology to quickly summarize daily news and world events to help personnel stay informed. Chatbots are being used by multiple agencies to aid in their service desk support and to improve customer service. The Department of Veterans Affairs uses generative AI to recommend when a patient should be seen by a specialist. Having AI that helps the human processor understand important and relevant details and quickly make recommendations can make that work much faster.

MeriTalk: What advice do you have for agencies as they build generative AI technology into their mission requirements?

O’Brien: It’s important to think through and address the risks up front. For example, if a risk of using generative AI is exposing private information, agencies can take action to avoid that risk by anonymizing the data before it’s ever accessed by the AI. Considering the risks and addressing them at the outset will lead to a more successful deployment of the technology.
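
As a rough illustration of that safeguard, the sketch below scrubs obvious identifiers with regular expressions before any text reaches a model. The patterns and placeholders are illustrative, not from the interview, and a production system would typically add NER-based PII detection on top:

```python
# Sketch: redact common PII patterns before text is sent to a generative model.
# Regex-only redaction is illustrative; it will miss context-dependent PII.
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace matched identifiers with typed placeholders like [SSN]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

record = "Contact John at john.doe@example.gov or 555-867-5309, SSN 123-45-6789."
print(anonymize(record))
# -> Contact John at [EMAIL] or [PHONE], SSN [SSN].
```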

If an agency is new to generative AI, it’s also important to start with a lower level of automation before moving up to a fully automated project. Start with a project where the AI assists humans with decisions by streamlining parts of the process that a human could do but the AI can do faster. When you achieve success at that level of automation, dial it up to have the AI do things that a human can’t, such as looking through a vast amount of data quickly; the AI is still providing information to humans, who ultimately make the final decision. From there, agencies can move to a fully automated project where the AI performs tasks and makes decisions. This phased approach helps agencies ensure that they understand the AI, know how to use it, and can trust the results.

MeriTalk: How should agencies plan for eventual enterprise-scale adoption of generative AI?

O’Brien: Agencies should take the time to build a thoughtful and comprehensive AI governance policy that outlines the potential risks and guides people on how to overcome them – with examples. Educating teams about the technology is also critical for its success. People may be worried about the impact on their jobs or their lack of training or understanding of the technology, and therefore resist using it. It’s important for leaders to educate their workforce on the new tools so they can trust the technology.

MeriTalk: Leidos recently launched its Trusted GenAI campaign. Can you tell us more about why you developed the campaign and what the campaign’s goals are?

O’Brien: Leidos has been working with large language models for several years, but we needed a way to demonstrate how we could deploy generative AI use cases quickly for government and commercial organizations that have special needs or specific concerns about the security of their sensitive data. Through the Trusted GenAI campaign, we developed a set of priority use cases requiring secure data, with measurable results and proof points, so we could demonstrate secure, trusted deployments and show that we can help agencies overcome any barriers standing in the way of using this technology.

MeriTalk: As an early AI innovator, how can Leidos help government agencies realize the full potential of generative AI?

O’Brien: Leidos delivers trusted AI in a way that offers transparency, robust testing, and security, which are all key elements in the new AI EO. Leidos has years of experience working with the technology and can help agencies monitor the outputs over time to ensure the AI models haven’t drifted, explain how particular decisions were made by the AI, and support testing the AI to ensure it can adapt to new data and conditions. Leidos also measures the impact of AI tools on human workflows and can recommend generative AI model types based on the need and intended outcomes. Leidos works with agencies to develop the right technical and policy guardrails to help them gain value from the AI while mitigating risks.
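
The interview doesn't describe Leidos's monitoring tooling, but one common, generic drift check compares the distribution of recent model outputs (for example, confidence scores) against a reference window using the population stability index (PSI). A minimal sketch, with conventional rule-of-thumb thresholds rather than anything Leidos-specific:

```python
# Sketch: population stability index (PSI), one common statistic for
# detecting drift between a reference distribution and recent outputs.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_cur - p_ref) * ln(p_cur / p_ref)) over shared bins."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero / log(0) in sparsely populated bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.70, 0.10, 5000)  # e.g., last quarter's confidence scores
recent = rng.normal(0.62, 0.12, 5000)    # this week's scores, shifted lower
print(f"PSI = {psi(baseline, recent):.3f}")  # > 0.25 often flags significant drift
```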
