Four months have passed since the Biden-Harris administration unveiled its AI executive order (EO), tasking more than 50 Federal entities to take more than 100 specific actions to implement guidance covering eight broad areas of policy.

As agencies race to keep up with the demanding deadlines the EO has set, how can they use tools such as generative AI to accomplish these ambitious goals, effectively train workers in AI skills, and take advantage of the broader opportunities this historic order presents?

As directed by the White House’s draft AI guidance, Federal agencies have begun appointing chief AI officers (CAIOs) to lead their organizations in responsible AI use.

The Department of Labor (DoL) in January tapped its Deputy Chief Information Officer Lou Charlier to serve in the dual-hatted role of CAIO.

“The executive order takes a whole government approach, and it’s been put in place to help the Federal government manage the risks and harness the benefits of AI,” Charlier said during MeriTalk’s webinar, “The AI EO: How Federal Agencies Can Achieve Its Ambitious Goals.” He continued, “It really creates a starting point for the Federal government to establish governance that includes protecting Americans’ privacy, supporting workers, and ensuring responsible and effective government use of AI.”

“One of the things that it did is it required the agencies to designate a chief AI officer,” Charlier said. “The chief AI officer’s role is to promote innovation, coordinate trustworthy adoption of AI, and make recommendations related to Federal procurement and AI use within our agency and across the Federal agencies.”

The CAIO said DoL has already established an AI Center of Excellence and is forming an AI Advisory Board with other Federal agencies to help implement the EO.

“So, while it’s pretty broad ranging and ambitious, I think [the EO] sets the right time for the Federal government to move forward to make sure that we’re using AI in the best possible way to support the American worker,” Charlier said.

All of the AI experts on the panel noted that 2024 will be an important year for Federal leaders to set the stage for AI and its advances for the next decade and beyond. In particular, they pointed to the workforce as one major challenge that must be addressed now.

“In essence, 2024 will focus on enabling the foundations, ecosystems, and initial coordinating work to set up the longer-term horizon objectives of the order, while demonstrating momentum and continued investment in the administration’s AI priorities,” said Tony Holmes, practice lead for solutions architects public sector at Pluralsight, a technology workforce skills provider.

Holmes noted that it will be critical for Federal agencies this year to update procurement protocols and human resources policies to enable faster hiring and acquisition of AI tools and talent across agencies, and to ramp up skills-building programs for the Federal workforce.

The central challenge, Holmes said, is AI knowledge gaps, compounded by difficulties in talent recruitment and retention. “The AI market is highly competitive, and agencies struggle with hiring and keeping qualified personnel,” he said.

Holmes recommended partnering with industry on talent exchange programs and making a wholesale investment in training Federal workers in AI skills. He said the effort should prioritize reskilling current employees over hiring new talent.

“Adaptable training programs allow you to tailor the curriculum and leverage existing knowledge,” Holmes said. He suggested that agencies focus training on both AI foundations and practical applications – and make training widely available.

“Widespread internal AI expertise will be critical to driving accountability and responsibility in deployment,” Holmes said. “Training benchmarking and outcomes should be part of a broader set of success metrics tracked over time. Without an AI-literate workforce, policies on paper may have little practical impact.”

Holmes also pointed to additional challenges, such as budgetary constraints as AI ambitions grow, as well as data and infrastructure needs, because many agencies “lack modern data management, computing power, and tools required to support advanced AI models.” Industry can provide technology resources, platform access, and guidance on building scalable data pipelines, he noted.

DoL’s Charlier agreed that the department is facing challenges around data, specifically how to avoid exposing critical government data while trying to leverage AI tools.

“We’ve established an in-house generative AI platform that our staff can use. They’re anxious to use these tools to be more productive, but we want to make sure that we’re protecting the data,” Charlier said. “AI can be that tool that helps us to make sure that we’re securing the data that we have to protect for the American workers.”

Elastic Solution Architect Dave Erickson noted that as generative AI tools like ChatGPT come into use, culture within the Federal government must play catch-up.

“[AI] is not a proxy for your responsibility and accountability, being the human in the loop for this process. And if we do this for the next couple of years, I think culture is going to catch up quite a bit, and we’re going to learn what are the right uses of these tools,” Erickson said.

“Hopefully, we’re all going to be more educated on some of the things that are not engineering or anything,” he said. “I have a whole lot of experts on my team that have got some very fancy engineering and science and mathematics degrees, and some of the people that are getting this the quickest are those with good liberal arts educations, that understand civics and paid attention in those places, and are getting it quicker, and I think it’s going to take both sides of the brain there to bring it all together and help us reach the outcome that we want.”

Watch the full webinar, “The AI EO: How Federal Agencies Can Achieve Its Ambitious Goals,” here and download the accompanying report, The AI EO: How Its Ambitious Goals Can Be Realized, here.