The Department of Defense (DoD) needs new guidance aimed at better securing artificial intelligence (AI) systems and data in the department, according to a top DoD official.

During MeriTalk’s Accelerate AI Forum in Washington, D.C., David McKeown, DoD deputy chief information officer for cybersecurity and chief information security officer (CISO), explained that while the department sees the value in AI, there are still serious cyber risks associated with the technology that need to be addressed.

“At the DoD, we do not want to be ‘no’ people when it comes to deploying AI technologies … the department is not known for being an innovator, we acquire innovation. We look to our industry partners to bring AI solutions to the table that will serve our needs,” he said.

But before the department can effectively deploy acquired AI solutions, there must be standards in place to ensure the systems are secure and the data is protected. Existing cybersecurity compliance processes and guidance at the department are not sufficient on their own, McKeown explained.

“People already despise [our] cyber compliance guidance without us adding anything else to them. They are relevant and some of those cybersecurity concerns related to AI augment [the department’s] cybersecurity guidance to make sure that any system built using AI or an AI system is covered, and information is protected,” he said. “But there’s just so much more related to AI that I think we need a higher-level construct.”

McKeown pointed to the AI Risk Management Framework (RMF), developed and released by the National Institute of Standards and Technology (NIST), as a solid example of the kind of guidance needed to protect AI systems and data.

“The AI RMF brings people to the table to talk about what are the benefits, opportunities, and potential harm of using AI to people, organizations, and ecosystems,” said Martin Stanley, the strategic technology branch chief at the Cybersecurity and Infrastructure Security Agency (CISA).

Dave Erickson, public sector distinguished architect at Elastic, concurred that the Pentagon, like other Federal agencies, needs AI guidance for managing threats, especially as the industry enters an “AI gold rush.”

“In this ‘gold rush’ mentality, we will have a lot of folks trying to sell an [AI] infrastructure,” Erickson said. “And in a way, I’m hoping that this AI wave brings us back to the use case and brings us back to thinking about securing the mission and understanding what we’re actually trying to protect.”

However, he warned against creating complicated compliance standards that hinder the deployment of AI at higher security levels in the Federal government.

“I am excited, moving forward with this idea of a risk-based process. My broader concern though is to keep thinking about permission. Are there too many burdens from agencies that might inhibit this adoption? We need the right guardrails that also allow for effective use of AI systems where needed,” Erickson said.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.