The National Institute of Standards and Technology (NIST) is looking to dive deep into the full artificial intelligence tech stack in its forthcoming “Community Profile,” which aims to give organizations a framework for securing AI systems and using them for defense. 

The new Cyber AI profile is based on the NIST Cybersecurity Framework (CSF) and will provide guidance for the cybersecurity community on security risks associated with AI use and development. 

Katerina Megas, cybersecurity for Internet of Things program manager at NIST, shared at DGI’s 930gov event on Thursday that, following the agency’s first Cyber AI workshop on July 31, the profile will look to include information on how it relates to other frameworks and will evaluate all aspects of AI systems. 

“The theme we heard is it is an absolute must that organizations look at adopting AI for cyber defense, because we do know that it is being used more and more for kind of offensive techniques,” said Megas. 

Some of the feedback the agency received included requests for tools to operationalize and automate aspects of the framework, as well as for guidance tailored to organizational risk.  

Evaluating the different components of AI systems to secure them properly is also a proposed focus of the framework, Megas said. Workshop participants advised that “when you’re going to be covering securing AI systems, make sure you take a broad look of what an AI system consists of,” she explained, adding that NIST will “still have some conversations to have around that.” 

“I’m hearing a lot about kind of the AI stack, and a lot of discussion on, you know, what is the AI stack,” said Megas, adding that other considerations have included looking at data, models, and the application stack. 

Other areas of discussion included looking at proactive steps in defending AI systems, handling incidents, continuous monitoring, and red teaming. 

Next steps in the community discussions will include a focus on supply chain risks to AI, which Megas said public feedback noted was “considerable.” Discussions will also cover whether something needs to be done differently or whether practices already in effect can be adapted, she explained, adding “we just have to make sure AI is kind of rolled into the thinking.” 

Transparency, workforce education, and data are also part of future considerations, as Megas said “data is the crown jewel, and so all organizations really … need to not only be thinking about the data that goes into the system, you need to be thinking about the data and securing the data that’s coming out of the system.” 

The Cyber AI profile, first announced earlier this year, aims less to create new guidance for cybersecurity professionals than to fill taxonomy gaps between the AI and cybersecurity communities. 

“It’s a framework which is intended to kind of organize a lot of existing information and a lot of existing thinking, not necessarily be creating guidelines from scratch,” said Megas.  

Weslan Hansen
Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.