
The National Institute of Standards and Technology (NIST) plans to create five artificial intelligence use cases for security control overlays that will address risks associated with the use and development of AI systems.
According to a concept paper released by the agency on Aug. 14, the use cases will address generative AI, predictive AI, single-agent AI, multi-agent AI, and controls for AI developers.
“The advances and potential use cases for adopting artificial intelligence … technologies brings both new opportunities and new cybersecurity risks,” said NIST. “While modern AI systems are predominantly software, they introduce different security challenges and risks than traditional software. The security of AI systems is closely intertwined with the security of the IT infrastructure on which they run and operate.”
The series of use cases is meant to complement the agency’s Cybersecurity Framework Profile for AI, which it has been developing in coordination with leading experts and through community sessions.
Katerina Megas, cybersecurity for Internet of Things program manager at NIST, said earlier this month that the overlay use cases had been requested during workshops for the AI profile.
“One of the things we heard at the workshop was, ‘yes, the cybersecurity framework does absolutely fill a certain role, but we also would like something when we’re talking about implementation guidance, looking at security controls,’” said Megas.
Specifically, the control overlays aim to create “a common technical foundation for identifying cybersecurity outcomes,” while allowing for “customization and the prioritization of the most critical controls,” NIST’s concept paper says.
“The overlays will focus on protecting the confidentiality, integrity, and availability of information for each use case,” said NIST.
When used alongside existing cybersecurity programs, the overlays will target specific threats tied to different AI use cases, such as protecting models, related assets, and output security, the agency explained. NIST added that the overlays are not full security frameworks; they assume baseline safeguards such as access control and incident response are already in place.
The initial release will cover five common AI usage scenarios for developers and organizations.
For those interested in learning more and discussing development of the overlays, NIST said it is launching a NIST Overlays for Securing AI Slack channel, which will serve as “a hub for cybersecurity and AI communities to discuss the development of these overlays.”