The National Institute of Standards and Technology (NIST) released a second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) on Aug. 18, including further guidance on developing trustworthy and responsible AI systems.
In developing the draft, NIST consulted with AI experts, held workshops, and solicited comments before clarifying the core outcomes for the AI RMF. The second draft of the AI RMF builds on discussions at the second AI RMF Workshop and feedback received on the initial draft released in March 2022.
Part 1 of the AI RMF draft explains the motivation for developing and using the framework, its audience, and the framing of AI risk and trustworthiness.
Part 2 includes the AI RMF core outcomes and a description of systems and their use.
“The AI RMF is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems,” NIST stated.
But because AI research and development is evolving rapidly, the AI RMF and its companion documents will evolve to reflect new knowledge, awareness, and practices.
“NIST intends to continue its engagement with stakeholders to keep the framework up to date with AI trends and reflect experience based on the use of the AI RMF. Ultimately, the AI RMF will be offered in multiple formats, including online versions, to provide maximum flexibility,” the agency noted in the draft.
In addition to the AI RMF, NIST released a draft of the AI RMF Playbook, an online resource providing recommended actions on how to implement the framework.
Any comments on the draft AI RMF and initial comments on the draft Playbook should be sent via email to AIframework@nist.gov by Sept. 29, 2022. NIST will also accept feedback during a third workshop on Oct. 18 and 19, 2022.
NIST plans to publish AI RMF 1.0 in January 2023.