A request for information (RFI) issued by the National Institute of Standards and Technology (NIST), which is developing a framework to improve the management of risks to individuals, organizations, and society associated with AI, has received feedback from stakeholders to assist in the framework's development.
“The NIST Artificial Intelligence Risk Management Framework (AI RMF or Framework) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems,” NIST wrote in the RFI. “The Framework will be developed through a consensus-driven, open, and collaborative process that will include public workshops and other opportunities for stakeholders to provide input.”
Those providing comments include various organizations and agencies that specialize in AI or use it in their operations.
NASA provided two comments on the RFI, including a recommendation that customers be able to understand how an AI/machine learning provider is using their data to improve its services.
“Recommend that the NIST organization request that all cloud providers have a default setting of ‘opt-out’ to prevent inadvertent data spillage,” NASA wrote. “Even if there are controls to anonymize or protect customer data, it comes across as ‘irritating’ or a violation of trust to learn of this practice after use of a service.”
Additionally, NASA said that some AI/ML activities require “ground-truth labeling” for model training, and that payment for human intelligence tasks or their equivalent should be fair and reasonable.
Deloitte & Touche LLP, a multinational professional services firm, applauded NIST’s efforts to develop an AI framework. Deloitte expects NIST to help establish “guardrails” for AI and believes that an AI RMF is the correct approach.
“We encourage NIST to leverage principles already incorporated into other frameworks such as the NIST Cybersecurity Framework (CSF) and the NIST Privacy Risk Management Framework, as well as the five principles embodied in the Committee of Sponsoring Organizations (COSO) Framework: governance; strategy; performance; review & revision; and information, communication & reporting,” Deloitte wrote.
Booz Allen Hamilton responded by presenting its own RMF, developed across its “extensive AI portfolio,” and described four relevant pillars: Responsible AI, AI Readiness, Management of AI Risk, and Operationalizing AI.
“Although any new technology could be used improperly if its use is not guided by values, there are unique aspects of AI systems that complicate risk assessment, mitigation, and management,” Booz Allen wrote. “We believe that our AI governance process and Reference Architecture offer something new and valuable for NIST: a practical and transparent way to ensure that AI systems are safe, ethical, and robust for deployment.”