The National Institute of Standards and Technology (NIST) is proposing four principles to determine the degree to which decisions made by AI are “explainable,” and hopes the effort will help jump-start debate on what should be expected of decision-making technologies.

Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312) is a part of NIST’s foundational research to build trust in AI systems by understanding theoretical capabilities and limitations of AI, and by improving accuracy, reliability, security, robustness, and explainability in the use of the technology.

“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said NIST electronic engineer Jonathon Phillips, one of the report’s authors. “But an explanation that would satisfy an engineer might not work for someone with a different background. So, we want to refine the draft with a diversity of perspectives and opinions.”

The four principles for explainable AI are:

  1. AI systems should deliver accompanying evidence or reasons for their outputs;
  2. AI systems should provide meaningful and understandable explanations to individual users;
  3. Explanations should correctly reflect the AI system’s process for generating the output; and
  4. The AI system “only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.”

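To make the principles concrete, the toy sketch below shows one way a system might pair every output with supporting evidence (principle 1) and decline to answer when its confidence falls short (principle 4). It is purely illustrative and not drawn from the NIST draft; the feature names, weights, and confidence threshold are all assumptions.

```python
# Hypothetical sketch (not from the NIST draft): a toy screening model that
# attaches evidence to every output and abstains when confidence is too low.
# All names, weights, and thresholds here are illustrative assumptions.

import math
from dataclasses import dataclass

@dataclass
class Decision:
    label: str          # "approve", "deny", or "abstain"
    confidence: float   # model's probability for the chosen label
    evidence: dict      # per-feature contribution, the "reasons" behind the output

WEIGHTS = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}  # illustrative only
BIAS = -0.3
CONFIDENCE_FLOOR = 0.75  # below this, the system declines to decide

def explainable_decision(features: dict) -> Decision:
    # Linear score built from per-feature contributions, so each output
    # can carry the evidence that produced it (principle 1).
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    score = BIAS + sum(contributions.values())
    prob_approve = 1.0 / (1.0 + math.exp(-score))

    label = "approve" if prob_approve >= 0.5 else "deny"
    confidence = prob_approve if label == "approve" else 1.0 - prob_approve

    # Knowledge limits (principle 4): only answer when confidence is sufficient.
    if confidence < CONFIDENCE_FLOOR:
        return Decision("abstain", confidence, contributions)
    return Decision(label, confidence, contributions)

print(explainable_decision({"income": 1.2, "debt": 0.4, "years_employed": 2.0}))
```

Whether an explanation of this form is “meaningful” to a given user (principle 2), and whether it faithfully reflects the system’s actual process (principle 3), are exactly the questions the draft asks commenters to weigh in on.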
“As we make advances in explainable AI, we may find that certain parts of AI systems are better able to meet societal expectations and goals than humans are,” said Phillips. “Understanding the explainability of both the AI system and the human opens the door to pursue implementations that incorporate the strengths of each.”

NIST will be accepting comments until October 15, 2020.
