
USA: NIST's Four Principles of Explainable Artificial Intelligence

The National Institute of Standards and Technology ('NIST') released, on 17 August 2020, the first draft of Four Principles of Explainable Artificial Intelligence ('the Draft Paper'). The Draft Paper was developed by a multidisciplinary team of computer scientists, cognitive scientists, mathematicians, and specialists in AI, and provides a detailed analysis of theoretical and practical approaches to explainable AI systems with an emphasis on human-machine interactions. This Insight provides an overview of the Draft Paper and its key points of discussion.


Structure of the Draft Paper

The Draft Paper has been released for public consultation as part of NIST's foundational AI research series and takes the form of an academic study into the explainability of AI processes and decision-making. It begins with an introduction to four essential principles of explainable AI:

  • Explanation
  • Meaningful
  • Explanation Accuracy
  • Knowledge Limits

The Draft Paper then emphasises that explanation strategies will vary based on contexts and audiences before exploring several types of explanations. In particular, the Draft Paper briefly considers the discussion of AI explainability in relevant academic literature and details a series of models for explainable AI. These types of explanations broadly fall into the following categories:

  • User benefit: Aimed at informing a user of an output.
  • Societal acceptance: Emphasises trust and broad social acceptance.
  • Regulatory and compliance: Designed for audits related to laws, regulations, safety standards, frameworks, or similar.
  • System development: For the purposes of developing, improving, or maintaining AI algorithms or systems.
  • Owner benefit: An explanation that is intended to benefit the operator of a system (i.e. an explanation that encourages a user to continue using the system).

The Draft Paper goes on to revisit the four principles and considers how they apply in the context of humans explaining their own decisions. The Draft Paper suggests that such a comparison may provide a standard against which to benchmark AI explainability.

Finally, the Draft Paper provides a general conclusion and an overview of continuing discussions. This conclusion primarily emphasises the importance of human-machine collaboration, and that the four principles may be used to establish a framework to guide real-world applications of AI.

NIST's four principles

Explanation: 'Systems deliver accompanying evidence or reason(s) for all outputs.'

The Explanation Principle obligates an AI system to provide an explanation in the form of 'evidence, support, or reasoning for each output.' In and of itself, this principle does not establish any further requirements for the quality of the explanation. The Explanation Principle simply requires that an AI system is capable of providing an explanation; the standards for such explanations are regulated by the other principles.
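By way of illustration only (none of the following is taken from the Draft Paper; the toy model, feature names, and weights are hypothetical assumptions), the Explanation Principle can be pictured as a system that never returns an output without an accompanying payload of evidence:

    # Minimal sketch of the Explanation Principle, assuming a toy scoring model.
    # Every output is paired with its supporting evidence (here, per-feature
    # contribution scores); nothing here reflects a real NIST specification.
    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class ExplainedOutput:
        label: str                   # the system's output
        evidence: Dict[str, float]   # the accompanying reasons for that output

    def predict_with_explanation(features: Dict[str, float]) -> ExplainedOutput:
        weights = {"income": 0.4, "debt": -0.7, "tenure": 0.2}   # hypothetical model
        contributions = {k: weights.get(k, 0.0) * v for k, v in features.items()}
        label = "approve" if sum(contributions.values()) > 0 else "decline"
        return ExplainedOutput(label=label, evidence=contributions)

    print(predict_with_explanation({"income": 2.0, "debt": 1.5, "tenure": 1.0}))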

Meaningful: 'Systems provide explanations that are understandable to individual users.'

The Meaningful Principle establishes that a recipient of an explanation should be able to understand it. The Draft Paper stresses that this principle is not intended to be one-size-fits-all and that explanations will need to be tailored to audiences, both at a group and an individual level. This principle is at the heart of the challenges that the Draft Paper analyses in regard to the varying types of explanations that may be required to fulfil different needs.
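As a rough sketch of what such tailoring might look like in practice (the audience categories and wording below are hypothetical assumptions, not taken from the Draft Paper), the same underlying evidence could be rendered at different levels of detail for different recipients:

    # Minimal sketch: one set of evidence, rendered differently per audience.
    # The audience labels and phrasing are illustrative assumptions only.
    def render_explanation(evidence: dict, audience: str) -> str:
        ranked = sorted(evidence.items(), key=lambda kv: abs(kv[1]), reverse=True)
        if audience == "end_user":
            top_factor, _ = ranked[0]
            return f"The main factor in this decision was your {top_factor}."
        if audience == "auditor":
            return "; ".join(f"{name}: {value:+.2f}" for name, value in ranked)
        return str(evidence)   # raw evidence as a fallback, e.g. for developers

    evidence = {"income": 0.8, "debt": -1.05, "tenure": 0.2}
    print(render_explanation(evidence, "end_user"))
    print(render_explanation(evidence, "auditor"))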

Explanation Accuracy: 'The explanation correctly reflects the system's process for generating the output.'

The Explanation Accuracy Principle works in conjunction with the Meaningful Principle to regulate the quality of explanations. This principle emphasises the accuracy of explanations, rather than the accuracy of decisions. The Draft Paper notes that the Explanation Accuracy Principle will need to be balanced with the Meaningful Principle, and that in both cases the application of the principles may vary based on user needs. Ultimately, the Draft Paper suggests that metrics for assessing explanation accuracy are still developing.
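Although the Draft Paper does not prescribe a metric, one proxy sometimes used in the explainability literature is fidelity: how often the explanation, taken on its own, reproduces the system's actual output. The sketch below assumes the same toy contribution-style evidence as above and is purely illustrative:

    # Minimal sketch of a fidelity-style proxy for explanation accuracy.
    # It checks how often the explanation's own reasoning (the sign of the summed
    # contributions) matches the system's actual output. Hypothetical data.
    def explanation_fidelity(system_outputs, explanations):
        matches = 0
        for output, evidence in zip(system_outputs, explanations):
            implied = "approve" if sum(evidence.values()) > 0 else "decline"
            matches += (implied == output)
        return matches / len(system_outputs)

    outputs = ["decline", "approve", "decline"]
    explanations = [
        {"income": 0.8, "debt": -1.05},   # implies decline (matches)
        {"income": 1.2, "debt": -0.3},    # implies approve (matches)
        {"income": 0.5, "debt": -0.1},    # implies approve (does not match)
    ]
    print(explanation_fidelity(outputs, explanations))   # 2 / 3 ≈ 0.67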

Knowledge Limits: 'The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.'

The Knowledge Limits Principle requires that a system identifies and declares its knowledge limits (i.e. any cases for which it was not designed or approved to operate). The aim of this principle is to prevent misleading explanations or unjust outputs. The Draft Paper suggests two examples of this principle in practice: where a system explains that it cannot return an answer within its operating parameters, and where a likely answer does not meet a confidence threshold and the system explains how the answer does not meet this threshold.
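Both examples can be pictured in a short sketch (the operating range, confidence threshold, and stand-in prediction below are hypothetical assumptions, not drawn from the Draft Paper):

    # Minimal sketch of the Knowledge Limits Principle, under assumed parameters.
    DESIGNED_INPUT_RANGE = (0.0, 10.0)   # hypothetical operating envelope
    CONFIDENCE_THRESHOLD = 0.8           # hypothetical confidence threshold

    def answer(x: float) -> dict:
        low, high = DESIGNED_INPUT_RANGE
        if not (low <= x <= high):
            # Case 1: the input falls outside the system's operating parameters.
            return {"answer": None,
                    "explanation": f"Input {x} is outside the designed range [{low}, {high}]."}
        prediction, confidence = "bird", 0.55   # stand-ins for a real model's output
        if confidence < CONFIDENCE_THRESHOLD:
            # Case 2: a likely answer exists but does not meet the confidence threshold.
            return {"answer": None,
                    "explanation": (f"Most likely answer '{prediction}' has confidence "
                                    f"{confidence:.2f}, below the {CONFIDENCE_THRESHOLD} threshold.")}
        return {"answer": prediction, "explanation": f"Confidence {confidence:.2f} meets the threshold."}

    print(answer(42.0))
    print(answer(3.0))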

In terms of balancing the above principles, the Draft Paper notes that the time required for responding to the information in an explanation will affect the level of detail to be included in the explanation. The examples provided in the Draft Paper include a tornado warning, which would require immediate action and minimal detail in the explanation, and an audit of a system, which would have a longer time for a response but more detail would be required.

Models for explaining AI algorithms

The Draft Paper analyses both the literature related to AI algorithms and a series of model approaches for explaining them. The Draft Paper suggests that while there are ongoing disagreements within AI explainability discussions, a shared concern is the balance of explanation accuracy and meaningfulness. Having identified this central issue, the Draft Paper notes that, 'Context of the application, community and user requirements, and the specific task will drive the importance of each principle.'

The Draft Paper then focuses on three overlapping types of explanation models: self-explainable, global, and per-decision. As the Draft Paper considers these models it becomes increasingly technical, but the central questions of assessing explanation accuracy, meaningfulness, and knowledge limits remain the same. In particular, the Draft Paper highlights that there has been less direct work on the Knowledge Limits Principle.

In its analysis of the models, the Draft Paper emphasises that there is no singular or overarching strategy for explaining AI algorithms which could universally address the four principles. Instead, the Draft Paper repeatedly states that variations in explanations will be required based on the use case. A key challenge that the Draft Paper thus raises is how the application of the four principles will be measured and regulated in practice.

Humans and AI

Section 6 of the Draft Paper turns attention from computer science to focus more closely on the human role in AI explainability. Here the Draft Paper seeks to provide a benchmark by which AI explainability can be appropriately assessed by considering the explainability of human decision-making within the framework of the four principles.

Explanation: The Draft Paper notes that humans have a variety of explanation strategies, but that the process of explaining may interfere with decision-making. The Draft Paper turns to various studies and suggests from these that human decision-making can oftentimes be more accurate when there is not an explicit requirement to explain and judge the decision-making process.

Meaningful: The Draft Paper considers how humans interpret how other humans arrive at conclusions. In the example taken by the Draft Paper, concerning forensic scientists' explanations and juries' understanding of them, human explainability does not meet the criteria for the Meaningful Principle. The Draft Paper notes that this is a complex area where context creates significant variations and that approaches to improving human explainability have met with mixed success.

Explanation Accuracy: The analogous situation between humans and AI for this principle, as outlined by the Draft Paper, is whether an explanation of a human's decision-making process reflects their mental process. Again, the Draft Paper notes that humans tend not to meet the criteria as set by the Explanation Accuracy Principle, as humans are often unable to introspect accurately.

Knowledge Limits: The Draft Paper considers whether humans can correctly assess their own ability and accuracy, and suggests that while humans can show insights into their own knowledge limits, circumstances can mean such insights are sometimes limited.

The Draft Paper proposes that the above analysis of human explainability outlines how parts of AI systems, as they advance, may be better suited to meet certain societal expectations of explainability. However, the Draft Paper stresses that the study of human and machine explainability processes should ultimately result in incorporating the strengths of each to improve the capabilities of both.

Concluding discussion

The Draft Paper highlights in its concluding section that there is a growing field of studies on human-machine interaction that is placing humans at the heart of AI explainability. In particular, the Draft Paper suggests the four principles may act as a means to encourage this exploration of human-machine interactions.

The Draft Paper reiterates its central point that context will determine the requirements of explainability. However, it also emphasises in its concluding section that the four principles may be able to guide and provide a foundational philosophy for the development of AI explainability towards safer operations and the empowerment of users. The principles are thus presented as both fundamental and flexible.

The Draft Paper is open for public consultation until 15 October 2020. Further information is available here.

How OneTrust DataGuidance helps

OneTrust DataGuidance™ is the industry's most in-depth and up-to-date source of privacy and security research, powered by a contributor network of over 500 lawyers, 40 in-house legal researchers, and 14 full-time in-house translators. OneTrust DataGuidance™ offers solutions for your research, planning, benchmarking, and training.

OneTrust DataGuidance provides daily updates and analysis of relevant global regulatory developments. By leveraging customised email alerts and newsletters, and creating dedicated spaces for projects, jurisdictions and topics, you can stay on top of developments as they happen, including in relation to AI through our dedicated portal.

OneTrust DataGuidance solutions are integrated directly into OneTrust products, enabling organisations to leverage OneTrust to drive compliance with hundreds of global privacy and security laws and frameworks. This approach provides the only solution that gives privacy departments the tools they need to efficiently monitor and manage the complex and changing world of privacy management.

Angus Young Privacy Operations
[email protected]