USA: NIST seeks comment on publication for developing trustworthy AI
The National Institute of Standards and Technology ('NIST') released, on 22 June 2021, Draft NIST Special Publication 1270, 'A Proposal for Identifying and Managing Bias in Artificial Intelligence ('AI')', which forms part of NIST's effort to support the development of trustworthy and responsible AI. In particular, the draft notes that NIST has identified the following technical characteristics as necessary to cultivate trust in AI systems: accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security, and the mitigation of harmful biases. With specific reference to bias, the draft highlights that it focuses on biases that can lead to harmful societal outcomes. Furthermore, the draft identifies common challenges for AI stakeholders across the AI lifecycle, including:
- problem formulation and decision making;
- assumptions on operational settings;
- overselling tool capabilities and performance;
- optimisation over context; and
- intended context vs. actual context.
To address these challenges, the draft recommends increased use of deployment monitoring and auditing, the use of standards and guides for the evaluation of bias, and bias reduction techniques. In addition, the draft outlines that NIST plans to develop a framework for trustworthy and responsible AI with the participation of a broad set of stakeholders, in order to ensure that standards and practices reflect viewpoints not traditionally included in AI development.
Comments may be submitted until 5 August 2021.