USA: NIST publishes AI Risk Management Framework
The National Institute of Standards and Technology ('NIST') released, on 26 January 2023, its Artificial Intelligence Risk Management Framework (AI RMF 1.0) ('AI RMF'), a guidance document for voluntary use by organisations designing, developing, deploying, or using artificial intelligence ('AI') systems, to help them manage the risks of AI technologies. In particular, NIST outlined that the AI RMF is intended to adapt to the AI landscape as technologies continue to develop, and to be used by organisations so that society can benefit from AI technologies while also being protected from their potential harms.
Notably, NIST illustrated that the AI RMF provides a flexible, structured, and measurable process that aims to enable organisations to address AI risks, and is divided into two parts:
- the first part discusses how organisations can frame the risks related to AI, and outlines the characteristics of trustworthy AI systems; and
- the second part describes four specific functions (i.e. govern, map, measure, and manage) to help organisations address the risks of AI systems in practice.
Importantly, NIST noted that, on the same date, it released a companion voluntary AI RMF Playbook, which provides suggested ways to navigate and use the AI RMF. In this regard, NIST concluded by mentioning that it plans to update the AI RMF periodically, and that it welcomes suggestions for additions and improvements to the AI RMF Playbook at any time.
Lastly, NIST stated that comments received at [email protected] by the end of February 2023 will be reflected in an updated version of the AI RMF Playbook to be released in spring 2023.
You can read the press release here, the AI RMF here, and the AI RMF Playbook here.