USA: ITI releases recommendations on NIST's Approaches to AI Risk Management
The Information Technology Industry Council ('ITI') issued, on 13 September 2021, recommendations to the National Institute of Standards and Technology ('NIST') on its draft Artificial Intelligence ('AI') Risk Management Framework, and on Draft NIST Special Publication 1270, A Proposal for Identifying and Managing Bias in Artificial Intelligence.
In particular, in its response to the AI Risk Management Framework, ITI recommends:
- considering what 'risk' means in the context of AI;
- conducting a more granular mapping exercise of the standards landscape;
- taking an outcomes-based approach to protect against the risks of AI while facilitating innovation;
- helping stakeholders determine how to navigate tensions that may arise in developing and using AI; and
- ensuring that the framework accounts for the deployment context of an AI system, the training data and optimisation function of an AI system, and the goal of the product.
In addition, in its comments on NIST's Proposal for Identifying and Managing Bias in AI, ITI recommends, among other things:
- indicating the preliminary nature of the document, so policymakers do not view it as a definitive guide to approaching and managing bias;
- clarifying references to unintentional or other types of bias, and further clarifying the definition of bias;
- providing more specific technical guidance on how to address bias in particular instances;
- including information as to how this proposal will interact with the NIST AI Risk Management Framework;
- referencing and integrating ongoing standards efforts; and
- articulating a clear plan for how work on measuring and mitigating AI bias will translate into adoption across federal agencies.