
EU: Navigating the AI Act - a comparative analysis: EU AI Act vs NIST's AI RMF

The first part of this series on the EU's Artificial Intelligence Act (AI Act) explored the covered types of artificial intelligence (AI) and the corresponding obligations of each AI actor. The second part of this series provided a brief explanation of the importance for providers to comprehend and adhere to their obligations. In the third installment, we examined these provider obligations within the broader context of the evolving AI landscape. In the fourth article of this series, Starr Drum, Shareholder at Polsinelli PC, compares the AI Act with the National Institute of Standards and Technology's (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0) (AI RMF). Through an exploration of their distinct features and complementary aspects, this article provides essential insights for organizations navigating AI governance.

As the adoption of AI accelerates globally, regulatory and voluntary frameworks play a crucial role in ensuring responsible and trustworthy AI deployment. This comparative analysis of the EU's AI Act against the NIST's AI RMF sheds light on the similarities and differences between these two influential guideposts, providing insights for organizations navigating the complex landscape of AI governance.


Introduction

NIST AI Risk Management Framework

The NIST AI RMF is one of the most influential AI governance frameworks. It is an industry-agnostic framework that offers voluntary guidance through which organizations can make informed, internal determinations regarding AI risk identification, management, and governance. Developed by NIST in the US, the framework consists of four core functions through which AI risks are identified and mitigated: mapping, measuring, managing, and governing those risks. Each function is broken down into categories and subcategories that offer more granular actions and outcomes.

The EU's AI Act

The EU AI Act, in contrast, is a legally enforceable regulatory framework that was adopted by the European Parliament on March 13, 2024. Its requirements are detailed and specific, consisting of fully defined AI system classifications, risk tolerances and thresholds, risk management strategies, and role-specific responsibilities that are shared across all participants within the EU's economic area and jurisdiction. Unlike the AI RMF, which cannot be legally enforced outside contractual obligations or voluntary public commitments, the AI Act will become enforceable in stages beginning six months and twenty days after its publication in the Official Journal of the EU.

Complementary frameworks

The AI Act and the AI RMF complement one another. While variations in their specificity, structure, and applicability may make harmonization challenging, their similar definitions of AI systems and risk, shared AI development principles, and common risk management techniques offer practitioners valuable guidance as they develop their AI governance and risk management strategies.

Similar definitions of risk and AI systems

Both the AI Act and AI RMF define 'risk' as the composite measure of an event's probability of occurring and the severity of the resulting harm. They also both offer similar definitions of an AI system:

  • EU AI Act: An AI system is a 'machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.'
  • NIST AI RMF: An AI system is an 'engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.'

Both definitions focus on the ability of an AI system to generate outputs like predictions, recommendations, or decisions that influence real or virtual environments, and both emphasize AI systems' ability to operate with varying levels of autonomy. However, the AI Act specifically addresses the ability of some AI systems to adapt post-deployment and the fact that their objectives can be explicit or implicit, neither of which is explicitly addressed by the AI RMF.

Only the AI Act directly addresses general-purpose AI, defining it as a model that 'displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.'
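
Both frameworks treat risk as a composite of an event's likelihood and the severity of its harm. The short Python sketch below is a minimal illustration of that idea only; the function name, the 0-1 scales, and the simple multiplication are assumptions chosen for clarity, not a formula prescribed by either framework.

    # Illustrative only: risk as a composite of likelihood and severity.
    # The 0-1 scales and the simple product are assumptions, not values or
    # formulas prescribed by the AI Act or the AI RMF.
    def composite_risk(likelihood: float, severity: float) -> float:
        """Return a composite risk score from likelihood and severity (each 0.0-1.0)."""
        if not (0.0 <= likelihood <= 1.0 and 0.0 <= severity <= 1.0):
            raise ValueError("likelihood and severity must be between 0 and 1")
        return likelihood * severity

    # Example: a harm judged moderately likely (0.4) but highly severe (0.9).
    print(composite_risk(0.4, 0.9))  # 0.36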

Similar AI development principles

Both the AI Act and the AI RMF offer similar principles for building trustworthy, ethical AI systems. The seven principles in the EU AI Act are human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; social and environmental well-being; and accountability. The nomenclature for these principles differs slightly in the AI RMF. For example, the AI Act uses the phrase 'technical robustness and safety' to encapsulate practices that minimize vulnerabilities, ensure system reliability, and prevent unintended consequences. NIST similarly advocates for robust AI design and development but, in its corresponding trustworthiness characteristics, describes such systems as safe, secure, valid, reliable, and resilient. Although the terminology differs, the underlying principles remain the same.
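
As an illustration of how the two vocabularies line up, the sketch below records one possible crosswalk between the AI Act's seven principles and the AI RMF's trustworthiness characteristics. The pairings are an interpretive assumption; neither framework publishes an official one-to-one correspondence.

    # An interpretive, many-to-many crosswalk between the AI Act's principles and
    # the AI RMF's trustworthiness characteristics. The pairings are assumptions
    # for illustration, not an official mapping from either framework.
    PRINCIPLE_CROSSWALK = {
        "human agency and oversight": ["accountable and transparent"],
        "technical robustness and safety": ["valid and reliable", "safe", "secure and resilient"],
        "privacy and data governance": ["privacy-enhanced"],
        "transparency": ["accountable and transparent", "explainable and interpretable"],
        "diversity, non-discrimination, and fairness": ["fair, with harmful bias managed"],
        "social and environmental well-being": ["safe"],
        "accountability": ["accountable and transparent"],
    }

    for act_principle, rmf_characteristics in PRINCIPLE_CROSSWALK.items():
        print(f"{act_principle} -> {'; '.join(rmf_characteristics)}")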

Shared risk management techniques

Finally, both the AI Act and the AI RMF rely on similar risk management techniques to accomplish the objectives outlined in their frameworks. Generally, the frameworks utilize three common techniques to manage AI risks: risk assessments, testing, and documentation.

These techniques are horizontal similarities - meaning they are utilized across each framework's risk management categories (e.g., govern, map, measure, and manage for the AI RMF) - and are not intended to represent all the risk management strategies, techniques, or tools embedded within either framework. Execution standards for these techniques (and the subsequent reporting requirements) are more nuanced in the AI Act but still serve the same practical goals as in the AI RMF.

Risk assessments

Before an organization can develop or implement an AI risk management or governance strategy, it needs to identify the risks involved in the AI systems it is interested in developing, deploying, or utilizing. Both the AI Act and the AI RMF incorporate risk assessments in their frameworks. The AI Act incorporates risk assessments to, for example, identify and analyze reasonably foreseeable misuse, detect and mitigate possible biases, and assess the human oversight measures required. The AI RMF similarly incorporates risk assessments to categorize impacts on vulnerable populations, consider the scientific integrity of design decisions within the AI system, and assess performance metrics and residual risk calculations. Both frameworks rely on assessments to identify the risks that require mitigation.
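
As a practical illustration, the sketch below shows one way a practitioner might record the assessment elements both frameworks call for, such as foreseeable misuse, potential bias, affected groups, and oversight measures. The class name, field names, and the hiring-related example are assumptions for illustration, not a structure mandated by either framework.

    # A minimal sketch of a risk assessment record (names and fields are assumptions).
    from dataclasses import dataclass, field

    @dataclass
    class RiskAssessmentEntry:
        system_name: str
        intended_purpose: str
        foreseeable_misuse: list[str]        # misuse analysis encouraged by the AI Act
        potential_biases: list[str]          # bias detection and mitigation inputs
        affected_groups: list[str]           # impacted populations, as mapped under the AI RMF
        human_oversight_measures: list[str]  # planned oversight controls
        residual_risks: list[str] = field(default_factory=list)

    entry = RiskAssessmentEntry(
        system_name="resume-screening-model",
        intended_purpose="rank job applications for recruiter review",
        foreseeable_misuse=["fully automated rejection without human review"],
        potential_biases=["proxy discrimination via postcode or education history"],
        affected_groups=["job applicants"],
        human_oversight_measures=["recruiter reviews every automated ranking"],
    )
    print(entry.system_name, len(entry.foreseeable_misuse))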

Testing

Once an organization has identified its AI risks and aligned on its mitigation strategies, it needs to test the AI system to verify the efficacy of its risk management and governance strategies. The AI Act requires testing throughout the lifecycle of the AI system, such as testing prior to deployment against pre-defined metrics and probabilistic thresholds to confirm the most appropriate risk management measures have been implemented. The AI RMF similarly incorporates testing within its recommendations, such as highlighting the need for AI testing and ongoing monitoring to demonstrate the system is valid, reliable, and free from bias. Both frameworks rely on testing to verify the efficacy of an organization's controls and identify emerging risks and mitigation needs.
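
The sketch below illustrates what testing against pre-defined metrics and thresholds might look like in practice. The metric names and threshold values are assumptions chosen for illustration; neither framework prescribes specific figures.

    # Illustrative pre-deployment check against pre-defined metric thresholds.
    # Metric names and threshold values are assumptions, not prescribed figures.
    PREDEFINED_THRESHOLDS = {
        "accuracy": 0.90,                # minimum acceptable accuracy
        "false_positive_rate": 0.05,     # maximum acceptable false positive rate
        "demographic_parity_gap": 0.02,  # maximum acceptable gap between groups
    }

    def evaluate_release_readiness(measured: dict[str, float]) -> dict[str, bool]:
        """Return pass/fail per metric; only accuracy is 'higher is better' here."""
        results = {}
        for metric, threshold in PREDEFINED_THRESHOLDS.items():
            value = measured[metric]
            results[metric] = value >= threshold if metric == "accuracy" else value <= threshold
        return results

    measured = {"accuracy": 0.93, "false_positive_rate": 0.04, "demographic_parity_gap": 0.03}
    report = evaluate_release_readiness(measured)
    print(report)                # the parity gap fails its threshold in this example
    print(all(report.values()))  # False -> further mitigation needed before deployment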

Documentation

Finally, documentation is required for an organization's risk management and governance programs to operate effectively. Both frameworks rely on documentation to drive compliance with one's AI risk management and governance strategies. The AI RMF recommends documenting the results of most actions throughout the AI lifecycle, including the risks and potential impacts of each system, development strategies to maximize AI benefits and minimize negative impacts and associated risk treatments, intended uses and system limitations, and user instructions and human oversight processes.

The AI Act's documentation requirements vary depending on the risk classification of the AI system involved. Its technical documentation requirements for high-risk systems are robust and similarly span the entire lifecycle of an AI system, from design and development to risk assessments, testing, and post-deployment monitoring. The requirements listed in Annex IV are expansive. A single requirement to provide a detailed description of the AI system's elements and development process, for example, contains eight subcategories covering, among other things, the methods and steps performed to develop the AI system (including whether and how any third-party tools or pre-trained systems were used); key design choices (including system and algorithm logic and parameters, and decisions about any trade-offs regarding the technical solutions adopted); training methodologies, techniques, and data set information; human oversight measures; and the provider's post-market monitoring plan (including its performance metrics).
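
For illustration, the sketch below arranges a documentation record around the kinds of categories described above (development process, design choices, training data, oversight, and post-market monitoring). The structure, field names, and example values are assumptions; Annex IV itself defines the authoritative content for high-risk systems.

    # Illustrative documentation record; structure and field names are assumptions,
    # not the Annex IV text, which governs what high-risk providers must document.
    import json

    technical_documentation = {
        "system_description": "resume-screening-model v1.2",
        "development_process": {
            "methods_and_steps": ["data collection", "fine-tuning", "validation"],
            "third_party_components": ["pre-trained language model (hypothetical example)"],
        },
        "key_design_choices": {
            "algorithm_logic": "gradient-boosted ranking over structured features",
            "trade_offs": "accepted a small accuracy loss for improved explainability",
        },
        "training_data": {
            "sources": ["historical hiring data"],
            "known_limitations": ["regional skew"],
        },
        "human_oversight": ["recruiter review of every ranking", "override and escalation path"],
        "post_market_monitoring": {
            "performance_metrics": ["accuracy", "false_positive_rate"],
            "review_cadence": "quarterly",
        },
    }

    print(json.dumps(technical_documentation, indent=2))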

Key differences

At its core, the foundational difference between the AI Act and the AI RMF is that the former is a legal framework with formal enforcement mechanisms and fines for non-compliance, while the latter consists entirely of voluntary guidance. Other key differences are set forth below:

Risk classifications, tolerances, and thresholds

While the AI RMF acknowledges that there is a spectrum of risk involved in AI systems based on their intended use, it does not specify what those use cases might be or provide corresponding risk thresholds. It limits its guidance to industry-agnostic tools through which organizations can make their own risk identification and mitigation determinations.

The AI Act, in contrast, goes a step further by evaluating this risk within the context of EU laws, rights, and regulations and creating a risk-based system for classifying AI systems and assigning risk tolerances and mitigation requirements accordingly. The AI Act prohibits certain AI systems and use cases that it deems to involve unacceptable levels of risk. Examples of prohibited AI systems include those that materially distort behavior by deploying subliminal or purposefully manipulative techniques or by exploiting vulnerable characteristics; that use biometric categorization systems on natural persons to deduce certain sensitive information; or that calculate social scores based on behavior or personal characteristics in ways that lead to certain unjustified or unrelated treatment.

The AI Act also defines two ways through which AI systems can be classified as high-risk, which results in stringent regulatory requirements. Systems can be classified as high risk under the AI Act when:

  • the AI system is intended to be used as a product, or a safety component of a product, that is already covered by the EU harmonization legislation listed in Annex I and requires a third-party conformity assessment prior to going on the market. Machinery, motor vehicles, and children's toys are among the categories currently listed; or
  • the AI system falls under any of the conditions or contexts listed in Annex III, which are automatically considered high-risk within the EU, absent certain exceptions. Examples of high-risk conditions include biometrics, critical infrastructure management and operation, education and vocational training, employment and workers management, law enforcement, border control management, and administration of justice and democratic processes.
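
The simplified sketch below illustrates the shape of this classification logic. The category sets are abbreviated paraphrases of the prohibited practices and Annex I/III areas described above, and the decision function ignores the exceptions and detailed legal criteria that apply in practice.

    # A simplified, illustrative sketch of the AI Act's risk-based classification.
    # The sets are abbreviated paraphrases, not statutory text, and the function
    # ignores the exceptions and detailed criteria that apply in practice.
    PROHIBITED_PRACTICES = {
        "subliminal manipulation", "exploitation of vulnerabilities",
        "sensitive-trait biometric categorization", "social scoring",
    }
    ANNEX_I_PRODUCT_AREAS = {"machinery", "motor vehicles", "toys"}
    ANNEX_III_HIGH_RISK_AREAS = {
        "biometrics", "critical infrastructure", "education", "employment",
        "law enforcement", "border control", "administration of justice",
    }

    def classify(use_case: str, product_area: str = "") -> str:
        if use_case in PROHIBITED_PRACTICES:
            return "prohibited"
        if product_area in ANNEX_I_PRODUCT_AREAS or use_case in ANNEX_III_HIGH_RISK_AREAS:
            return "high-risk"
        return "other (limited or minimal risk)"

    print(classify("social scoring"))  # prohibited
    print(classify("employment"))      # high-risk
    print(classify("spam filtering"))  # other (limited or minimal risk)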

High-risk AI systems are permitted under the AI Act when accompanied by mandated risk mitigations, including implementing a risk management and quality management system, performing a conformity assessment (demonstrating compliance with all the requirements within the Act), and satisfying the Act's requirements around data and data governance, technical documentation, recordkeeping and logging, transparency, human oversight, accuracy, robustness, and cybersecurity.

The AI RMF, in contrast, as a voluntary framework, does not ban any AI systems or uses, classify any specific instances as high risk, or tie risk-based classifications to particular risk management strategies, techniques, or requirements.

Roles and responsibilities

Whereas the AI RMF groups all stakeholders under the broad term 'AI Actors,' the AI Act identifies three key sets of stakeholders: AI providers, AI deployers, and AI distributors/importers. The AI Act enables the shared responsibility model embedded within its legal framework by specifying differing obligations for each stakeholder type.

The role-based responsibilities outlined in the AI Act are detailed, nuanced, and often depend on the circumstances and the lifecycle or supply chain step involved. For example, the developers of an AI system or model, AI providers, are responsible for completing a conformity assessment demonstrating compliance with all AI Act requirements, whereas downstream distributors and importers are only required to verify that the conformity assessment has been completed and that the associated evidence and documentation accompany the system.

Third-party diligence

The AI RMF offers more granular insights and guidance around third-party due diligence within the AI context than the AI Act does. While the AI Act acknowledges that an AI system or model may utilize third-party systems, data, or other components and that these third parties should be subject to a certain degree of due diligence to ensure the integrated system is compliant, it does not describe this process in detail. It instead authorizes the AI Office to 'develop and recommend voluntary model contractual terms between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used or integrated in high-risk AI systems.' In turn, the AI RMF outlines specific due diligence and risk management actions practitioners should consider across the govern, map, measure, and manage functions when engaging third parties.

Conclusion

Organizations should leverage both the EU AI Act and the NIST AI RMF to develop their AI governance strategies. Their similar definitions of risk, shared AI development principles, and common risk management techniques mean there is significant overlap between the frameworks. While the AI Act is legally enforceable and contains some key differences, the AI RMF can offer practitioners valuable insights as they design and operationalize AI governance and compliance strategies within their organizations.

Starr Drum Shareholder
[email protected]
Polsinelli PC, Los Angeles
