EU: Implementation of ISO/IEC 42001:2023 standards as AI Act compliance facilitator

Trustworthy artificial intelligence (AI) has become a crucial topic, and the recently published EU Artificial Intelligence Act (the AI Act) represents a significant legislative development. This landmark regulation will reshape AI deployment across sectors, requiring organizations to comply within two years of its entry into force in August 2024 (36 months for certain types of high-risk AI systems). In this Insight article, Sean Musch and Michael Borrelli, from AI & Partners, and Victoria Hordern, from Taylor Wessing, briefly examine how implementing the ISO/IEC 42001:2023 standard can facilitate compliance with the AI Act. They also analyze a research report from DIGITALEUROPE highlighting key aspects of ISO/IEC 42001:2023 that align with the AI Act.

Introduction

Trustworthy AI is critical given incidents involving deepfakes, misinformation, and other harms. The AI Act, on which political agreement was reached in December 2023 and which has now been published in the EU's Official Journal, aims to ensure safe and secure AI throughout the EU while protecting individuals' fundamental rights. The ISO/IEC 42001:2023 standard specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS) to ensure responsible AI development and use. Organizations considering how to develop AI governance and comply with the AI Act will find ISO/IEC 42001:2023 a useful framework to build on.

The AI Act

The AI Act aims to balance AI uptake with citizen protection. It introduces rules to ensure AI systems are safe, respect fundamental rights, provide legal certainty, and prevent market fragmentation. The AI Act applies a risk-based approach, setting minimum requirements to address AI risks without hindering technological development. Key points include:

  • ensuring AI systems are safe and respect fundamental rights;

  • providing legal certainty for AI investment and innovation;
  • enhancing governance and enforcement of safety requirements; and
  • facilitating a single market for trustworthy AI applications.

The AI Act's global reach means it affects all businesses placing AI systems on the EU market, especially those whose systems are classified as high-risk. For the most serious infringements, fines of up to €35 million or 7% of global annual turnover, whichever is higher, can be imposed. The AI Act prohibits unacceptable AI practices, sets rules for high-risk AI systems, and requires transparency for specific AI uses, such as chatbots and deepfakes.
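
To put these figures in concrete terms, the short Python sketch below is purely illustrative: the function name is ours, and it encodes the "whichever is higher" reading of the headline amounts above; the actual level of any fine is determined by regulators case by case.

    # Illustrative only: upper bound of an AI Act fine for the most serious
    # infringements, using the headline figures cited above.
    def max_fine_eur(global_annual_turnover_eur: float) -> float:
        """EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # Example: a firm with EUR 1 billion in global annual turnover
    print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # prints: EUR 70,000,000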

ISO/IEC 42001:2023

AI risks challenge organizations' operational and strategic goals. ISO/IEC 42001:2023 provides a framework for managing these risks, setting out specifications for an AIMS. It emphasizes people, processes, and technology as the means of defending against internal and external threats. The standard outlines 10 control objectives and 38 controls, covering areas including fairness, security, safety, privacy, robustness, transparency, accountability, availability, maintainability, quality of training data, and AI expertise.
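
As a purely illustrative sketch, an organization might track its coverage of these areas in a simple internal checklist. The structure below is our own: the theme names mirror the list above, and the example status and evidence entries are hypothetical, not drawn from the standard itself.

    # Minimal, illustrative checklist for ISO/IEC 42001:2023 control themes.
    from dataclasses import dataclass, field

    @dataclass
    class ControlTheme:
        name: str
        implemented: bool = False
        evidence: list[str] = field(default_factory=list)  # e.g., audit records

    themes = [
        ControlTheme(name) for name in (
            "fairness", "security", "safety", "privacy", "robustness",
            "transparency", "accountability", "availability",
            "maintainability", "quality of training data", "AI expertise",
        )
    ]

    themes[1].implemented = True                             # security: in place
    themes[1].evidence.append("pen-test report, June 2024")  # hypothetical record

    open_items = [t.name for t in themes if not t.implemented]
    print(open_items)  # themes still lacking implementation evidence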

Discussion

ISO/IEC 42001:2023 and the AI Act share many similarities, making the standard a valuable compliance tool. Key areas where ISO/IEC 42001:2023 supports the AI Act's requirements include the following (a brief illustrative sketch follows the list):

  • Assurance: Certification schemes such as ISO/IEC 42001:2023 provide assurance that AI risks are managed effectively.
  • AI system framework: The standard recommends controls that align with the AI Act's requirements for AI systems.
  • People, processes, and technology: ISO/IEC 42001:2023 covers the essential aspects of AI, protecting against a range of risks.
  • Accountability: The standard requires leadership support and the appointment of a senior individual responsible for AI risk management, similar to the AI Act's mandates.
  • Risk assessments: Both the standard and the AI Act require regular risk assessments.
  • Continual improvement: ISO/IEC 42001:2023 mandates continual monitoring and updating of the AIMS.
  • Testing and audits: Regular testing and audits are necessary for both compliance and certification.
  • Certification: Accredited certification to ISO/IEC 42001:2023 provides an independent assessment of AI risk management measures.
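
To make the mapping usable in practice, the brief sketch below shows one way to run a simple gap analysis over these alignment areas. It is illustrative only: the area names are taken from the list above, and the readiness flags are hypothetical rather than derived from the texts of the standard or the AI Act.

    # Illustrative gap analysis over the alignment areas listed above.
    ALIGNMENT_AREAS = [
        "assurance",
        "AI system framework",
        "people, processes, and technology",
        "accountability",
        "risk assessments",
        "continual improvement",
        "testing and audits",
        "certification",
    ]

    # Readiness flags an organization might maintain from internal audits.
    readiness = {area: False for area in ALIGNMENT_AREAS}
    readiness["risk assessments"] = True  # e.g., evidenced by audit records

    def gaps(status: dict[str, bool]) -> list[str]:
        """Return areas not yet evidenced in the organization's AIMS."""
        return [area for area, done in status.items() if not done]

    print(gaps(readiness))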

The alignment between ISO/IEC 42001:2023 and the AI Act presents a strategic advantage for organizations navigating the complex regulatory landscape surrounding AI. Adherence to ISO/IEC 42001:2023 not only helps organizations comply with the rigorous requirements set out in the AI Act but also enhances their overall AI governance framework. The standard's emphasis on assurance through certification ensures that AI risks are systematically managed, giving stakeholders confidence in the robustness of the AI systems deployed. This alignment facilitates a streamlined approach to regulatory adherence, reducing the burden of developing disparate compliance strategies for each regime.

Moreover, the comprehensive scope of ISO/IEC 42001:2023, covering people, processes, and technology, mirrors the holistic approach mandated by the AI Act and provides a sound framework on which to build wider compliance. This congruence simplifies the integration of risk management practices, from appointing a senior individual responsible for AI oversight to conducting regular risk assessments and audits. Continual improvement, a cornerstone of ISO/IEC 42001:2023, ensures that organizations can adapt to evolving risks and technological advancements. This proactive stance not only satisfies regulatory requirements but also positions organizations at the forefront of AI innovation, fostering a culture of accountability and continuous enhancement that is essential in a rapidly evolving AI landscape.

Results

Analysis of DIGITALEUROPE's report highlights key points regarding ISO/IEC 42001:2023 and compliance with the AI Act:

  • Standards awareness: Many organizations lack awareness of specific AI standards and centralized guidelines.
  • Challenges for smaller companies: Smaller firms face resource constraints, making it difficult to establish standards or obtain certifications.
  • Diversity in guidelines: There is a wide range of guidelines and practices for AI development, requiring extensive research for standard implementation.
  • Specific standards identification: Some participants mentioned specific standards such as the Assessment List for Trustworthy Artificial Intelligence (ALTAI), the High-Level Expert Group on Artificial Intelligence (AI HLEG) ethics guidelines, and ISO standards related to information security.
  • Harmonized standards awareness: Most participants were not familiar with European harmonized standards for AI, indicating a need for specialized knowledge.

The insights from DIGITALEUROPE's report underscore significant gaps in awareness and resources among organizations striving to comply with ISO/IEC 42001:2023 and the AI Act. The pervasive lack of awareness about specific AI standards and centralized guidelines is particularly concerning. It suggests that many companies might be navigating AI development and compliance in an ad hoc manner, potentially leading to inconsistent practices and heightened risks of non-compliance. The diverse landscape of guidelines and practices exacerbates this issue, as organizations must invest substantial time and effort into researching and implementing appropriate standards. This complexity can slow down innovation and increase operational costs, creating barriers to efficient AI integration.

For smaller firms, the challenges are even more pronounced. Limited resources often translate to difficulties in establishing robust AI standards or obtaining necessary certifications. This resource constraint could hinder their competitiveness and limit their ability to adopt cutting-edge AI technologies. Furthermore, the unfamiliarity with European harmonized standards among most participants indicates a critical gap in specialized knowledge essential for compliance. As AI becomes increasingly integral to business operations, the need for harmonized standards and specialized expertise becomes paramount. Addressing these challenges requires targeted educational initiatives and resource allocation to ensure all organizations, regardless of size, can meet regulatory requirements and harness the full potential of AI technologies.

Conclusion

The AI Act should be viewed as a strategic advantage, emphasizing trust between organizations and stakeholders. Certifications such as ISO/IEC 42001:2023 demonstrate active AI risk management in line with international best practices. Organizations with ISO/IEC 42001:2023 in place are well positioned to build on it to meet the AI Act's requirements, as the AI Act introduces controls that align with the standard. A key question for enterprises going forward will be the extent to which ISO/IEC 42001:2023 certification aids EU AI Act compliance (or, as the case may be, does not).

Sean Musch Co-CEO/CFO
[email protected]
Michael Charles Borrelli Co-CEO/COO
[email protected]
AI & Partners, Amsterdam

Victoria Hordern Partner
[email protected]  
Taylor Wessing LLP, London