
EU: Navigating the AI Act - understanding types of AI and obligations for AI actors

The EU's Artificial Intelligence Act (the AI Act) represents a pivotal development in the regulatory landscape, ushering in a new era for businesses engaged in artificial intelligence (AI) development. This comprehensive legislation aims to address the challenges and opportunities presented by AI technologies, establishing a framework that balances innovation with ethical considerations. In part one of this series on the AI Act, Sean Musch and Michael Charles Borrelli, from AI & Partners, and Charles Kerrigan, from CMS UK, look at which types of AI are covered and which obligations apply to each AI actor. Parts two and three of this series examine the obligations of providers under the AI Act and the importance of understanding them.

This article is accurate as of its time of publication and will be updated to reflect any changes to the AI Act.


Introduction

Significance of the legislation

The significance of the AI Act extends far beyond its regulatory scope. For businesses involved in AI development, deployment, and/or use, adherence to these guidelines is paramount for ensuring compliance and ethical AI practices. The legislation not only outlines clear expectations but also emphasizes the responsible use of AI, aligning business objectives with societal values, such as those enshrined in the EU Charter of Fundamental Rights.

In navigating this regulatory landscape, businesses gain a competitive edge by demonstrating their commitment to 'ethical' or 'trustworthy' AI. Specifically, the High-Level Expert Group on AI characterizes trustworthy AI as:

  • lawful: respecting all applicable laws and regulations;
  • ethical: respecting ethical principles and values; and
  • robust: both from a technical and a social perspective.

This commitment not only fosters trust with customers and stakeholders but also mitigates potential legal risks associated with non-compliance. As the EU prioritizes transparency, accountability, and human-centric AI, businesses are compelled to integrate these principles into their AI strategies, fostering a culture of responsible innovation.

In essence, the AI Act not only establishes a regulatory framework but also serves as a catalyst for businesses to prioritize ethical considerations in AI development. As the AI landscape evolves, the legislation positions businesses to be at the forefront of innovation while ensuring responsible and accountable practices. Understanding the significance of the AI Act is not merely a compliance requirement; it is a strategic imperative for businesses aiming to thrive in an AI-driven future.

An ecosystem of different AI actors

Defining AI actors' roles

Under the AI Act, various entities play pivotal roles, each carrying distinct responsibilities. Understanding these roles is fundamental to navigating the regulatory landscape set forth by this legislation, especially as they shoulder different regulatory burdens given their position in the AI value chain.

  • Provider: A natural or legal person, public authority, agency, or other body that develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark.

  • Deployer: A natural or legal person, public authority, agency, or other body using an AI system under its authority.
  • Authorized representative: Any natural or legal person located or established in the EU who has received and accepted a mandate from a provider to carry out the provider's obligations on its behalf.
  • Importer: Any natural or legal person within the EU that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the EU.
  • Distributor: Any natural or legal person in the supply chain, not being the provider or importer, who makes an AI system available in the EU market.
  • Product manufacturer: A manufacturer that places on the market or puts into service an AI system together with its product and under its own name or trademark.
  • Operator: A general term referring to all the terms above (provider, deployer, authorized representative, importer, distributor, or product manufacturer).

Guidance on assessing AI actors

Navigating the intricacies of AI actor assessments under the AI Act demands clear guidance to ensure uniform understanding and implementation across diverse scenarios.

Clear definitions and criteria

The AI Act provides explicit definitions and criteria for assessing AI actors. Manufacturers, importers, distributors, and users are guided by specific parameters, facilitating a standardized approach. This clarity streamlines compliance efforts and fosters a shared understanding of expectations. Businesses benefit from regulatory clarity with respect to the AI value chain.

Addressing grey areas, especially regarding foundational models

Grey areas often emerge in the assessment of foundational models, requiring nuanced consideration. The legislation acknowledges these complexities and encourages a dynamic approach to address evolving challenges in AI technology. For example, the EU has, for now, agreed that general-purpose AI models trained using a total computing power of more than 10^25 floating-point operations (FLOPs) are considered to carry systemic risks, given that models trained with greater computing power tend to be more capable. This adaptability ensures that the regulatory framework remains effective amidst rapid advancements.
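As a rough illustration of how this compute threshold operates, the minimal sketch below estimates training compute and checks it against the 10^25 FLOPs line. The 6 × parameters × tokens estimate is a widely used heuristic for training compute, and the function names are illustrative assumptions; neither is part of the AI Act.

```python
# A minimal sketch of the systemic-risk compute threshold. The
# 6 * parameters * tokens estimate is a common heuristic for training
# compute; it is an assumption here, not part of the AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold agreed for general-purpose AI models

def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute with the ~6 * N * D heuristic."""
    return 6 * num_parameters * num_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """Models at or above the threshold are presumed to carry systemic risk."""
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)  # ~8.4e23 FLOPs, below the threshold
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(flops)}")
```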

Grey areas in AI actor assessment

While the AI Act provides a robust framework, challenges persist in certain areas, necessitating careful consideration and ongoing dialogue.

Challenges in defining and categorizing foundational models

Foundational models, being at the core of many AI systems, pose challenges in precise definition and categorization. Policymakers behind the EU AI Act acknowledge this complexity and have called for industry collaboration to refine definitions, ensuring a common understanding that aligns with technological advancements.

Unraveling legal implications and ambiguities

Grey areas in AI actor assessments can lead to legal implications and ambiguities. To address this, the AI Act has, since its inception and through ongoing dialogue, encouraged proactive engagement with legal experts, fostering a cooperative approach to interpreting and applying the legislation. Clarity in legal frameworks is crucial for ensuring fair and consistent enforcement.

Industry perspectives on navigating grey areas

Industry perspectives provide valuable insights into navigating grey areas. Collaborative efforts between regulators and industry stakeholders are vital for understanding practical challenges and devising effective solutions. This dialogue fosters a symbiotic relationship, ensuring that the regulatory framework evolves in tandem with industry advancements. For example, the European Digital SME Alliance has engaged proactively with the development of the AI Act since 2021.

Categories of AI systems

Overview of AI system categories

Under the AI Act, AI systems are classified into distinct categories based on their associated risks, especially with regard to the potential harm caused to individuals' fundamental rights, democracy, and the rule of law. Understanding these categories can aid businesses and stakeholders in navigating the regulatory landscape.

Minimal risk

This category includes all AI systems that can be developed and used within the existing legal framework without additional obligations. The majority of AI systems currently in use within the EU fall into this classification. Providers of such systems may choose to adhere voluntarily to the requirements for trustworthy AI and to codes of conduct.

High risk

The proposal identifies a limited number of AI systems with the potential to adversely impact people's safety or fundamental rights as high risk. The AI Act includes an annex listing high-risk AI systems, which may be periodically reviewed to align with evolving AI use cases. This category also encompasses safety components of products covered by sector-specific Union legislation, which remain high risk when subject to third-party conformity assessment under that legislation.

Unacceptable risk

This category encompasses a highly restricted set of particularly harmful AI uses that violate EU values by contravening fundamental rights. The following uses are banned:

  • Social scoring for public and private purposes.
  • Exploitation of vulnerabilities of individuals and the use of subliminal techniques.
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, with narrow exceptions.
  • Biometric categorization of individuals based on data inferring race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation, unless used to identify victims. Filtering datasets based on biometric data in law enforcement is still possible.
  • Individual predictive policing.
  • Emotion recognition in workplaces and educational institutions, except for medical or safety reasons.
  • Untargeted scraping of the internet or CCTV for facial images to build or expand databases.

Specific transparency risk

Certain AI systems are subject to specific transparency requirements, particularly where there is a clear risk of manipulation (e.g., through the use of chatbots). Users should be informed when they are interacting with a machine.

Figure 1: AI System Risk Categories (Source: AI & Partners)
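To make the four tiers described above concrete, the minimal sketch below maps example use cases from this article to risk tiers. The tier descriptions and example classifications are illustrative assumptions for orientation only, not a legal classification tool.

```python
# A minimal sketch of the AI Act's four risk tiers, as described above.
# The example classifications are assumptions drawn from this article,
# not a substitute for a legal assessment.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations; listed in an annex to the AI Act"
    SPECIFIC_TRANSPARENCY = "disclosure duties, e.g., for chatbots"
    MINIMAL = "no additional obligations; voluntary codes of conduct"

# Hypothetical examples based on the categories discussed in this article.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "safety component of a regulated product": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.SPECIFIC_TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```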

Guidance on categorizing AI systems

Navigating the categorization of AI systems involves a structured approach to assess and manage associated risks.

Criteria for assessing risk levels

The AI Act provides clear criteria for assessing risk levels, considering factors such as the potential harm to individuals, the societal impact, and the nature of the application. These criteria guide businesses in objectively evaluating their AI systems.

Challenges and solutions in determining risk levels

The AI Act acknowledges the challenges in determining risk levels, especially in rapidly evolving technological landscapes. Businesses are encouraged to engage with regulatory bodies to address uncertainties, fostering a collaborative approach to overcome challenges and find effective solutions.

Types of AI covered by the AI Act

Examples of AI systems falling under the Act

The AI Act covers a broad spectrum of AI systems, including autonomous vehicles, facial recognition systems, and medical diagnosis AI. These examples highlight the diversity of applications subject to regulatory oversight.

Prohibited AI systems

Types of AI with potential harm and societal impact

Certain types of AI systems are banned altogether due to their potential for harm and significant societal impact. This includes AI applications designed for social scoring and the other prohibited practices listed above. The prohibition reflects a commitment to preventing misuse and safeguarding societal well-being.

The rationale behind the prohibition

The rationale behind the prohibition is rooted in ethical considerations and the potential for severe harm. The AI Act prioritizes the protection of individuals and society, aiming to prevent the development and deployment of AI systems that could lead to detrimental consequences.

Implications for stakeholders and the industry

The ban on specific AI systems has implications for stakeholders and the industry at large. It underscores the importance of responsible AI development and encourages innovation in areas that align with ethical standards. Stakeholders are urged to explore alternative technologies that contribute positively to societal well-being.

Obligations for AI actors

Product manufacturers

Product manufacturers play a pivotal role in ensuring the responsible development and deployment of AI systems.

Compliance with technical requirements

Product manufacturers are obligated to adhere to the technical requirements outlined in the regulatory framework. This involves rigorous testing and verification processes to ensure that AI systems meet the specified standards for safety, reliability, and ethical considerations.

Documentation and transparency obligations

Transparency is key in the AI landscape. Product manufacturers must diligently document the design, functionalities, and potential risks associated with their AI systems. This documentation not only aids in regulatory compliance but also fosters trust by providing stakeholders with a clear understanding of the AI system's capabilities and limitations.

Importers and distributors

Importers and distributors serve as crucial intermediaries in the AI supply chain, holding distinct responsibilities.

Responsibilities in ensuring conformity

Importers and distributors are tasked with ensuring that the AI systems they handle conform to the regulatory standards set by the AI Act, under Article 26 (Obligations of importers). This involves thorough assessments and collaboration with product manufacturers to address any identified issues, reinforcing the commitment to delivering AI systems that meet established criteria.

Record-keeping obligations

Record-keeping is essential for accountability, as reflected in Article 12 (Record-keeping) of the AI Act. Importers and distributors are required to maintain meticulous records regarding the AI systems they deal with, including information on the origin of the system, conformity assessments, and any corrective actions taken. These records contribute to a transparent and accountable AI ecosystem.
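As a sketch of what such a record might capture, the example below models the items listed above (origin, conformity assessments, corrective actions). The schema and field names are assumptions for illustration; the AI Act does not prescribe this format.

```python
# A minimal sketch of an importer's or distributor's record for an AI system.
# The schema is an assumption based on the items listed above; it is not a
# format prescribed by the AI Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    system_id: str
    provider: str                    # origin of the AI system
    conformity_assessment_ref: str   # reference to the assessment relied upon
    placed_on_market: date
    corrective_actions: list[str] = field(default_factory=list)

    def log_corrective_action(self, action: str) -> None:
        """Append a dated note of a corrective action taken."""
        self.corrective_actions.append(f"{date.today().isoformat()}: {action}")

# Hypothetical usage
record = AISystemRecord(
    system_id="chatbot-eu-001",
    provider="ExampleAI BV",
    conformity_assessment_ref="CA-2024-017",
    placed_on_market=date(2024, 3, 1),
)
record.log_corrective_action("updated the user-facing AI disclosure notice")
print(record)
```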

Users

Users, whether organizations or individuals, also bear responsibilities in the ethical use of AI technologies.

Adherence to guidelines and restrictions

Users are obligated to adhere to the guidelines and restrictions set forth in the AI Act. This involves utilizing AI systems in a manner that aligns with ethical considerations, societal values, and legal requirements. Users play a critical role in ensuring that AI is deployed responsibly and for the benefit of society. For example, Article 13 (Transparency and provision of information), as it stands, places the following expectations on users (see the sketch after this list):

User understanding:

  • The user should be enabled to understand and use the AI system appropriately.
  • This includes knowing how the AI system works and understanding the data it processes.

Explanation to affected persons:

  • Users should be able to explain the decisions made by the AI system to the affected person, as outlined in Article 68(c) of the Regulation.
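To illustrate the second point, the sketch below assembles the kind of explanation a user might provide to an affected person. The field names and example content are hypothetical illustrations of the obligations described above, not wording from the AI Act.

```python
# A minimal sketch of an explanation a user might assemble for an affected
# person. Field names and content are hypothetical assumptions, not wording
# from the AI Act.
from dataclasses import dataclass

@dataclass
class AIDecisionExplanation:
    system_name: str            # which AI system produced the decision
    intended_purpose: str       # what the system is designed to do
    data_categories: list[str]  # kinds of data the system processed
    decision: str               # the outcome communicated to the affected person
    review_contact: str         # where to contest the decision or seek human review

    def summary(self) -> str:
        return (
            f"Decision '{self.decision}' was produced by '{self.system_name}' "
            f"({self.intended_purpose}), processing: {', '.join(self.data_categories)}. "
            f"To contest it or request human review, contact: {self.review_contact}."
        )

# Hypothetical example
explanation = AIDecisionExplanation(
    system_name="CreditScorer v2",
    intended_purpose="assessing consumer creditworthiness",
    data_categories=["income history", "repayment record"],
    decision="application declined",
    review_contact="the lender's customer review desk",
)
print(explanation.summary())
```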

Concluding thoughts

Recap of key points

This brief exploration of the AI Act reveals a transformative regulatory landscape, balancing innovation with ethical considerations. Key highlights include categorizing AI systems, defining obligations for AI actors, and addressing grey areas in assessments.

Call to action

For businesses, compliance is not just a legal obligation but a strategic imperative. Embrace ethical AI practices outlined in the AI Act to foster trust, mitigate legal risks, and contribute to a responsible AI future. Businesses must collectively prioritize transparency, accountability, and human-centric AI, ensuring a harmonious integration of technology into our societal fabric.

Sean Musch Co-CEO/CFO
[email protected]
Michael Charles Borrelli Co-CEO/COO
[email protected]
AI & Partners, Amsterdam

Charles Kerrigan Partner
[email protected]
CMS UK, London