
EU: The AI Act - the first regulatory regime for AI across the EU

On March 13, 2024, the European Parliament adopted the European Union's (EU) Regulation laying down harmonized rules on artificial intelligence (AI), commonly known as the Artificial Intelligence Act (the AI Act) (see the European Parliament press release and OneTrust DataGuidance News article). Almost three years after the European Commission's first legislative proposal, and after the EU legislators reached a political agreement on the key aspects of the AI Act in the December 2023 trilogue following months of negotiations, the world's first comprehensive regulatory framework for AI has officially been approved.

This Insight article addresses the most important questions that companies and other entities should consider when conducting any activities involving AI. Valentino Halim, Junior Partner at Oppenhoff & Partner, unpacks the AI Act and provides insight into the scope and key obligations of the new regulatory framework for AI at the EU level.


What are the objectives of the AI Act? 

The AI Act is a landmark piece of legislation that aims to comprehensively regulate the use and development of AI across the EU Member States. It primarily seeks to establish a harmonized and consistent legal framework governing AI and ensure that AI technology is developed and used in a way that is safe and in line with EU values, including respect for fundamental rights and democracy. By establishing clear standards and guidelines, the AI Act also seeks to foster innovation in the AI sector while protecting citizens from potential harm associated with AI technologies. 

What technologies are covered by the AI Act? 

Most of the requirements of the AI Act apply to AI systems. Following a horizontal, sector-agnostic approach, the new regime covers all kinds of AI. Its provisions apply to everything from simple AI-based apps to complex AI systems across all industries, regardless of whether the AI is used in, for example, a financial application or a healthcare system.

The AI Act provides the first legal definition of AI, thereby determining the scope of application of the regime. While the Commission's proposed text of the AI Act contained an extremely broad and complex definition - which was widely criticized - the definition in the adopted text of the AI Act is somewhat narrower and less complicated. Under Article 3(1) of the AI Act, an AI system refers to 'a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' 

Based on this definition, autonomous operation and the generation of outputs by means of inference are the essential characteristics of AI within the meaning of the AI Act. This is almost identical to the Organisation for Economic Co-operation and Development's (OECD) AI definition. Systems operating exclusively on the basis of deterministic rules are not covered (see Recital 6 of the AI Act). Given these vague terms, legal uncertainty is likely to arise when determining which systems fall within the scope of the AI Act. It remains to be seen how supervisory authorities and courts will make use of this room for maneuver.

Which entities and individuals are affected by the AI Act? 

Almost all stakeholders involved in the development and use of AI are subject to certain requirements of the AI Act. In particular, 'providers' and 'deployers' of AI systems (and/or models) are addressed: 

  • a provider means an entity or individual that has developed an AI system, including general purpose AI (GPAI), and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge; and 
  • a deployer means an entity or individual using an AI system under their own responsibility. 

Importers and distributors are also subject to the requirements of the AI Act. The use of AI systems for private purposes is not covered by the new regime. 

In terms of geographical scope, the AI Act is not restricted to providers and deployers residing or established in the EU. The regime's reach also extends beyond the EU's borders, affecting:

  • any provider that places AI systems on the market or puts them into service within the EU; and
  • providers and deployers in third countries if the output generated by the AI system is used within the EU.

On this basis, even international companies will need to align their AI practices with the AI Act's requirements when engaging with the EU market. 

Regulatory framework for AI: Risk-based approach 

The AI Act establishes a risk-based approach. Its regulatory framework categorizes AI systems according to the level of risk they involve and tailors the nature and content of the applicable requirements to the intensity and scope of the respective risk. This includes the prohibition of certain unacceptable AI practices, strict requirements for AI systems involving high risks, and general, less strict requirements, such as transparency obligations, for all other AI systems. This tiering is summarized in the sketch below.
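For orientation only, the risk tiering can be captured in a short sketch. The tier names follow the structure described in this article, but the mapping of tiers to consequences below is a simplified, hypothetical paraphrase for illustration, not an authoritative classification:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers under the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited AI practices"
    HIGH = "high-risk AI systems (HRAIS)"
    LIMITED = "AI systems subject to specific transparency obligations"
    MINIMAL = "all other AI systems"

# Hypothetical, non-exhaustive mapping of risk tier to regulatory consequence,
# paraphrasing the tiers described in this article.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "strict requirements: risk management, data quality, "
                   "human oversight, conformity assessment, and more",
    RiskTier.LIMITED: "transparency duties, e.g., disclosing AI interaction",
    RiskTier.MINIMAL: "general, less strict requirements",
}

for tier in RiskTier:
    print(f"{tier.name}: {tier.value} -> {CONSEQUENCES[tier]}")
```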

Prohibited AI practices 

Under the EU AI Act, certain AI systems and practices are considered unacceptable and are therefore prohibited to ensure safety, fundamental rights, and the protection of health and the environment. Prohibited practices include: 

  • Manipulative and exploitative practices: The use of AI systems that aim to or result in distorting human behavior in a way that may cause significant harm, in particular through the use of subliminal techniques or other manipulative or deceptive techniques that undermine the autonomy, decision-making, or free choice of individuals. 
  • Social rating systems: AI systems used by public or private actors for the social rating of natural persons, which may lead to discriminatory outcomes and the marginalization of certain groups, are prohibited. Such systems assess or classify natural persons or groups thereof based on multiple data points related to their social behavior in multiple contexts or known, inferred, or predicted personal or personality characteristics over certain periods of time. 
  • Real-time remote biometric identification: The use of AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes is generally prohibited, except in narrowly defined and limited situations where the use is strictly necessary to serve a substantial public interest whose importance outweighs the risks. 
  • Biometric categorization: AI systems based on biometric data and used to deduce or infer sensitive characteristics or attributes of individuals, such as political opinions, trade union membership, religious or philosophical beliefs, race, sex life, or sexual orientation, are prohibited. 
  • Emotion recognition systems: The use of AI systems to identify or infer the emotions or intentions of natural persons based on their biometric data in work and educational contexts is prohibited, except when used solely for medical or safety reasons. 

These prohibitions are aimed at preventing abuse and ensuring compliance with fundamental rights and ethical standards in the use of AI technologies. 

Restrictions relating to high-risk AI systems 

For high-risk AI systems (HRAIS), the AI Act imposes strict requirements relating to, among other things, data quality, transparency, and human oversight. HRAIS are systems that are used in sensitive areas and may pose a high risk to the health, safety, or fundamental rights of people. The areas in which AI systems are categorized as high-risk include:

  • critical infrastructure (e.g., transport and utilities); 
  • education and vocational training; 
  • employment, human resource management, and access to self-employment; 
  • essential private and public services (e.g., creditworthiness); 
  • law enforcement; 
  • migration, asylum, and border control; and 
  • administration of justice and democratic processes. 

Systems that are identified as HRAIS are subject to strict requirements, which include the following:

  • Risk management: Providers must establish a risk management system to identify and mitigate the risks associated with the use of AI systems. 
  • Data quality and management: The datasets used for the training, validation, and testing of AI systems must meet high-quality standards and must not contain any biases that could lead to discriminatory results. 
  • Technical documentation and recordkeeping: Providers must produce comprehensive technical documentation to demonstrate the AI system's compliance with the requirements of the AI Act. 
  • Transparency and provision of information: Providers must ensure that AI systems are transparent and that users are informed of the system's operation and limitations. 
  • Human oversight: HRAIS must be designed to ensure adequate human oversight to minimize and correct errors or malfunctions. 
  • Robustness, accuracy, and cybersecurity: HRAIS must be robust, accurate, and have an appropriate level of cybersecurity. 
  • Conformity assessment: Before being placed on the market or put into service, HRAIS must undergo a conformity assessment to ensure they meet the requirements of the AI Act. 

These requirements aim to protect people's safety and fundamental rights while promoting innovation and the development of AI technologies in the EU.

Requirements for GPAI systems and models 

For GPAI models and AI systems classified as GPAI systems, specific obligations apply under the EU AI Act. The most important aspects include:

  • Definition and demarcation: GPAI models are clearly defined and demarcated from AI systems to ensure legal certainty. They are based on key functionalities such as generality and the ability to perform a wide range of tasks competently. These models are typically trained with large amounts of data and can be brought to market in various ways, including libraries, APIs, direct downloads, or physical copies. 
  • Specific rules for GPAI models: There are specific rules for GPAI models and, additionally, for GPAI models that pose systemic risks. These rules also apply if such models are integrated into or form part of an AI system. The obligations for providers of GPAI models apply as soon as the models are placed on the market. 
  • Transparency measures: Providers of GPAI models must take appropriate transparency measures, including the creation and updating of documentation and the provision of information about the GPAI model for its use by downstream providers. Technical documentation should be prepared and updated for the purpose of making it available, upon request, to the AI Office and national competent authorities. 
  • Free and open-source license: GPAI models published under a free and open-source license and whose parameters are made publicly available are subject to exceptions to the transparency requirements, unless they pose a systemic risk. In any case, providers must make a summary of the content used to train the model publicly available and implement an EU copyright compliance policy. 
  • Risk assessment and mitigation: Providers of GPAI models with systemic risks must assess and mitigate potential systemic risks. This includes conducting model assessments, including adversarial testing, and continuously assessing and mitigating systemic risks throughout the life cycle of the model. 
  • Cooperation along the AI value chain: Providers of GPAI models should work closely with the providers of the corresponding high-risk AI systems to enable their compliance with the relevant obligations under the AI Act. 

Requirements applicable to all AI systems 

For AI systems that are not classified as high-risk, the AI Act provides for general rules that focus on transparency, data security, and compliance with ethical standards. These rules are less strict than those for high-risk AI systems and include:

  • Transparency: Providers of AI systems must ensure that users are informed about the functioning and limitations of the system. This applies, in particular, to AI systems that interact with natural persons or generate content that is indistinguishable from human-generated content. 
  • Data protection and security: Providers of AI systems must guarantee the security and protection of the data used. This includes compliance with the General Data Protection Regulation (GDPR) and other relevant data protection regulations. 
  • Ethical standards: AI systems should be developed and used in accordance with ethical principles to avoid discrimination and uphold fundamental rights. 

For AI systems that are not categorized as high-risk but may nevertheless involve specific risks, such as systems for interacting with people or generating content, the AI Act provides for special transparency obligations. For example, users must be informed when they interact with an AI system or when content has been generated or manipulated by an AI system. 

What are the sanctions for violations? 

The new AI Act comes with a stringent system of sanctions. In the event of violations of the obligations under the AI Act, the competent supervisory authorities may impose significant administrative fines, which may even exceed those under the GDPR. These fines may amount to a maximum of (in each case, whichever amount is higher; see the illustrative calculation after this list):

  • up to €35 million or 7% of the worldwide annual turnover of the group of companies in the event of violations of prohibited AI practices; 
  • up to €15 million or 3% of the worldwide annual turnover of the group in the case of non-compliance with the requirements for HRAIS and GPAI and the transparency obligations for certain AI systems; or 
  • up to €7.5 million or 1% of the worldwide annual turnover of the group of companies in the event of incorrect or incomplete statements to the supervisory authorities. 
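For illustration only, the interplay between the fixed amounts and the turnover-based caps can be expressed as a simple calculation. The tier amounts below mirror the bullet points above; the helper function and the example turnover figure are hypothetical:

```python
# Maximum fine tiers under the AI Act: (fixed amount in EUR, share of
# worldwide annual turnover); the higher of the two amounts applies.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "hrais_gpai_transparency": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * worldwide_annual_turnover_eur)

# Hypothetical group with EUR 2 billion worldwide annual turnover:
# 7% (EUR 140 million) exceeds the fixed EUR 35 million amount.
print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```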

Enforcement will primarily be the responsibility of national supervisory authorities.

What are the competent regulators for enforcement measures? 

Several authorities and bodies will be responsible for monitoring and enforcing the provisions of the AI Act, including the AI Office. The main regulatory actors include: 

  • EU AI Office and independent experts: A scientific panel of independent experts is to be established to support the implementation and enforcement of the regulation, in particular the AI Office's monitoring of activities regarding GPAI models. These experts shall be selected on the basis of up-to-date scientific or technical expertise in the field of AI and shall carry out their tasks impartially, objectively, and in compliance with the confidentiality of information and data. Member States may request assistance from the pool of experts for their enforcement activities. 
  • National supervisory authorities: Each Member State shall designate at least one notifying authority and at least one market surveillance authority as national supervisory authorities to monitor the application and implementation of the AI Act. Member States may entrust any type of public body with the tasks of the national supervisory authorities, in accordance with their specific national organizational characteristics and needs. 
  • Union AI testing support structures: Union AI testing support structures shall be established and made available to Member States to support adequate enforcement in relation to AI systems and to strengthen Member States' capacities. 

These authorities and structures are crucial in ensuring compliance with the requirements set out in the AI Act and help to promote the safe and responsible use of AI systems in the EU. 

What is the timeline for entry into force and application of the AI Act? 

Following its adoption by the European Parliament, the AI Act must still be approved by the Council of the European Union to become applicable law. The AI Act is expected to be formally adopted and published in the Official Journal of the European Union in May or June 2024. It will officially enter into force 20 days after its publication. This date is the relevant starting point for the calculation of the various milestones in the legislative timeline, from which key provisions of the AI Act become applicable in several stages:

  • Prohibited AI practices: A six-month transition period is granted from the date of entry into force of the AI Act to comply with the provisions relating to prohibited AI practices. Start of application: November/December 2024.
  • Codes of practice: Codes of practice are foreseen for GPAI models to help providers demonstrate compliance with their legal obligations. The Commission will be tasked with approving these codes, which must be developed within nine months of the AI Act entering into force. Start of application: February/March 2025.
  • GPAI provisions and sanctions: The provisions relating to GPAI models and administrative fines will start to apply 12 months after the AI Act enters into force. Start of application: May/June 2025.
  • HRAIS: The requirements for HRAIS under Annex III will take effect 24 months from the date of entry into force, with an extended period of 36 months for HRAIS obligations under Annex I for AI systems already covered by existing EU product legislation (EU harmonization legislation). Start of application: May/June 2026 and May/June 2027.
  • AI Act as a whole: The provisions of the AI Act in their entirety (with the exception of the above provisions) will apply 24 months after entry into force. Start of application: May/June 2026.
  • Exception for AI systems already in use: HRAIS that have already been placed on the market or put into service before the general date of application (24 months after entry into force) will not be subject to the AI Act unless significant changes are subsequently made to their design. GPAI models already in use must comply with the obligations 36 months from the date of entry into force regardless of such changes. Start of application: May/June 2026.

Companies and other entities are well advised to use this timeline as a basis for phased implementation of the various provisions of the AI Act, where applicable. 
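For orientation, the staggered application dates can be derived mechanically from the entry-into-force date. The sketch below assumes, purely for illustration, entry into force on June 1, 2024, in line with the May/June estimate above; the actual date depends on the publication in the Official Journal:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here: we start on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date: 20 days after publication in the
# Official Journal, assumed here to fall on June 1, 2024 for illustration.
entry_into_force = date(2024, 6, 1)

milestones = [
    ("Prohibited AI practices", 6),
    ("Codes of practice for GPAI models", 9),
    ("GPAI provisions and administrative fines", 12),
    ("HRAIS (Annex III) and the AI Act in its entirety", 24),
    ("HRAIS under Annex I (EU harmonization legislation)", 36),
]

for provision, offset_months in milestones:
    print(f"{provision}: applies from {add_months(entry_into_force, offset_months):%B %Y}")
```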

How can entities prepare for the AI Act? 

The AI Act's wide scope and complex regulatory regime mean that a large number of companies and other entities will have to implement the new requirements. Affected entities should use this timeline and start preparing now by:

  • assessing whether and, if applicable, to what extent they will be subject to the new requirements and obligations, and preparing for necessary adjustments; 
  • where applicable, planning for sufficient financial and human resources for implementation and involving competent external partners in a timely manner to assist with implementation; 
  • establishing appropriate AI compliance and governance structures, ideally based on existing structures within the entity; and 
  • identifying the requirements to which the entity is subject and implementing appropriate measures, including adapting internal company processes accordingly. 

Preparations should not be delayed by the fact that not all requirements are yet fully settled and clear in detail. Entities are advised to adopt a risk-based approach that begins with the implementation of the most urgent and highest-risk requirements.

Conclusion and outlook 

The EU's AI Act is a significant step meant to ensure that AI technologies are developed and used in a manner that benefits society while safeguarding individual rights and public safety. As the AI Act moves towards implementation, companies developing or using AI must carefully assess their technologies and practices to ensure compliance. Companies should start implementing the requirements of the new regime as soon as possible.

However, the success and acceptance of the AI Act will ultimately also depend, to a large extent, on whether companies can implement the complex requirements in a practical manner and whether supervisory authorities and courts can find practicable and uniform standards for the application and enforcement of the regime. It remains to be seen whether this will be achieved in practice. 

Valentino Halim Junior Partner 
[email protected] 
Oppenhoff & Partner, Frankfurt 
