
EU: The AI Act – Where we stand today

The EU Artificial Intelligence Act (AI Act) is part of the overarching EU Digital Strategy. The strategy 'focuses on putting people first in developing technology, and defending and promoting European values and rights in the digital world.'1

On December 8, 2023, after an extensive discussion that lasted several days and was preceded by months of intense negotiations, the EU Parliament, Council, and Commission announced that they had reached a provisional agreement on the AI Act. 

This is not the end of the legislative process: the agreement is only political, and for the AI Act to become EU legislation both the Parliament and the Council must formally adopt it. A reasonable forecast is that enactment will take place by the end of 2024, but it remains to be seen how discussions will proceed. These discussions will focus on the actual text of the AI Act, which may differ from the text available today. In this Insight article, Francesca Gaudino, from Baker & McKenzie LLP, comments on the current text of the AI Act, which may be amended upon formal adoption by the Parliament and Council. 



The AI Act was first proposed by the EU Commission in April 2021. In some respects, it reflects the General Data Protection Regulation (GDPR), which has so far been regarded as one of the most debated EU pieces of legislation. This is probably because, like the GDPR, the AI Act's ambitious aim is to set a regulatory and legal framework to rule on a complex matter. The difference is that the GDPR came after more than a decade of European experience in ruling on data protection issues. The AI Act, in contrast, is the first European initiative on AI and intends to break the traditional dichotomy between technology and law.  

Purposes, definitions, and application of the AI Act

In the explanatory memorandum on the rationale behind the AI Act, the Commission lists the following as its specific objectives:  

  • ensure that AI systems placed and used in the EU market are safe and respect existing law on fundamental rights and EU values;  
  • ensure legal certainty to facilitate investment and innovation in AI;  
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and 
  • facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation. 

The definition of an AI system is deliberately broad and all-encompassing, in alignment with the recently updated definition provided by the Organisation for Economic Co-operation and Development (OECD): 'a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.'2

A provider of an AI system is a natural or legal person, public authority, agency, or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. A user of an AI system is defined as any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity. 

Similarly broad are the definitions of the main subjects to which the AI Act applies, including an importer, distributor, and operator of an AI system, as well as the definitions of actions that trigger application of the AI Act, such as 'placing on the market,' 'making available on the market,' or 'putting into service' an AI system. 

The AI Act has an extraterritorial reach as it applies to:  

  • users of AI systems based in the EU;  
  • providers placing AI systems on the market or putting them into service in the EU, irrespective of their location (i.e., inside or outside the EU); and  
  • providers and users of AI systems located outside the EU territory, when the relevant output is used within the EU territory. 

Risk-based approach and obligations 

The AI Act takes a risk-based approach in the sense that it classifies AI systems into risk categories based on the risks that their use (or potential use) may trigger. For each risk category, there are specific requirements to be fulfilled. 

The risk categories under which AI systems are organized are:  

  • Prohibited AI practices: AI systems triggering high risks for human beings and their fundamental rights, freedoms, and safety are considered as posing an unacceptable risk and are therefore prohibited. This is the case for AI systems deploying subliminal techniques capable of affecting an individual's behavior, exploiting the vulnerabilities of individuals, such as their age or disability, or evaluating individuals based on social scoring.  
  • High-risk AI systems: This category carries the heaviest compliance burden, including the establishment of a risk management system, specific quality criteria for the data sets deployed, data governance practices, technical documentation and record-keeping obligations, transparency obligations, effective human oversight, security and cybersecurity measures, conformity assessments, and cooperation with competent authorities, among many others. These requirements may be complemented by sector-specific legislation, such as in health care, transportation, and finance. 
  • Low/minimal-risk AI systems: For this category, the absence of specific obligations is balanced by the suggestion to adhere to codes of conduct on a voluntary basis. For some AI systems, there are specific transparency and information requirements for users, who should be made aware that they are interacting with a machine. 
  • General-purpose AI (GPAI) systems: This category also covers foundation models and generative AI systems. The specific rules are still to be finalized; so far, a two-tiered approach has been adopted, with 'transparency requirements for all general-purpose AI models and stronger requirements for powerful models with systemic impacts.'3 In addition, a dedicated AI Office has been set up within the European Commission to oversee the rules applicable to the most advanced AI models. 

The AI Act establishes the possibility of organizing AI regulatory sandboxes in Member States in support of innovations, acting as a sort of secure environment to facilitate the development, testing, and validation of innovative AI systems for a limited time before their placement on the market or their deployment.  

Government structure and sanctions

A European Artificial Intelligence Board has been established, with its main duties being to support the European Commission to foster effective cooperation of the national supervisory authorities, contribute to guidance on emerging issues within the scope of the AI Act, and assist national supervisory authorities and the European Commission to foster coherent application of the AI Act. 

At a Member State level, national supervisory authorities will be established or designated to ensure the application and implementation of the AI Act, including sanctioning powers for non-compliance, although the enforcement process is still to be determined in detail. 

In terms of sanctions, the current framework provides for fines of up to €35 million or 7% of a company's annual global revenue for the use of prohibited AI systems. For breaches of the AI Act's requirements, the threshold is €15 million or 3% of a company's annual global revenue, while breaches of the information requirements attract fines of up to €7.5 million or 1.5% of a company's annual global revenue. A proportionate approach is envisaged regarding the thresholds of fines for small and medium-sized enterprises and start-ups. 

The AI Act, once enacted, will be enforceable in different phases, at least according to the consolidated text. Indeed, some obligations will be enforceable after six months (this may include the rules on prohibited AI systems), others after 12 months, and the remaining after 24 months, thus leaving stakeholders with a two-year period to prepare for compliance. 

What's next 

What is reasonably expected is that application and enforcement will have to consider other pieces of legislation that materially interact with the AI Act, such as the GDPR, sector-specific regulations (e.g., in health care and finance), and also the Artificial Intelligence Liability Directive. The proposal for this Directive was published by the European Commission in September 2022, and it intends to supplement the EU liability framework to capture the specific instances of damage caused by AI systems, ensuring that individuals enjoy the same level of protection as for harm caused by other technologies in the EU. 

Whether the AI Act will serve as the benchmark for AI regulation worldwide, similar to the role of the GDPR in the data protection environment, remains to be seen. At the current time, the consolidated text of the AI Act must still be voted on, so the content discussed above may change and be amended. A comprehensive analysis of the text and its impact on business should be deferred until the AI Act is enacted. 

Francesca Gaudino Partner 
[email protected]  
Baker & McKenzie LLP, Milan