EU: AI Act signed by Coreper

On February 2, 2024, the Belgian Presidency of the European Union announced, via X (formerly Twitter), that the Committee of Permanent Representatives (Coreper) had signed the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the AI Act). On the same date, the European Parliament Committee on the Internal Market and Consumer Protection (IMCO Committee) published the consolidated text of the AI Act.

The consolidated text of the AI Act

The consolidated text highlights that its purpose is to ensure a high level of protection of health, safety, and the fundamental rights enshrined in the Charter of Fundamental Rights of the European Union, including democracy, the rule of law, and environmental protection. National security, however, is excluded from the scope of the AI Act.

The consolidated text clarifies that the definition of an AI system, under Article 3(1) of the AI Act, has been amended to align it more closely with that of the Organisation for Economic Co-operation and Development (OECD). Further, Recital 6 of the AI Act clarifies that the definition of an AI system is not intended to cover simpler traditional software systems or programming approaches that are based on rules defined solely by natural persons to automatically execute operations.

Prohibited AI practices

Regarding prohibited AI practices, the prohibition now extends to real-time biometric identification by law enforcement in publicly accessible spaces, subject to exceptions. Other prohibitions include:

  • the use of untargeted scraping of facial images for the purposes of creating or expanding facial recognition databases;
  • emotion recognition at the workplace and in educational institutions, with exceptions;
  • biometric categorization based on certain specific beliefs or characteristics to a limited extent; and
  • predictive policing to a limited extent.

On predictive policing specifically, the consolidated text provides that the prohibition will not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity.

Fundamental rights impact assessment

The consolidated text outlines a light obligation for some deployers to conduct a fundamental rights impact assessment, concerning only:

  • deployers that are bodies governed by public law;
  • private actors providing public services; and
  • deployers that are banking and insurance service providers using AI systems that are listed as high-risk.

Notably, the fundamental rights impact assessment only needs to be carried out for aspects not already covered by other legal obligations, such as a Data Protection Impact Assessment (DPIA) under the General Data Protection Regulation (GDPR).

General-purpose AI models

The consolidated text sets out new provisions on general-purpose AI (GPAI) models. These include:

  • horizontal obligations to keep technical documentation up to date and make it available, on request, to the AI Office and national authorities;
  • providing certain information and documentation to downstream providers for the purpose of compliance with the AI Act; and
  • other requirements for models with systemic risks, such as risk assessments and ensuring an adequate level of cybersecurity protection.

The consolidated text also outlines that the classification of GPAI models will initially depend on their capabilities, based either on a quantitative threshold for the cumulative amount of compute used for training, measured in floating point operations, or on an individual designation decision by the European Commission. On copyright specifically, providers of GPAI models will need to put in place a policy to respect EU copyright law and make publicly available a summary of the content used for training the GPAI model, based on a template provided by the AI Office.

Grace period

The consolidated text notes that GPAI models that are on the market before the entry into force of the provisions related to GPAI models will have three years to bring operations into compliance, regardless of whether they undergo a substantial modification.

Next steps

The IMCO Committee outlined that it and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) are scheduled to vote on the AI Act on February 13, 2024.

You can read the Presidency press release here, the IMCO press release here, and access the consolidated text here.