
EU: What European employers should know about the draft AI Act

Artificial intelligence (AI) is everywhere - in translation or navigation services, in software for the monitoring of an assembly line, or in CV screening tools. The EU has been negotiating the world's first attempt to comprehensively regulate AI for 18 months. On June 14, 2023, the European Parliament voted overwhelmingly in favor of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the draft AI Act). Dr. Jessica Jacobi, Partner at KLIEMT.Arbeitsrecht Partnership of Lawyers Ltd., outlines what European employers should consider in light of the draft AI Act.


Status of the legislative process - and why European employers should start acting now

On June 14, 2023, the European Parliament adopted its position on the draft AI Act and thereby introduced the amended text into the next stage of the legislative process. Negotiations between the European Parliament and the Council of the European Union, representing the Member States, will now follow. The draft AI Act is expected to be adopted in 2024 and to enter into force with a grace period for implementation, probably in 2025 or 2026.

Despite the expected grace period, European companies should start acting now. The reason is simple: deploying software takes time, and once chosen and implemented, software is not easily removed, even in countries without a works council system.

Developers of software that includes AI aspects are, of course, highly aware of the expected restrictions, which will apply to all software intended for use by, or for the monitoring of, EU citizens. Similar to the General Data Protection Regulation (GDPR), the AI Act will have extraterritorial scope and set a benchmark for all companies that target the EU market.

AI is everywhere

AI is already omnipresent in working life. AI is defined as self-learning systems that establish and apply rules with the help of algorithms and can thus compute probable results, which are not necessarily correct.

Workers are already using AI, including in the context of their professional activities:

  • They use search engines or translation programs, each of which is based on an algorithm.
  • Speech generators are used to draft a brief, prepare a marketing presentation, or summarize a text.

Companies use programs that rely on AI for parts of their functionality:

  • A large online retailer uses AI-controlled robots in the warehouse for 'pick and pack' to select goods in the high-bay warehouse and pack them for shipment.
  • Another online fashion retailer is using AI to track fashion trends on social media and to gauge the popularity of sample styles.
  • AI can provide answer suggestions for dialogues in messaging applications. AI is used to connect individuals with colleagues who are working on a similar topic to their own.
  • AI is used in the monitoring of machines, reports unusual key figures or noises, and designs entire text modules for maintenance reports.
  • In online retail, chatbots are used for customer service communication. In fashion retail, AI-controlled customer advisors make fashion suggestions.
  • The HR department uses AI-supported recruiting programs to actively source suitable candidates from social media. The program screens these candidates against the profiles of the open positions in terms of professional and social skills.

Existing labor law regulations for the use of AI

Even today, the use of AI does not take place in a legal vacuum. The requirements of data protection law, privacy law, and copyright law must be observed. Employers should already issue rules to ensure that generative AI systems, such as chatbots or translation programs, are used in accordance with the law. In countries with a works council system, the intended use of AI systems is subject to co-determination under the general provisions for the use of electronic tools and under the rules for employee monitoring.

The draft AI Act

The draft AI Act is the world's first attempt to comprehensively regulate the use of AI. It is hoped that it will have a signal effect beyond the borders of Europe. The draft AI Act still contains the risk-based approach of the original draft which was adopted by the European Commission on April 21, 2021, but also many additions as outlined below.

Starting with the highest risk category, all types of AI that can be used to oppress people are prohibited. This includes, for example, permanent surveillance of public spaces by means of biometric recognition systems, or so-called 'social scoring,' i.e., the permanent evaluation of social behavior (Article 5 of the draft AI Act).

Secondly, high-risk AI (Article 6 of the draft AI Act) refers to systems that can be beneficial but can also cause significant harm, such as self-driving cars or systems used to influence voters. The obligations under the draft AI Act are explicitly directed at the users of such systems as well. Particularly important from a labor law perspective is that this category covers the use of AI systems in the selection of job applicants (Annex III, paragraph 1(4)(a) of the draft AI Act). Another example is systems that decide on admission to professional and other training courses (Annex III, paragraph 1(3)(a) of the draft AI Act). Finally, this category also includes systems that make decisions about promotions, transfers, or terminations, or that monitor and evaluate employee behavior and performance (Annex III, paragraph 1(4)(b) of the draft AI Act). For such systems, the data with which the AI is trained must be selected so as to avoid discrimination. The user must pay attention to system security, document how the system works, ensure that a human monitors the application, and carry out a risk analysis before using the system.

The final risk category comprises low-risk use cases, such as the use of chatbots in telephone hotlines. Here, it must be made clear whether an answer comes from a human or from the AI.

In addition, the European Parliament has added general principles for all users and developers of AI (Article 4(a) of the draft AI Act):

  • human oversight;
  • technical reliability;
  • data protection and privacy;
  • transparency;
  • protection against discrimination; and
  • sustainability and environmental friendliness.

Developers of generative AI, such as speech generators, must fulfill transparency requirements. They must disclose the use of AI and document with which data the system was trained. Finally, AI is now more narrowly defined (Article 3(1)(1) of the draft AI Act). Critics of the Commission's draft AI Act had claimed that its AI definition would cover any spreadsheet or automated coffee machine.

Finally, it is worth noting that the fines for infringements under the draft AI Act go even beyond those under the GDPR: up to €40 million or 7% of worldwide annual turnover, whichever is higher. For a group with an annual worldwide turnover of €1 billion, for example, the cap would thus be €70 million.

What impact does the draft AI Act already have on European employers?

As mentioned at the beginning, the draft AI Act will probably only become applicable after a transitional period. However, it may already have a signal effect as 'state of the art' for assessments in practice, including by authorities. Hence, companies and employers should already thoroughly study and observe its requirements. The development and introduction of software, whether AI-supported or not, is a time-consuming process that is difficult to reverse, and not only because of co-determination in companies.

It is already advisable to issue rules on dealing with AI-supported systems. Some large corporations and banks have banned the professional use of speech generators altogether. However, there is also a case for not imposing a general ban. In that case, employees must be advised not to enter personal data into the system and not to commit copyright infringements. They should also be made aware that speech generators do not necessarily produce correct results, but only something that sounds probable and plausible, so every draft should be examined critically and in detail.

It is therefore advisable to adopt the risk-based approach of the draft AI Act in internal assessments already now. For the HR sector and labor law, the classification of AI-supported recruiting systems as high-risk is of particular importance. The software of a renowned European manufacturer may raise fewer questions here than an unknown US product.

It is already advisable to conduct and document a comprehensive compliance risk assessment before using AI. The objectives of the AI use should be recorded and weighed against the interests of the employees and against other compliance risks.

Employers who develop or use AI will be obliged to train their employees in the use of AI ('shall take measures to ensure a sufficient level of AI literacy of their staff [...]', Article 4(b) of the draft AI Act).

Dr. Jessica Jacobi, Partner
[email protected]
KLIEMT.Arbeitsrecht Partnership of Lawyers Ltd., Düsseldorf
