EU: Challenges of AI in employment
Artificial intelligence (AI) solutions can save a lot of time and money. Even before the emergence of generative AI, a study conducted by the EU-US Trade and Technology Council (TTC) found that, as early as 2021, 28% of EU companies with more than 250 employees had deployed AI technology.
Use cases within the employment context, mainly based on machine learning (ML) and its subcategory, deep learning, are diverse: employee flight risk analysis, payroll optimization, and sentiment analysis.
Generative AI solutions, which refer to AI generating new content rather than simply relying on predefined patterns or examples, can already write engagement letters, create presentations, source code, videos, or CVs, or can be used for chatbots or virtual assistants. AI-supported platforms can also bring together employers and applicants so that human recruiters have more time for the final selection of the most convincing candidates.
However, the use of AI also bears risks such as privacy concerns, identity theft, or disruptive effects, as some jobs might even be substituted by AI.
In the following Insight article, Kirsten Ammon, from Fieldfisher, outlines the challenges posed by EU law to AI solutions in the context of employment. It presents the core issues under the GDPR and the draft AI Act and highlights the key considerations concerning AI utilization in the employment sector. Purely labor law legislative acts, such as those governing parental leave and employment contracts, are outside the scope of this article.
Overall, the legal requirements for AI within the EU have far-reaching business implications, affecting market access, compliance costs, data protection, transparency, and liability.
Legal instruments in the EU and Member State law
In EU law, the primary differentiation is typically between regulations and directives. Regulations have general application: they are binding in their entirety and directly applicable in all EU Member States (Article 288 of the Treaty on the Functioning of the European Union (TFEU)).
Directives, in comparison, are binding as to the result to be achieved but leave the choice of form and methods to the national authorities.
However, in the case of regulations, opening clauses allow EU Member States to introduce or maintain certain national variations or specifications within their national law. Similarly, Member States may deviate from the content of directives, in particular by introducing stricter or milder provisions. In both scenarios, albeit to a varying degree, it remains essential to always consider the national legal framework.
In some jurisdictions, other legislative acts may provide for additional requirements. For example, in Germany, the works council has to be involved in certain processing activities, background checks are subject to strict requirements, and labor law and criminal law specifics apply.
Data protection: GDPR
The General Data Protection Regulation (GDPR) imposes a variety of obligations on the use of AI. Infringements are subject to administrative fines of up to €20 million, or up to 4% of the undertaking's total worldwide annual turnover, whichever is higher.
The GDPR has a broad scope: it applies to controllers and processors established in the EU, regardless of whether the processing itself takes place in the EU, and to those established outside the EU where they offer goods or services to data subjects in the EU or monitor their behavior.
As the GDPR only applies to personal data, it must be analyzed whether such data is being processed and is actually needed. Personal data encompasses any information relating, directly or indirectly, to an identified or identifiable individual. This typically includes names, e-mail addresses, telephone numbers, and ID numbers, among others. Even pseudonymized data, such as a personnel number, remains personal data.
As fully anonymized data is excluded from the GDPR's scope, it is worth evaluating whether the respective purpose can also be fulfilled with anonymized (employee) data. However, achieving true anonymization entails high requirements.
Controller, joint controller, or processor?
It can also be challenging to differentiate between the roles of (joint) controllers and processors. This primarily affects the obligations outlined by GDPR and the contractual agreements to be established.
The controller defines the purposes and means of the data processing; if this is done jointly, a joint controllership exists. When AI service providers process personal data as processors on behalf of the controller but also use this data for their own training purposes, it is difficult to argue a mere controller-processor relationship (even though such a data processing agreement is often the only available option).
Controllers must provide a legal basis for their data processing activities. Generally, legitimate interests such as improving productivity and efficiency of the employees might justify the deployment of a certain AI solution. However, this has to be evaluated in detail by conducting a legitimate interest assessment.
Stringent requirements also apply if the use of AI solutions is to be based on consent. Consent is particularly required for collecting special category data, such as biometric data, health data, trade union membership, or data concerning sexual orientation. As consent must be freely given, the power imbalance inherent in employment relationships must also be taken into account.
The GDPR grants individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. Nevertheless, exceptions do exist, in particular where such processing is necessary for the performance of a contract or is based on explicit consent.
Automated decision-making means that decisions are solely made without human intervention, e.g., a company selecting candidates fully automatically without human HR personnel making the final decision. In comparison, if AI output is still subject to human involvement, such as an HR employee choosing an AI-preselected candidate, the abovementioned prohibition does not apply.
According to the legally non-binding viewpoint of the Article 29 Working Party, decisions within the employment context that "similarly significantly affect" individuals encompass decisions:
- that deny someone an employment opportunity or put them at a serious disadvantage; or
- that affect someone's access to education, for example, university admissions.
Legal peculiarities apply to special category data and to transparency obligations, which in particular necessitate disclosing the logic, significance, and consequences of the processing.
Transparency and data subject rights
It is generally required to establish transparency at the time when personal data are obtained, which may involve privacy notices that also include the involvement of third-party AI providers.
However, adhering to these transparency requirements can pose challenges when AI service providers themselves lack insight into the criteria governing the algorithmic decisions, referred to as the "black box problem." In such instances, it may not be feasible to inform employees adequately.
Another challenge with AI solutions is ensuring compliance with data subject rights, notably the rights to erasure, restriction of processing, and access.
A data protection impact assessment (DPIA) is required when a type of processing, in particular using new technologies, is likely to result in a high risk to the rights and freedoms of natural persons. In particular, a DPIA is required for:
- a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning or similarly significantly affecting the natural person;
- processing on a large scale of special categories of data, or relating to criminal convictions and offenses; or
- a systematic monitoring of a publicly accessible area on a large scale.
The list is non-exhaustive, and national data protection authorities in the EU shall publish blacklists of processing operations that require a DPIA and may publish whitelists of operations that do not. For example, in Germany, most of the 18 federal and state data protection authorities have published their own DPIA blacklists.
In summary, for many AI tools processing employee or applicant personal data, a DPIA will be required due to the potential high risk for the data subjects.
International data transfer
In case personal data should be transferred to countries outside the EU/EEA, additional requirements apply for international data transfer (e.g., standard contractual clauses as a transfer tool). However, the Court of Justice of the European Union's judgment in Data Protection Commissioner v. Facebook Ireland Limited, Maximillian Schrems (C-311/18) (the Schrems II Case) requires that the transferring party must conduct a transfer impact assessment to determine whether the respective data transfer requires supplementary measures. With respect to the US, a new adequacy decision, the EU-US Trans-Atlantic Data Privacy Framework, was adopted on July 10, 2023, by the European Commission.
Further GDPR requirements
Since AI often relies on Big Data for training purposes, involving large-scale processing where outcomes and conclusions remain undefined at the time of data collection, it is particularly challenging to uphold the general GDPR principles of data minimization, purpose limitation, and privacy by design. Furthermore, appropriate technical and organizational measures and retention periods (e.g., for the applications of rejected candidates) must be implemented.
Bias and hallucinations
Another challenge for AI, in particular for generative AI, is bias and hallucinations, both of which can pose a threat to the protection of personal data.
Bias means distortion effects in the development and use of AI that often reflect human partialities and prejudices. An example is a candidate selection tool that only suggests white men located in wealthy neighborhoods, even though applicants with, for example, an Asian migration background have objectively better qualifications.
Hallucinations refer to situations where AI systems, particularly those based on deep learning or generative models, produce outputs that are not based on real data but are instead entirely fabricated by the model. For example, an AI model falsely claims or invents incorrect facts (e.g., fake news). This can happen due to various reasons, such as lack of diverse training data, or inherent limitations in the model architecture. Therefore, sensitive personal data or trade secrets may be mistakenly generated or disclosed, incorrect data can lead to data breaches, and malicious actors can exploit this shortcoming to their advantage.
To mitigate risks of bias and hallucinations when deploying AI solutions, data quality and data variety must be ensured as well as a constant monitoring of the processing. Diverse teams (with varying backgrounds, genders, ethnicities, etc.) allow for different perspectives.
In addition, internal policies and regular training of employees can safeguard against certain risks when deploying AI solutions. Most importantly, a generative AI policy should include:
- a description of different data levels (e.g., training data, prompt data, output data) and admissible processing activities;
- transparency requirements;
- legal bases for all processing activities;
- measures to prevent discrimination, bias, and hallucinations; and
- rules to protect personal data, trade secrets, and other intellectual property rights.
AI Act (draft)
As part of the EU digital strategy, the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the draft AI Act) would be the world's first comprehensive AI legislation. It defines four risk categories that depend on the risk posed by the AI system to the user and possibly third parties; the higher the risk level, the higher the regulatory requirements. These range from AI systems posing an 'unacceptable risk,' which are completely prohibited, through 'high-risk' AI systems (entailing stringent requirements) and 'limited-risk' systems (mandating transparency requirements), to 'low-risk/minimal-risk' systems (carrying no obligations). The draft now also covers generative AI solutions as so-called 'foundation models.'
Violations can now be fined a maximum of €40 million or up to 7% of a company's total worldwide annual turnover, whichever is higher.
The newly adopted text, amending the first draft of April 2021, will serve as the EU Parliament's negotiating position during the trilogue procedure with the Council of the EU and the European Commission. An agreement could be reached by the end of 2023, entering into force in mid-2024. Then, it could be directly applicable in 2026, with a 24-month transition period.
Focusing on employment, the draft AI Act classifies several use cases under its second category, high-risk AI systems. These systems will have to be registered in an EU database and encompass:
- biometric identification and categorization of natural persons;
- education and vocational training; and
- employment, worker management, and access to self-employment.
In particular, the following requirements apply to high-risk AI systems:
- mandatory ex-ante conformity assessment;
- a fundamental rights impact assessment; and
- obligations in the fields of risk management, testing, technical robustness, data training, data governance, transparency, human oversight, and cybersecurity.
Anti-discrimination directives
Several anti-discrimination legislative acts aim to protect individuals from discrimination and promote equality in various areas of life. The key EU anti-discrimination directives that also affect the employment sector primarily concern race and ethnic origin, gender (including equal treatment and equal pay), disability, sexual orientation, age, and religion and belief.
There are further important anti-discrimination legal norms, including Article 157 of the Treaty on the Functioning of the European Union (TFEU) and Article 21 of the EU Charter of Fundamental Rights.
Further legislative acts
- Copyright and Related Rights Directive (Directive (EU) 2019/790): EU copyright currently only protects the results of human intellectual effort. This means that purely AI-generated output is generally not protected, while input training data may be subject to copyright.
- Directive on the Protection of Trade Secrets (Directive (EU) 2016/943).
- Whistleblowing Directive (Directive (EU) 2019/1937).
- Digital Services Act (DSA) and Digital Markets Act (DMA): both regulate online platforms and online intermediaries. The DSA, in particular, addresses illegal content (e.g., hate speech) and will be fully applicable from February 17, 2024. Very large online platforms and search engines are subject to more stringent requirements, and violations can be fined up to 6% of the provider's worldwide annual turnover.
The DMA sets new rules for large online platforms designated as so-called gatekeepers (e.g., transparency, accountability, and online advertising rules) and has been applicable since May 2, 2023.
- Data Act and Data Governance Act (drafts): these drafts will cover manufacturers of connected products and providers of related services (mainly in the Internet of Things (IoT) sector), including data sharing, interoperability, and portability rules. A provisional agreement on the draft Data Act was reached on June 27, 2023, and it will not be applicable before 2025. The Data Governance Act (DGA) will be applicable from September 27, 2023.
- Directive on improving working conditions in platform work (draft): addresses working conditions in platform work (a form of employment using online platforms to solve specific problems or provide specific services in exchange for payment).
- Product Liability Directive (draft): including software and AI systems.
- Revision of the General Product Safety Regulation (applicable from December 13, 2024): addresses new challenges arising from digitization (new technologies and business models, in particular risks from cybersecurity and software).
- NIS and NIS 2 Directive: the Network and Information Security (NIS) directives oblige certain entities, including AI service providers, to certain cybersecurity rules. NIS 2 must be implemented by Member States by October 17, 2024.
- For the next mandate, the European Commission is also preparing a new legislative initiative on algorithmic management in the workplace, which is expected to build on the AI Act.
Both AI service providers and AI users face multiple challenges in ensuring compliance with EU and national law. In particular, it remains to be seen which further amendments will be made to the draft AI Act and whether they will enhance innovation. Many questions remain open, particularly those concerning the interplay between different legislative acts.
However, the potential to boost efficiency in the employment sector should still be harnessed in the context of trustworthy and responsible AI. To enable the deployment of AI, early involvement of legal counsel is key.
When offering and/or using AI solutions in the EU, it is recommended to:
- evaluate potential risks, encompassing both legal and factual aspects;
- comply with mandatory rules;
- monitor ongoing legislative initiatives;
- work closely with the data protection officer (DPO) and other stakeholders;
- provide training for employees on the issues;
- create an internal AI policy;
- monitor the use of AI.
Kirsten Ammon, Associate, Fieldfisher