
International: The importance of the new GPA resolution on generative AI systems - part two

In part one of this Insight article, Daniel Necz, Associate at Eversheds Sutherland, highlighted key considerations and regulatory frameworks essential to navigating the landscape of generative artificial intelligence (AI) systems. In part two, Daniel explores automated decision-making and data security in AI employment practices, offering insights on transparency, bias mitigation, and regulatory compliance for organizations.


AI has made a marked appearance in the workplace in recent years and has been used by many employers across industries ranging from food delivery to financial services and telecoms. Some fear that algorithms might gradually replace human personnel, or that the use of AI in the workplace may infringe the data protection rights of employees or lead to extreme forms of monitoring. It is beyond doubt, however, that AI in the workplace may also bring certain benefits when used in accordance with relevant laws and regulations, as it can help employees perform their work more efficiently and support workplace safety. A new resolution of the 45th Global Privacy Assembly (GPA) sheds light on the main challenges of AI in employment and provides measures to address them.

Data protection principles and transparency

The new GPA resolution highlights the importance of employers complying with data protection principles, such as necessity, proportionality, data minimization, and purpose specification and limitation, as well as the right not to be subject to a decision based solely or primarily on automated means. This means that employers may only use AI to collect and process the personal data of employees where doing so is necessary and proportionate to the given purpose of the processing, and the processing should be limited in scope and time to that purpose. For example, data collected for the purpose of monitoring secure use of and access to certain systems cannot also be used to assess work efficiency; employers are required to refrain from such 'function creep,' especially where the purpose or purposes of data processing by different AI tools are not communicated transparently to employees.

Under the principle of accountability, employers are further expected to take into account the reasonable expectations of employees and to mitigate and, to the extent possible, prevent risks to the rights and freedoms of employees, which may be undermined by undue monitoring or retaliatory use of the technology. For example, an employer may not use AI tools to dissuade employees from exercising their employment rights, such as freedom of association or the use of relevant whistleblowing channels to report unlawful activity.

In addition to data protection laws, employers also need to comply with relevant labor laws and human rights principles and frameworks (e.g., with respect to the right to privacy or non-discrimination). It is further noted that workers who do not have an employment contract or an employment relationship, as well as candidates who have not yet been employed, may also merit protection under relevant data protection laws such as the General Data Protection Regulation (GDPR). Bearing this in mind, employers are expected to clearly inform candidates and staff members without an employment contract about the use of different AI tools, for example, for screening applications or extending contracts.

In addition to informing employees about the collection and use of their personal data by AI, employers may further be required under relevant labor laws to engage in meaningful consultation with trade unions or labor representatives about the implementation of new technology for processing the personal data of employees.

Automated decision-making and data protection rights

Employees and candidates may exercise their right not to be subjected to a decision based solely or primarily on automated decision-making. If AI is used to make automated decisions about employees or candidates (such as decisions having legal effects or similarly significant effects, e.g., hiring, promotion, or termination), information must further be provided about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

Employers should ensure recorded, meaningful human review of employment decisions made by AI systems at the request of the employee, and employees may further express their point of view and contest the decision. Employers are also advised to specify appropriate redress mechanisms and complaint-handling procedures in their policies and to provide this information to employees.

In addition to data protection laws, labor laws may further address the use of various AI tools for automated decision-making or certain other automated activities. For example, the proposed EU directive on improving working conditions in platform work requires online platforms using automated monitoring and decision-making systems to use such systems transparently, to duly monitor their use in order to avoid harmful impacts on the physical and mental health of platform workers, and to ensure human review of significant decisions related to employment or the use of the platform (e.g., decisions affecting work assignment, earnings, or account suspension).

Algorithmic discrimination and bias

Algorithmic discrimination and bias are also among the major concerns surrounding the use of AI tools in the workplace, as improperly trained systems or unsupervised use of the technology can reinforce discriminatory trends or amplify the harmful effects of bias against members of vulnerable groups.

To avoid such harmful effects, where AI systems are developed and deployed in an employment context, it is important to ensure that the personal data used for training and developing the system are representative of the context in which the system will be used. Employers must further ensure regular updates and revisions of such systems (e.g., AI tools used for hiring new applicants or for promotions) to avoid harmful effects.

Data security and impact assessment

In order to ensure the security of personal data collected through AI systems (e.g., systems used to allocate work or score employees), employers or other providers developing AI systems that collect personal data on employees or applicants are expected to comply with the principles of privacy by design and by default and to implement measures that effectively protect the personal data of such individuals.

Employers are further expected to take into account the potential harmful effects of the data processing on employees and applicants. In accordance with the GDPR and data protection authority practice, employers may further be required to undertake a data protection impact assessment, or a similar assessment under local laws, and to specify the measures implemented to prevent or minimize harmful effects on employees or applicants. It is noted that under the new EU Artificial Intelligence Act (AI Act), certain employers (especially public sector employers) may further be expected to undertake a fundamental rights impact assessment, focusing on the potential impact of the given AI system on the fundamental rights of affected individuals as well as the protective measures and solutions put in place to prevent or minimize risks and harmful effects.

What to be on the lookout for?

Employers processing the personal data of employees through different AI tools and solutions must ensure that they comply with data protection principles, transparency requirements, and relevant labor laws and human rights principles and frameworks. Under such principles and requirements, employers should only collect personal data from employees that are necessary and proportionate with respect to the legitimate purpose of the processing. Tools used for covert monitoring, facial recognition, or the assessment of work performance and quality may violate such principles or be subject to more stringent requirements.

When implementing AI systems used for automated decision-making, employers are expected to provide additional information concerning the logic involved in the system as well as the potential consequences for employees. Measures further need to be put in place to prevent bias. These may include relying on data representative of the context in which the system is used; ensuring human oversight, revision, and updating of relevant decisions and output; and introducing policies that clearly describe the rights employees have in respect of AI systems used for automated decision-making, including the right to redress.

Employers should also carefully consider the data security measures they apply to protect the personal data of employees processed by AI systems and assess the potential impact of using such systems, especially where sensitive data (such as health data or biometric data) are involved. Comprehensive data security policies and assessment documentation may need to be prepared in this respect, coherently reflecting the employer's data protection and data security compliance.

Companies employing gig workers, students, contingent workers, or other individuals without an employment relationship or employment contract also need to comply with data protection requirements and may be subject to certain workplace requirements such as ensuring workplace safety. Trade unions or labor representatives should also be consulted, where appropriate, concerning the implementation of the given AI system in the workplace.

Companies using AI systems in the workplace may need to comply with a variety of employment, data protection, human rights, and technological requirements, potentially including sector-specific guidelines and best practices. In order to confidently navigate this complex legislative landscape and avoid the risks of non-compliance, businesses are advised to consult legal experts with extensive experience in both employment and data protection laws, as well as in AI and data security requirements and best practices.

Daniel Necz Associate
[email protected]
Eversheds Sutherland (International) LLP, Dublin