International: AI, privacy, and security - part one: Exploring the role of AI in the global workplace

In part one of our Global workplace AI, privacy, and security - technical and practical implications series, Dr. Paolo Balboni, Noriswadi Ismail, Davide Baldini, and Kate Francis, from ICT Legal Consulting, delve into the growing influence of artificial intelligence (AI) in areas such as recruitment, talent management, and cybersecurity. The exploration highlights not only the advantages of AI but also the crucial need for ethical implementation, uncovering the technical and practical implications that are shaping the future of work.

Appetite for AI in the workplace

AI systems are increasingly being integrated into various aspects of the workplace, transforming how businesses select and manage their workforce. In the last few years, AI has found the workplace to be one of its most fertile grounds for adoption. In particular, employers report that improving employees' performance and reducing staff costs are the main drivers behind the adoption of AI systems.

AI systems are currently being deployed in the workplace for multiple purposes, most notably recruitment, talent management, process automation, and cybersecurity improvement.

When AI systems are used for recruitment and hiring, they can support human recruiters with resume screening, i.e., analyzing resumes to identify qualified candidates by matching skills and experience against specific job requirements. AI-powered chatbots can handle initial candidate interactions, answer questions, and schedule interviews to streamline the recruitment process. These types of AI systems are especially useful for companies, such as multinationals, that routinely receive and process volumes of applications that would be challenging to manage manually.
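To make the matching idea concrete, below is a minimal, hypothetical Python sketch of keyword-based resume screening. Real recruitment systems rely on far more sophisticated natural language processing; all names, fields, and thresholds here are illustrative assumptions, not a description of any particular product.

```python
# Hypothetical illustration: a keyword-overlap resume screener.
# Production systems use NLP models; this only shows the matching idea.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    skills: set[str]          # skills extracted from the resume
    years_experience: int


def screen(candidates: list[Candidate],
           required_skills: set[str],
           min_years: int) -> list[tuple[Candidate, float]]:
    """Score each candidate by the share of required skills covered,
    keeping only those who meet the minimum experience threshold."""
    shortlist = []
    for c in candidates:
        if c.years_experience < min_years:
            continue
        coverage = len(c.skills & required_skills) / len(required_skills)
        shortlist.append((c, coverage))
    return sorted(shortlist, key=lambda pair: pair[1], reverse=True)


# Example: two applicants screened against an invented job profile.
pool = [
    Candidate("A. Rossi", {"python", "sql", "gdpr"}, 4),
    Candidate("B. Chen", {"java", "sql"}, 6),
]
print(screen(pool, required_skills={"python", "sql"}, min_years=3))
```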

In the context of talent management, AI systems can assist managers in evaluating employee performance by analyzing data on individual and team achievements, providing insights for performance appraisals and feedback. Moreover, AI may help to identify skills gaps within an organization, allowing for targeted training and development programs to enhance employee skills, or even for singling out high-potential employees who are suitable for leadership roles. AI systems may also assist in tailoring training programs to individual employee needs, optimizing the learning experience, and improving knowledge acquisition and retention.

Concerning process automation, AI-powered bots can automate repetitive and rule-based tasks, reducing the burden on employees and freeing them up to do more meaningful work, thereby increasing overall efficiency. The recent rise and rapid improvement of generative AI has also made it possible to extend the automation of repetitive tasks, previously confined largely to the 'blue-collar' sector, to white-collar professions.

AI is also playing a crucial role in improving cybersecurity at work, in particular by enhancing data loss prevention (DLP) systems and thus providing more advanced and effective means of identifying, preventing, and responding to potential data breaches. This is typically made possible by leveraging behavioral analytics, that is, analyzing and understanding normal user behavior within an organization. By establishing a baseline of typical activities, the system can then identify anomalies that may indicate unauthorized access or data exfiltration. Moreover, AI systems can detect unusual patterns of data access or usage that may signal a security threat. This includes recognizing atypical login locations, access times, or data transfer volumes which can help in identifying potential insider threats or compromised accounts.
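As a simplified illustration of the baselining logic described above (production DLP tools use far richer behavioral models), the following hypothetical Python sketch flags data-transfer volumes that deviate sharply from a user's historical baseline using a simple z-score rule; the numbers are invented:

```python
# Hypothetical sketch of behavioral-analytics anomaly detection for DLP:
# learn a per-user baseline, then flag transfers far above it.
import statistics


def flag_anomalies(baseline_mb: list[float],
                   new_transfers_mb: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Return transfer volumes more than `threshold` standard deviations
    above the user's historical mean (assumes a non-zero baseline spread)."""
    mean = statistics.mean(baseline_mb)
    stdev = statistics.stdev(baseline_mb)
    return [x for x in new_transfers_mb if (x - mean) / stdev > threshold]


# A user who normally moves ~50 MB per day suddenly transfers 900 MB.
history = [48.0, 52.0, 50.0, 47.0, 53.0]
print(flag_anomalies(history, [51.0, 900.0]))  # -> [900.0]
```

The same pattern generalizes to login times, locations, and access frequencies: establish what is normal, then surface statistically unusual deviations for human review.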

In summary, incorporating AI in the workplace can lead to more efficient and data-driven decision-making, improved employee experience, and enhanced cybersecurity. However, it is essential to implement AI ethically, not only to ensure compliance with current and forthcoming legislation but also to build trust among employees. In the next section, we will provide an overview of the typical risks that the use of AI in the workplace poses to the rights and freedoms of workers.

Typical risks for the rights and freedoms of workers

AI systems that are deployed in the workplace normally leverage the personal data of candidates and/or workers in order to achieve the purposes of the employer. As a result, the workers' fundamental rights to privacy and the protection of personal data are potentially impacted by the use of such systems. In order to address and manage privacy risks, organizations wishing to use AI systems in the workplace must therefore make sure that the relevant processing of workers' personal data is in line with the internationally recognized privacy principles of lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, integrity, and confidentiality. Moreover, the organization must have procedures in place to effectively respond to data subjects' requests, such as those concerning the rights of access, to information, and to object. Such rights can be exercised by candidates and employees to understand which data are processed, why, and how, including the logic behind possible profiling activities carried out by way of AI systems, and to oppose, and require human intervention with respect to, fully automated processing activities that produce legal effects concerning them or similarly significantly affect them.

When such systems are used to make decisions concerning individuals, which might be the case especially in the context of recruitment and talent management, Article 22 and Recital 71 of the General Data Protection Regulation (GDPR) are considered to be the global benchmark with regard to the main safeguards that organizations need to implement in order to protect the rights and freedoms of workers. In particular, the following four principles may be drawn from the provision:

  • transparency: individuals must be made aware that they are subject to algorithmic decision-making;
  • redress: individuals must be able to express their point of view and to contest the decision;
  • human intervention: individuals must have the right to request and obtain meaningful human intervention; and
  • non-discrimination: the decision-making process must not have a disparate and unjustified impact on individuals who are part of a category protected under anti-discrimination law.

Algorithmic discrimination is in fact a risk that is closely connected with the deployment of AI systems in the workplace, especially when the AI system is used to evaluate an individual (e.g., fitness for a job or work performance) or to make decisions concerning them. Such risks may typically arise when the datasets used to train the relevant machine learning system are biased, for example, due to the under-representation of categories that are protected under anti-discrimination law, such as ethnic minorities, persons with disabilities, or LGBTQIA+ individuals.
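One widely used heuristic for surfacing such disparate impact, reflected for example in the bias-audit practice that has grown up around New York City's Automated Employment Decision Tool rules, is the 'four-fifths rule', which compares selection rates between groups. A minimal illustrative Python sketch follows; the numbers are invented, and the rule is a screening heuristic, not a legal safe harbor:

```python
# Hypothetical disparate-impact screen: the "four-fifths rule" compares
# the selection rate of a protected group against the most-favored group.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants


def four_fifths_check(rate_protected: float, rate_reference: float) -> bool:
    """True if the protected group's selection rate is at least 80% of
    the reference group's rate; False signals potential disparate impact."""
    return rate_protected / rate_reference >= 0.8


# Example: 20/100 applicants selected from the protected group vs. 40/100.
r_protected = selection_rate(20, 100)   # 0.20
r_reference = selection_rate(40, 100)   # 0.40
print(four_fifths_check(r_protected, r_reference))  # False -> investigate
```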

Aside from unlawful discrimination, workers may also risk being treated unfairly when decisions or evaluations concerning them are taken via AI systems, for example when the relevant algorithm relies on biased or non-pertinent data, thereby violating the reasonable expectations of the individuals concerned. This would in turn not only undermine workers' trust in the organization, but could also lead to violations of the applicable labor legislation or collective agreements in place. In this respect, safeguards such as meaningful human oversight, implemented by organizations to curtail the risks outlined above, may also prove useful in addressing risks of unfair treatment.

Further risks to the rights and freedoms of workers may arise where AI systems that process personal data are subject to a data breach, especially where such data falls into the hands of malicious third parties. In this respect, it is of fundamental importance that AI systems are adopted and operated in line with established cybersecurity best practices, as reiterated by the Executive Director of the European Union Agency for Cybersecurity (ENISA), Juhan Lepassaar: "Cybersecurity is the foundation of trustworthy Artificial Intelligence solutions. It will serve as a springboard for the widespread secure deployment of AI […]."

AI risk governance: A holistic and global dimension

In light of the numerous and diverse hazards highlighted in the previous section, it is apparent that AI risk governance needs to be addressed in line with a holistic approach. This should take into account not only 'traditional' privacy and data protection risks connected with the processing of personal data inherent to the use of AI, but also cybersecurity concerns that are specific to AI systems. Organizations should also take into account the risk of unlawful algorithmic discrimination and, more generally, the unfair treatment of workers. It is relevant to point out that specific provisions at the EU Member State level must always be considered to ensure that the deployment of AI in the workplace is compliant. Accordingly, multinational companies will need to comply with multiple pieces of legislation applicable in each of the countries where they operate. As a result, they will be subject not only to various pieces of EU-level sectoral legislation (data protection, cybersecurity, as well as the nascent field of specific AI legislation) but also to overlapping and not always consistent national laws (especially non-discrimination and labor law).

In light of the above, to address the multiple risks and applicable laws connected to AI, multinational companies should consider developing and adopting a policy on the use of AI in the workplace. The policy should be both holistic, i.e., capturing the legal, cybersecurity, and ethical domains of AI governance, and global, i.e., taking stock of the requirements stemming from multiple jurisdictions. In order to do so, organizations should take into account existing sources aimed at regulating AI which, as of today, most notably include:

  • The EU Commission's Ethics by design Guidelines for use of Artificial Intelligence;
  • The OECD recommendations on Artificial Intelligence;
  • New York City's Automated Employment Decisions Tool regulation;
  • Canada's Guide on the Use of Generative AI systems;
  • Singapore’s Model AI Governance Framework Second Edition;
  • ICO's Guidance on AI and Data Protection;
  • CNIL's Guidance on AI;
  • ENISA's Cybersecurity of AI and Standardisation;
  • EU Publication Office's Cybersecurity of Artificial Intelligence in the AI Act; and
  • Japan's Governance Guidelines for Implementation of AI Principles.

The above sources, which mostly encompass soft-law instruments, provide an extensive overview of the current state-of-the-art requirements for AI risk governance in the workplace. The development of a policy based on these instruments will not only be an effective tool to govern current AI risks but will also allow organizations to anticipate most obligations that will stem from forthcoming hard-law instruments specifically aimed at regulating AI, such as the EU's AI Act, whose definitive approval is expected in early 2024. Ultimately, organizations that have developed and operationalized such a policy will gain an edge over competitors who have not done so in a timely fashion.

Following this approach will not only help organizations achieve compliance with applicable legislation but will also protect them against cybersecurity threats and build trust among their workforce.

The EU at the forefront of AI regulation

With the forthcoming adoption of the AI Act, the EU legislator aims to become the first to systematically regulate AI by means of a hard-law instrument. In doing so, the EU is aiming to set the global gold standard for AI regulation, paving the way for other countries to regulate AI based on the EU model, similar to what has happened in the data protection field with the GDPR.

Aside from the forthcoming AI Act, and as noted in the section on typical risks above, it should be borne in mind that the GDPR already applies to AI systems that process personal data, as recently shown by the Italian Data Protection Authority's (Garante) decision on ChatGPT.

Against this background, multinational companies should focus their attention on both current and forthcoming EU legislation in order to anticipate worldwide legislative developments and gain an edge over competitors.

Although the AI Act is still undergoing adoption at the time of writing, a political consensus on its core obligations has already formed, so organizations may start planning their compliance actions accordingly. In particular, organizations should carry out internal evaluations aimed at categorizing the risk level of the AI systems that they plan on producing, distributing, and/or using, in order to understand whether such systems are captured under the scope of the law. This will, in turn, enable the organization to carry out a gap analysis regarding compliance with the AI Act's requirements.

In this respect, it is noteworthy that AI systems used in the workplace are classified as high-risk under the latest draft of the law, and are therefore subject to the bulk of its obligations, including the performance of Fundamental Rights Impact Assessments (FRIAs) prior to deploying such systems. Moreover, organizations should consider whether the AI system is outright prohibited, or whether it is necessary to provide meaningful transparency for workers who interact with user-facing AI (such as chatbots).
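A hypothetical sketch of such an internal categorization exercise is shown below. The tier names track the AI Act's risk-based structure, but the mapping of concrete use cases to tiers is purely illustrative and must be verified against the final text of the law:

```python
# Illustrative triage of workplace AI systems against AI Act risk tiers.
# The use-case-to-tier mapping below is an assumption for the sketch only.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited practice - may not be deployed"
    HIGH = "high-risk - full obligations, incl. FRIA before deployment"
    TRANSPARENCY = "limited risk - disclose the AI interaction to workers"
    MINIMAL = "minimal risk - voluntary codes of conduct"


def triage(use_case: str) -> RiskTier:
    if use_case in {"emotion recognition of employees"}:
        return RiskTier.PROHIBITED
    if use_case in {"cv screening", "performance scoring", "promotion decisions"}:
        return RiskTier.HIGH          # employment uses are high-risk
    if use_case in {"hr chatbot"}:
        return RiskTier.TRANSPARENCY  # user-facing AI must be disclosed
    return RiskTier.MINIMAL


for system in ["cv screening", "hr chatbot", "spell checker"]:
    print(f"{system}: {triage(system).value}")
```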

Concerning GDPR compliance, as observed above, organizations deploying AI systems in the workplace must take into account all the obligations provided by the GDPR, including the provision of all necessary information to workers, facilitating the exercise of data subjects' rights, and reliance on an appropriate legal basis. On this last point, it should be borne in mind that employee consent is typically considered to be an invalid legal basis, due to the power asymmetry between the employer and the employee. Organizations wishing to deploy AI systems in the workplace should therefore consider grounding the relevant processing of personal data on other legal bases, such as legitimate interest or performance of a contract.

As observed above in the section on typical risks for the rights and freedoms of workers, AI systems used to make decisions concerning workers must also be used in compliance with Article 22 of the GDPR, which regulates automated decision-making.

Notably, organizations are also required to perform a Data Protection Impact Assessment (DPIA) under Article 35 of the GDPR before the relevant processing operation takes place, given that the use of AI systems in the workplace is typically considered to be a high-risk activity for the rights and freedoms of data subjects, in light of the innovative nature of the technology and the vulnerable status of workers vis-à-vis their employer. In this respect, the AI Act and the GDPR must be read in conjunction: under the latest draft of the AI Act, the FRIA and the DPIA must be performed together, and the DPIA must be included as an addendum to the FRIA. Moreover, it is advisable to integrate at this stage the most appropriate Cybersecurity Impact Assessment (CIA), to make sure that possible cybersecurity risks will be mitigated with full respect for data protection and other potentially impacted fundamental rights:

DPIA + FRIA + CIA = Integrated holistic risk management and compliance
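As one purely illustrative way of operationalizing this formula, an organization could track the three assessments as a single integrated dossier per AI system. The structure and field names below are assumptions made for the sketch, not a prescribed template:

```python
# Hypothetical integrated assessment record bundling DPIA, FRIA, and CIA.
from dataclasses import dataclass, field


@dataclass
class AssessmentModule:
    name: str                  # "DPIA", "FRIA", or "CIA"
    risks: list[str]
    mitigations: list[str]
    completed: bool = False


@dataclass
class IntegratedAssessment:
    ai_system: str
    modules: list[AssessmentModule] = field(default_factory=list)

    def ready_for_deployment(self) -> bool:
        """All three modules must be present and completed."""
        done = {m.name for m in self.modules if m.completed}
        return {"DPIA", "FRIA", "CIA"} <= done


dossier = IntegratedAssessment("resume-screening tool")
dossier.modules.append(AssessmentModule(
    "DPIA", ["profiling of candidates"], ["human review"], completed=True))
print(dossier.ready_for_deployment())  # False until FRIA and CIA are done
```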

Compliance with national labor legislation must also be duly taken into account by the organization. In this respect, Article 88 of the GDPR allows Member States to provide more specific rules to protect employees' rights and freedoms in respect of the processing of their personal data in the employment context. Accordingly, as already explained above, organizations must always check whether national legislation provides for additional obligations compared to the GDPR when it comes to the processing of workers' personal data.

The importance of ESG and sustainability in AI deployment

In the current data-driven world, it is of increasing relevance that organizations implementing AI systems consider the wider social and ethical implications of such use. Fairness, ethics, and sustainability are becoming ever more important to people: the public cares about how organizations behave and the ways in which technology and data are used.

While the GDPR provides a solid framework for legal data protection compliance, it sometimes falls short when it comes to ensuring ethical behavior and the fair use of technologies. At the same time, ESG scores, which are widely consulted by investors and of which cybersecurity and privacy can account for almost a third, may be improved by adopting lawful and fair practices in the use of AI.

In order to ensure long-term success and improve ESG rankings, it is fundamental that organizations actively seek to adopt socially responsible behavior when deploying AI systems in the employment context. Complying with ethical principles for the sustainable, fair, trustworthy, and secure use of AI represents a challenge, albeit one that must be dealt with head-on for organizations to truly reap the benefits that AI has to offer. Frameworks such as the auditable Maastricht University Data Protection as a Corporate Social Responsibility Framework (UM-DPCSR), which specifically tackles the use of AI and provides concrete guidance and controls on how organizations can implement ethical AI, can serve as a useful resource for organizations, also with respect to ESG reporting.

Dr. Paolo Balboni Founding Partner
[email protected]
Noriswadi Ismail Global Board Member, Risk Quotient, and Consultant & Advisor
[email protected]
Davide Baldini Partner
[email protected]
Kate Francis Privacy & Ethics Researcher, Development & Communication Specialist
[email protected]
ICT Legal Consulting