
EU: Assessments under European legislation: GDPR vs. the AI Act

Six years after the General Data Protection Regulation (GDPR) entered into application, covered organizations have become well accustomed to Data Protection Impact Assessments (DPIAs). Seasoned privacy professionals have certainly taken part in many discussions about the difference between DPIAs and Privacy Impact Assessments (PIAs), and whether there is, or should be, any difference at all.

At a time when everyone is talking about artificial intelligence (AI) and the upcoming EU AI Act (the AI Act), organizations are turning to their privacy experts to see whether this new legislative and regulatory focus will lead to a similar level of compliance work (and expense). In particular, they are wondering whether the AI Act's Conformity Assessments (CAs) and Fundamental Rights Impact Assessments (FRIAs) will find their way into every organization's compliance framework.

In this article, Maarten Stassen, of Crowell & Moring LLP, compares the GDPR's DPIAs with the AI Act's CAs and FRIAs, considering their key practical considerations and impact on organizations.


DPIA vs. PIA

First, we have to address the difference between DPIAs and PIAs. While many privacy professionals use the terms 'privacy' and 'data protection' interchangeably, it makes sense to draw a clear distinction between Privacy and Data Protection Impact Assessments, just as it does between privacy officers and data protection officers.

Indeed, DPIA and data protection officer are defined terms with very specific compliance requirements under the GDPR, and their incorrect use might lead to confusion and, in the case of the officer, even to a possible misrepresentation of the level of protection that is offered.

PIAs are, therefore, often used for assessments that are not limited to a specific legal framework and their approach and content are thus less prescriptive. Furthermore, a clear distinction between the two types of assessments helps prevent confusion if they result in a residual high risk. Indeed, if it is a 'real' DPIA, the controller should consult the supervisory authority and, more importantly, wait for its feedback, with the uncertainties (and delays) that this may entail.

Organizations therefore often prefer to limit GDPR-specific DPIAs to when they are legally required, i.e., where a type of processing is likely to result in a high risk to the rights and freedoms of natural persons.

DPIAs

In practice, though, the compliance effort is much greater than this seemingly high threshold of 'high risk' suggests: in order to assess (and thus be able to exclude) whether there is a high risk, the controller needs to have the following (a simple checklist sketch follows the list):

  • a systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller;
  • an assessment of the necessity and proportionality of the processing operations in relation to the purposes; and
  • an assessment of the risks to the rights and freedoms of the natural persons concerned.

These are three of the four elements that a 'full' DPIA must at least contain.
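Purely by way of illustration, those minimum elements can be captured as a simple checklist. The sketch below is a hypothetical Python structure; the field names are our own shorthand, not terms used in the GDPR.

```python
from dataclasses import dataclass

# Illustrative sketch only: a hypothetical checklist of the minimum DPIA content
# under Article 35(7) GDPR. Field names are our own shorthand, not legal terms.
@dataclass
class DPIAChecklist:
    systematic_description: str       # envisaged processing operations and purposes,
                                      # including any legitimate interest pursued
    necessity_proportionality: str    # necessity and proportionality in relation to the purposes
    risk_assessment: str              # risks to the rights and freedoms of the persons concerned
    mitigating_measures: str          # the fourth element: measures envisaged to address the risks

    def is_complete(self) -> bool:
        """A 'full' DPIA documents all four elements."""
        return all([self.systematic_description,
                    self.necessity_proportionality,
                    self.risk_assessment,
                    self.mitigating_measures])
```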

DPIAs are thus a core element of every GDPR compliance program. Next, we will look into whether CAs and FRIAs will have a similar impact on compliance programs under the AI Act. For this assessment, we use the version of the AI Act that was approved by the European Parliament (as the final version has not been enacted yet).

CAs

A CA is defined as 'the process of demonstrating whether the requirements [for a] high-risk AI system have been fulfilled,' which already shows its rather limited scope.

A major difference with DPIAs is that it is not the user but the developer who needs to carry out the CA. Or, to use the precise terms, under the GDPR controllers are required to conduct DPIAs, while under the proposed AI Act it is the providers, not the deployers, who must carry out the CA prior to placing a high-risk AI system on the market or putting it into service.

So, while a CA will be very important for organizations developing AI systems, organizations using such AI systems won't have to conduct them.

It is relevant to mention that a provider who considers that an AI system in a 'typical' high-risk area is not high-risk should document its assessment and register the AI system before it is placed on the market or put into service. This assessment shall be made available to national competent authorities upon request.

The 'typical' high-risk areas referred to above are explicitly listed in the AI Act (a simplified decision sketch follows the list):

  • biometrics;
  • critical infrastructure;
  • education and vocational training;
  • employment, workers management, and access to self-employment;
  • access to and enjoyment of essential private services and essential public services and benefits;
  • law enforcement;
  • migration, asylum, and border control management; and
  • administration of justice and democratic processes.
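To make the allocation of this obligation concrete, the decision rule can be sketched in a few lines of Python. This is a simplification for illustration only: the area labels paraphrase the headings listed above, and the function and parameter names are hypothetical rather than taken from the AI Act.

```python
# Illustrative simplification: who carries the conformity assessment obligation?
# Area labels paraphrase the 'typical' high-risk areas listed above; names are hypothetical.

HIGH_RISK_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment, workers management and access to self-employment",
    "access to essential private and public services and benefits",
    "law enforcement",
    "migration, asylum and border control management",
    "administration of justice and democratic processes",
}

def conformity_assessment_required(role: str, area: str,
                                   documented_not_high_risk: bool = False) -> bool:
    """Return True if a CA must be carried out before the system is placed on the
    market or put into service.

    The obligation sits with the provider, not the deployer. A provider that has
    documented why a system in a high-risk area is nevertheless not high-risk
    must register the system and keep that assessment available to the national
    competent authorities, but (in this simplification) does not carry out a CA.
    """
    if role != "provider":
        return False
    return area in HIGH_RISK_AREAS and not documented_not_high_risk
```

For example, conformity_assessment_required('deployer', 'biometrics') returns False, reflecting that organizations merely using such systems do not carry out the CA themselves.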

FRIAs

The above does not mean that deployers should never carry out AI-related assessments. Deployers must conduct FRIAs when they are bodies governed by public law, are private entities providing public services, or when they deploy high-risk systems intended to be used:

  • to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; or
  • for risk assessment and pricing in relation to natural persons in the case of life and health insurance.

DPIAs and FRIAs

DPIAs and FRIAs are much more similar in nature than DPIAs and CAs as both assess the impact on the fundamental rights of individuals and both must be carried out by organizations using (vs. developing) technology.

While there are important differences, such as the need to describe the implementation of human oversight measures in an FRIA, their approach and content are quite similar too. In fact, the AI Act acknowledges that FRIA-specific obligations can already be complied with as a result of a DPIA.

FRIAs must contain at least the following (a hypothetical checklist sketch follows the list):

  • a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose;
  • a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by the system's use in the specific context;
  • the specific risks of harm likely to have an impact on the categories of persons or groups of persons identified above, taking into account the information that the provider has to give;
  • a description of the implementation of human oversight measures, according to the instructions for use; and
  • the measures to be taken where those risks materialize, including the arrangements for internal governance and complaint mechanisms.
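Again purely for illustration, these minimum elements can be mirrored in a checklist analogous to the DPIA sketch above; the field names below are hypothetical shorthand for the items just listed, not terms defined in the AI Act.

```python
from dataclasses import dataclass

# Illustrative sketch only: hypothetical shorthand for the minimum FRIA content
# listed above. Field names are our own, not terms defined in the AI Act.
@dataclass
class FRIAChecklist:
    deployer_processes: str             # processes in which the high-risk AI system is used,
                                        # in line with its intended purpose
    period_and_frequency: str           # how long and how often the system is intended to be used
    affected_persons: str               # categories of natural persons and groups likely affected
    specific_risks_of_harm: str         # taking into account the information given by the provider
    human_oversight_measures: str       # implementation according to the instructions for use
    measures_if_risks_materialize: str  # internal governance arrangements and complaint mechanisms

    def missing_items(self) -> list[str]:
        """Return the names of any elements that are not yet documented."""
        return [name for name, value in self.__dict__.items() if not value]
```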

An important difference is that the results of an FRIA always have to be reported to the market surveillance authority, while for DPIAs this is only necessary when they result in a residual high risk.

Conclusion

Since CAs and FRIAs must be conducted by providers and a limited number of deployers, respectively, can we conclude that the EU AI Act might not be as burdensome in terms of compliance as expected for organizations that use AI in their business operations?

Unfortunately not. The regulatory focus on AI requires organizations to better understand the digital ecosystem in which they operate. With some AI systems being banned and others being subject to specific requirements, organizations should be able to show that they fully understand the type of technology used, both by themselves and their service providers.

In that sense, DPIAs, CAs, and FRIAs have something important in common: just because you are not obliged to conduct them does not mean the underlying processing operations should not be on your risk radar. In fact, the opposite is true: because the GDPR and the AI Act offer specific frameworks for their respective assessments, there are well-organized methods for gathering the right information and making informed, risk-based decisions. The availability of those methods can, and most certainly will, be taken into consideration when an organization is held responsible for the impact or effects of the technology it uses.

Maarten Stassen Partner
[email protected]
Crowell & Moring LLP, Brussels