EU: EDPB and EDPS joint opinion - the Draft AI Regulation from a privacy perspective
Following the European Commission's proposal for a regulation on artificial intelligence ('AI'), the European Data Protection Board ('EDPB') and the European Data Protection Supervisor ('EDPS') recently adopted a joint opinion to address the corresponding data protection implications. Notably, the EDPB and EDPS call for stronger data protection mechanisms, as well as the prohibition of intrusive forms of AI. This article considers these recommendations and the impact on businesses in this field, featuring insight from Dan Whitehead, Senior Associate at Hogan Lovells International LLP.
Key features of the AI regulation
Applying broadly to AI systems that have an impact within the EU, the draft regulation adopts a risk-based approach which categorises uses of AI that create: (i) an unacceptable risk; (ii) a high risk; and (iii) a low or minimal risk.
While the draft regulation prohibits certain AI systems considered to carry unacceptable risks, it primarily focuses on high-risk systems and establishes minimum requirements for the same.
At the same time, the draft regulation outlines specific obligations for developers of high-risk systems, as well as obligations for distributors, importers, and users. Providers of AI, for example, are obliged to conduct 'conformity' assessments prior to placing AI systems on the EU market.
In terms of enforcement, a European Artificial Intelligence Board ('EAIB') is envisaged to implement the regulation. Member States would also be expected to designate one or more national authorities to implement the regulation at national level, with the EDPS acting as the authority responsible for supervising EU institutions that fall within the scope.
Data protection implications
In this context, the EDPB and EDPS expressed in their joint opinion that they welcomed the Commission's proposal. Indeed, they cited the impact of AI in future decision-making within business and public policy, as well as the need to regulate its use, not least in ensuring transparency, accountability, and human control.
More importantly, the EDPB and EDPS highlighted that personal data is the 'key premise' underpinning autonomous decision-making. The use of such data therefore creates risks to individuals' rights and freedoms, including the right to private life and the protection of personal information.
EDPB and EDPS recommendations
Aligning with the GDPR
According to Dan Whitehead, "at the heart of the joint opinion appears to be the view from the EDPB and EDPS that privacy concerns are not being adequately addressed by the proposal, and there needs to be greater alignment between the EU's existing data protection framework and the future AI regulation. They propose addressing this through two key measures: (i) integrating the data protection principles and the requirement to protect fundamental rights into the proposal; and (ii) appointing data protection authorities as the national supervisory authorities under the regulation."
More specifically, the EDPB and EDPS recommend that the Commission should clarify the relationship between the processing of personal data and the development and use of AI, namely by making the application of the General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR') and other data protection frameworks explicit in the proposal.
In order to direct the regulation of AI towards protecting individuals, the EDPB and EDPS also recommend that the Commission align the definition of 'risk' under the draft regulation with 'risk to fundamental rights,' as is the case under the GDPR.
Additional measures put forward by the EDPB and EDPS to enhance this relationship include:
- requiring compliance with existing data protection frameworks as a precondition to market entry;
- incorporating principles of data minimisation and Privacy by Design into the certification process;
- introducing safeguards against bias, such as human oversight; and
- reinforcing the rights of data subjects, including in restricting processing, requesting erasure, and being informed of automated decision-making.
In terms of assessing risk and whether an AI system is classified 'high risk,' the EDPB and EDPS found the proposed framework to be generally 'insufficient.'
Whereas the onus of performing risk assessments is placed on providers of AI technologies, the EDPB and EDPS advised that this should not exclude subsequent assessments carried out by the users of such technologies.
In particular, they noted that a Data Protection Impact Assessment under Article 35 of the GDPR should be considered an additional but separate means of assessing risk. Such assessments should examine the technical characteristics of AI systems, as well as the specific use cases and context in which the system operates.
Prohibiting intrusive forms of AI
In terms of prohibited AI practices, the EDPB and EDPS commented that the prohibitions contemplated by the draft regulation are very limited in nature, only 'paying lip service' to general values without further qualification.
To remedy this, the EDPB and EDPS call for the prohibition of the use of AI for:
- social scoring;
- the automated recognition of human features in publicly available spaces; and
- categorising individuals with biometric data according to ethnicity, gender, political or sexual orientation, or other grounds for discrimination.
The future for AI
In light of the EDPB and EDPS' joint opinion, Whitehead indicated, "if the Commission was to adopt the recommendations that have been put forward, then this would have a mixed result for organisations who are developing and using AI technologies.
On the one hand, the EDPB and EDPS are suggesting that several aspects of the proposal be further strengthened. However, on the other, there are a number of compelling ideas relating to improved harmonisation, such as through the introduction of a GDPR-style one-stop-shop for cross-border enforcement. A one-stop-shop mechanism should benefit companies who operate across the EU by avoiding disparate supervision across different jurisdictions."
Whitehead further noted, "some examples of where the recommendations in the joint opinion may impact businesses include:
- introducing an outright prohibition on the use of remote biometric identification systems (e.g. facial recognition) in publicly accessible spaces;
- introducing an additional prohibition on the use of social scoring for private companies;
- expanding the list of high-risk applications to include AI systems that are used to determine insurance premiums, assess medical treatments, and for health research purposes; and
- placing a new obligation on users of AI systems to undertake detailed risk assessments with respect to specific AI systems, rather than the emphasis being solely on the provider."
According to Whitehead, "it is worth noting that the joint opinion has no binding effect on the Commission, but each of the recommendations (including concerns regarding the role of the EAIB) will be considered as part of the wider set of responses received from EU institutions and private parties during the consultation phase."
More generally, the EDPB and EDPS recommend that the Commission should consider clarifying the roles and responsibilities of stakeholders across the AI value chain, whether user, provider, importer, or distributor of an AI system.
The public consultation for the Commission's proposal is set to end on 6 August 2021, after which the outcome will be presented to the Council of the European Union and the European Parliament.