Support Centre

You have out of 5 free articles left for the month

Signup for a trial to access unlimited content.

Start Trial

Continue reading on DataGuidance with:

Free Member

Limited Articles

Create an account to continue accessing select articles, resources, and guidance notes.

Free Trial

Unlimited Access

Start your free trial to access unlimited articles, resources, guidance notes, and workspaces.

International: The importance of the new GPA resolution on generative AI systems - part one

Generative artificial intelligence (AI) has emerged as a ground-breaking technology, raising creativity to a new level and testing the limits of various fields of law such as intellectual property and data privacy. With respect to the privacy compliance of generative AI systems, a new resolution of the 45th Global Privacy Assembly (GPA) puts forward important insights and explores relevant issues, challenges, and approaches to be followed by providers of such systems where they engage in processing personal data. In part one of this Insight article, Daniel Necz, Associate from Eversheds Sutherland, highlights key considerations and regulatory frameworks essential to navigating this evolving landscape.

Jian Fan / Essentials collection / istockphoto.com

Lawful basis for processing

The GPA highlights the importance of having a lawful basis for processing personal data by generative AI systems interacting with individuals and providing content (e.g., text, images, or videos). A separate legal basis may need to be relied on for training and deployment of the given system.

Different legal bases will usually need to be relied on in cases where users registered to use the given solution provide their data to the business providing such solutions, and in cases where personal data of other data subjects are collected and used who have not signed up for the given service. In the first case, the contract concluded with such a user, or the consent of the user may likely be relied on, whereas in the latter case, another legal basis such as legitimate interest may be more relevant (if appropriate).

Principles and data protection rights

The GPA further highlights the importance of complying with principles of data protection such as purpose specification, use limitation, and data minimization under which personal data can only be collected and used by generative AI systems for a legitimate purpose without processing personal data indiscriminately.

Accuracy is also a key aspect of generative AI as such systems usually rely on vast amounts of datasets for training, testing, and validating data. This also means however that the output of the system would largely depend on the quality and representativeness of such data. Relying on inaccurate or inappropriate information could reinforce discriminatory practices or lead to other harmful consequences, so measures must be taken to exclude false, misleading, inappropriate, or irrelevant information from the pool of training data. Appropriate measures to prevent inaccurate results and algorithmic bias include data governance procedures or technical safeguards such as filters and periodic review of relevant parameters and processing techniques, sources, and sets of data relied on.

Transparency is a further key aspect of using generative AI systems, especially with respect to how, when, and why training data are collected and used. Providers of such systems must also inform persons affected about potential risks concerning the system and address them in their relevant policies and practices. Documentation should also be kept about the sources of datasets as well as about modification, filtering, and other relevant practices to make sure that individuals affected by the use of the given system understand relevant privacy risks and how they are addressed. In respect of generative AI, it is further recommended to provide clear information to users on the risks and impact of providing personal data of others (e.g., persons mentioned in email drafts or memos uploaded in the given solution) since these persons may be less aware of the use of their personal data by the given solution compared to registered users.

Providers should further ensure that individuals affected by the use of generative AI systems may exercise their data subject rights. Such rights include the right to access personal data, the right to rectify inaccurate personal data or to erase data, as well as the right not to be subject to automated decisions that result in a significant effect on the individuals. Ensuring that such rights may be adequately exercised is especially relevant in cases where sensitive information is collected and processed by generative AI systems or where the use of such systems affects members of vulnerable groups such as children.

Privacy by design and default

In addition to the above, providers of generative AI systems further need to put in place effective security measures in the design, conception, and operation of such systems. Such measures may include traditional and specific cybersecurity controls (e.g., against indirect prompt injection attacks) and focus on preventing model inversion attacks and ensuring that adequate privacy safeguards are put in place. The risk of misuse should further be assessed and mitigated. Such risks may include, for example, the use of the given system to create deep fakes, orchestrate phishing attacks, or use for other harmful purposes such as harassment.

The GPA further confirms that in order to effectively address potential privacy risks related to the development and use of generative AI systems, providers of generative AI systems should conduct a data protection and privacy impact assessment at every stage of the lifecycle of the system (e.g., development, introduction to the market, significant modification). It is noted in this respect that various impact assessment obligations may arise for businesses developing and providing generative AI systems such as the data protection impact assessment under the EU General Data Protection Regulation (GDPR) and the fundamental rights impact assessment under the new EU Artificial Intelligence Act (AI Act). Providers of generative AI systems may create, for example, a consolidated impact assessment documentation with multiple chapters or a chain of cross-referenced impact assessment documents depending on the relevant industry, legislation, system, and scope of data used.

It is noted with respect to the above that providers of generative AI systems are responsible for complying with various privacy, data protection, and AI laws and requirements depending on their role in the AI model supply chain and the scope of data used. This also means that such actors need to maintain appropriate technical documentation and policies that describe how their models work, how data are collected and used, and what measures are put in place to address relevant risks. External audits by relevant experts and teams (such as red teaming) could also play an essential role in demonstrating accountability.

What to be on the lookout for?

Despite the fact that the GDPR and the new EU AI Act put forward a wide number of requirements that may be relevant for providers of generative AI systems, there are still numerous questions that may arise in respect of the collection and use of personal data by such systems as AI continues to question long-standing data protection concepts and principles developed in preceding decades. It is uncertain, for example, how the principles of purpose specification, use limitation, or data minimization can be interpreted in the case of AI systems that collect information from the internet and various other public sources, especially in the case of multimodal generative AI systems which use data to create texts, images, videos, and other types of content for various purposes. Guaranteeing transparency and explaining the outputs of the given system can further have their limitations, especially with regard to the 'black box' problem which relates to the often-opaque nature of algorithmic decision-making.  

In respect of providing generative AI systems, more focus should also be put on complying with the principles of privacy by design and default as data protection authorities would likely focus in the near future on how organizations implement measures to protect the privacy of users or other third parties affected by generative AI systems. This is especially true since both the GDPR and the EU AI Act may require undertaking relevant assessment, the preparation of comprehensive documentation, and putting in place additional measures that protect the rights and freedoms of persons affected by the given solution. Bearing this in mind, businesses should especially assess the potential effects of the given solution on users and the general public, implement measures that prevent or mitigate such negative effects, and be transparent about such potential negative effects in their relevant policies. In cases where AI models are implemented by other actors in the supply chain or are significantly modified, the potential impacts of the solution on the processing of personal data may also have to be re-assessed. In such cases, it is highly recommended that companies put more focus on contractual terms regarding the provision and implementation of the given system, permitted use and modifications, as well as on relevant liability, data protection, and data security clauses.

Besides implementing appropriate technical measures to protect information or adequately monitor the system, businesses providing generative AI systems should also train key members of staff to make sure that such persons are aware of the risks of using the system and the steps that need to be taken to correctly address relevant risks.

As highlighted above, it is hard to dispute that generative AI is transforming businesses and various sectors rapidly. Businesses developing or relying on generative AI systems should therefore be aware of privacy-related risks and implement measures that adequately address those risks and help protect their users and other persons affected by the given system. To avoid potential legal pitfalls, it is highly recommended that businesses also involve legal experts knowledgeable in both legal and business aspects of providing generative AI systems.

Daniel Necz Associate
[email protected]
Eversheds Sutherland (International) LLP, Dublin

Feedback