Support Centre

EU

Summary

Law: General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR')

Regulator: The European Data Protection Supervisor ('EDPS') is the European Union's (EU) data protection authority and monitors privacy within EU institutions and bodies. The European Data Protection Board ('EDPB') is an independent European body composed of representatives of the national data protection authorities and the EDPS.

Summary: The GDPR was approved on 24 May 2016 and became applicable in the EU Member States from 25 May 2018. It has since inspired several other privacy laws around the world. The GDPR lays down rules relating to the processing of personal data aimed at protecting natural persons, as well as provisions on the free movement of personal data. The GDPR, although a European regulation, has a broad scope of application: it imposes direct statutory obligations on data processors and can affect controllers established outside the EU.

The EU has also established further pieces of legislation with substantive importance within the Digital Single Market. In particular, the Directive on Privacy and Electronic Communications (2002/58/EC) (as amended) ('the ePrivacy Directive') regulates the processing of personal data and the protection of privacy in the electronic communications sector, with specific reference to, among other things, the regulation of unsolicited communications and cookies and similar technologies. Furthermore, the Directive on Security of Network and Information Systems (Directive (EU) 2016/1148) ('the NIS Directive') establishes measures to achieve a high common level of security of network and information systems across the EU. Importantly, the Directive on Measures for a High Common Level of Cybersecurity across the Union (Directive (EU) 2022/2555) ('the NIS 2 Directive') was published on 27 December 2022 and will repeal the NIS Directive as of 18 October 2024.

Insights

Timea Bana, Partner at Dentons, explores the evolving landscape of data protection in the digital age, delving into the significance of European Data Protection Board (EDPB) guidelines to navigate complexities arising from technological advancements, offering clarity for entities such as online advertisers and businesses engaged in digital services.

On January 11, 2024, the European Commission issued a press release marking the entry into force of the Regulation on Harmonised Rules on Fair Access to and Use of Data (the Data Act) on the same date, as part of the European Union's (EU) digital strategy. The Data Act aims to facilitate the exchange of data and will become applicable 20 months later, on September 12, 2025. OneTrust DataGuidance Research gives an overview of the Data Act, with further insights provided by Wim Nauwelaerts, Partner at Alston & Bird.

On February 13, 2024, the European Data Protection Board (EDPB) published its Opinion on the notion of the main establishment of a controller in the EU under the General Data Protection Regulation (GDPR) (the Opinion). OneTrust DataGuidance Research breaks down the Opinion with expert comments from Philip James and Anna Allen, from Eversheds Sutherland's Global Privacy & Cybersecurity Group.

In today's rapidly evolving digital landscape, the EU stands at the forefront of introducing comprehensive digital and data-related legislation. The EU's intentions are to balance the interests of the data economy, promote fair competition, and protect the rights of individuals. In this article, Theresa Ehlen, Philipp Roos, and John-Markus Maddaloni, from Freshfields Bruckhaus Deringer, delve into the practical implementation of the EU rules for the data and digital landscape.

Understanding the obligations inherent under the EU Artificial Intelligence Act (the AI Act) is paramount for users and other actors navigating this dynamic landscape.

The AI Act predominantly imposes obligations on 'providers' (developers) rather than on 'users' (deployers) of high-risk artificial intelligence (AI) systems. While some of the risk posed by the systems listed in Annex III comes from how they are designed, significant risks stem from how they are used. This means that providers cannot comprehensively assess the full potential impact of a high-risk AI system during the conformity assessment, and therefore that users must have obligations to uphold fundamental rights as well. The first part of this series on the AI Act explored what types of AI are covered and what obligations are applicable to each AI actor. The second part of this series offered a brief explanation of the profound importance of providers' comprehending and adhering to the provider obligations. In the third and final article of this series, Sean Musch and Michael Charles Borrelli, from AI & Partners, and Charles Kerrigan, from CMS UK, explore the significance of comprehending these provider obligations and place them in the broader context of the ever-evolving AI terrain.

The EU Artificial Intelligence Act (AI Act) is part of the overarching EU Digital Strategy. The strategy 'focuses on putting people first in developing technology, and defending and promoting European values and rights in the digital world.'

On December 8, 2023, after an extensive discussion that lasted several days and was preceded by months of intense negotiations, the European Parliament, Council, and Commission announced that they had reached a provisional agreement on the AI Act.

This is not the end of the legislative process: this is only a political agreement, and for the AI Act to become EU legislation both the Parliament and the Council are required to formally adopt it. A reasonable forecast is that enactment will take place by the end of 2024, but it remains to be seen how discussions will proceed. These discussions will focus on the actual text of the AI Act, which may differ from the text that is available today. In this Insight article, Francesca Gaudino, from Baker & McKenzie LLP, comments on the current text of the AI Act, which may be amended upon formal adoption by the Parliament and Council.

In the context of business operations, understanding providers' obligations under the EU Artificial Intelligence Act (the AI Act) remains key, given providers' position as a key actor in the artificial intelligence (AI) value chain. The first part of this series on the AI Act explored what types of AI are covered and what obligations are applicable to each AI actor. In this article, Sean Musch and Michael Charles Borrelli, from AI & Partners, and Charles Kerrigan, from CMS UK, offer a brief explanation of the profound importance for providers of comprehending and adhering to these obligations, which extend beyond a mere checklist of regulatory requirements. Part three of this series explores the significance of comprehending these provider obligations.

This article is accurate as of its time of publication and will be updated to reflect any changes to the AI Act.

On June 4, 2021, the European Commission (the Commission) released a new set of Standard Contractual Clauses (SCCs), which included Module 4. In this Insight article, Charlotte Gerrish and Evane Alexandre, from Gerrish Legal, look at the contents of Module 4, what its purpose is, and who can use it.

It is now more important than ever to ensure compliance with data privacy laws, including for your intranet. Sofia Calado, Lead Privacy Counsel at Cloudflare, discusses the key concerns regarding intranet compliance and explores two cases of noncompliance. 

In this Insight article, Melike Hamzaoğlu and Yücel Hamzaoğlu delve into the political agreement reached among the European Commission, Council, and Parliament, unraveling the nuances of the forthcoming EU Artificial Intelligence Act (AI Act), particularly focusing on the classification dilemma of AI providers versus deployers and the implications for foundational models.

Artificial intelligence (AI) is transforming the way we work, learn, and communicate. The rapid development and adoption of new AI-based technologies have prompted regulators around the world to create policies and regulations governing their use, in an effort to ensure that AI is used in a responsible and ethical manner. Canada and the EU are among the many jurisdictions that have recently recognized the need for AI-specific regulation.

In April 2021, the European Commission published its proposed Artificial Intelligence Act (AI Act) as a framework for a coordinated European approach to addressing the challenges and concerns raised by the increasing use of AI. The following year, in June 2022, the Canadian government introduced Bill C-27 for the Digital Charter Implementation Act 2022 (Bill C-27), which aims to update existing federal private-sector privacy laws. In addition to privacy law reform, Bill C-27 also includes the Artificial Intelligence and Data Act (AIDA), Canada's first attempt to regulate AI through standalone legislation.

Both AIDA and the AI Act seek to encourage the responsible development and use of AI systems through a single regulatory framework. In this Insight article, Heather Whiteside, from Fasken, examines the similarities and differences between these legislative proposals, as currently drafted, in Canada and the EU.

In recent years, the EU has published different kinds of regulations and directives that cover many parts of the digital advertising landscape, including personalized advertising. Ady van Nieuwenhuizen and Laurent van der Bruggen, from Fieldfisher, provide an overview of personalized advertising in Europe and the interplay between the different regulations. 

Feedback