Support Centre

EU

Summary

Law: General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR')

Regulator: The European Data Protection Supervisor ('EDPS') is the European Union's (EU) data protection authority and monitors privacy within EU institutions and bodies. The European Data Protection Board ('EDPB') is an independent European body composed of representatives of the national data protection authorities and the EDPS.

Summary: The GDPR was approved on 24 May 2016 and became applicable in the EU Member States from 25 May 2018. It has since inspired several other privacy laws around the world. The GDPR lays down rules relating to the processing of personal data aimed at protecting natural persons, as well as provisions on the free movement of personal data. Although a European regulation, the GDPR has a broad scope of application: it imposes direct statutory obligations on data processors and can affect controllers established outside the EU.

The EU has also established further pieces of legislation with substantive importance within the Digital Single Market. In particular, the Directive on Privacy and Electronic Communications (2002/58/EC) (as amended) ('the ePrivacy Directive') regulates the processing of personal data and the protection of privacy in the electronic communications sector, with specific reference to, among other things, the regulation of unsolicited communications and of cookies and similar technologies. Furthermore, the Directive on Security of Network and Information Systems (Directive (EU) 2016/1148) ('the NIS Directive') establishes measures to achieve a high common level of security of network and information systems within the EU. Importantly, the Directive on Measures for a High Common Level of Cybersecurity across the Union (Directive (EU) 2022/2555) ('NIS 2 Directive') was published on 27 December 2022 and will repeal the NIS Directive as of 18 October 2024.

Insights

The EU has recently adopted several new cyber laws that impose detailed cybersecurity requirements, can in some cases hold senior management personally liable, and provide for significant fines for non-compliance. These new cyber laws include: (i) the Network and Information Systems 2 Directive (NISD2); (ii) the Digital Operational Resilience Act (DORA); (iii) the Cyber Resilience Act (CRA); (iv) the Critical Entities Resilience (CER) Directive; and (v) the Cybersecurity Act.

These new cyber laws are game changers, requiring businesses operating in the EU to examine their cybersecurity processes, update their incident reporting plans, and review their interactions with vendors and other third parties. Crucially, senior management is now responsible for compliance, so businesses must evaluate their exposure and implement the required measures.

In the first article in this series on the new EU cyber laws, William Long and Francesca Blythe, from Sidley Austin LLP, focus on NISD2, which will be enforceable from October 18, 2024.

The EU Artificial Intelligence Act (the EU AI Act) represents a significant regulatory framework aimed at ensuring the safe and ethical deployment of AI technologies across various sectors, including healthcare. In this Insight article, Michael Borrelli, from AI & Partners, delves into the implications of the EU AI Act on the healthcare sector, highlighting the opportunities, challenges, and regulatory requirements that healthcare providers and AI developers must navigate.

On July 12, 2024, the EU officially published the Artificial Intelligence Act in the Official Journal of the European Union, marking a significant milestone in the regulation of artificial intelligence (AI) technologies. This comprehensive legislative framework aims to ensure the safe, transparent, and ethical use of AI within the EU, while fostering innovation and protecting fundamental rights. For businesses, the EU AI Act brings both opportunities and challenges. Sean Musch and Michael Charles Borrelli, from AI & Partners, and Charles Kerrigan, from CMS UK, explore the key provisions of the Act and what they mean for businesses operating in the EU.

We are currently seeing vast development and deployment of artificial intelligence (AI)-based systems and solutions across sectors and society as a whole. At the time of writing, these deployed AI systems are about to become subject to a detailed and comprehensive regulatory regime of their own, as the EU rolls out its newly finalized AI Act.

While the AI Act will introduce new obligations on AI developers and deployers, it should not be forgotten that medical sector technology is already subject to its own rules. In particular, the EU Medical Devices Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR) have already reformed the European medical device (MD) regulatory framework and set high safety and performance standards for medical devices in the EU. As AI is increasingly being deployed in the medical sector, one may ask: how do the sector-agnostic AI Act and the sector-specific MD regulations interact? Otto Lindholm, Counsel at Dottir Attorneys Ltd, looks at the overlap between the different pieces of legislation and provides some key takeaways for navigating them.

The legal framework for direct marketing activities in the EU is set out in two main pieces of legislation, namely the General Data Protection Regulation (GDPR) and the Directive on Privacy and Electronic Communications (2002/58/EC) (as amended) (the ePrivacy Directive).

The GDPR is the general data protection framework applicable to companies and natural persons established in the EU or that direct their services towards individuals in the EU. This is an important consideration for direct marketing because it covers, for example, US companies that direct services to EU customers and send them marketing emails; such companies will need to respect the GDPR rules. In terms of material scope, the GDPR only applies to the processing of personal data of natural persons who are identifiable (either directly or indirectly). This means that mailing lists consisting solely of generic professional email addresses are not subject to the strict requirements of EU data protection legislation.

The ePrivacy Directive is the data protection framework applicable in the electronic communications sector. The ePrivacy Directive provides a set of specific rules on data protection in the area of electronic communications, such as on the confidentiality of electronic communications, the treatment of traffic data (including data retention), and rules on spam and cookies.

A proposal for an ePrivacy Regulation was published on January 10, 2017, as the ePrivacy Directive is no longer optimally suited to the fast-changing nature of the electronic communications sectors. However, the discussions on the proposal for an ePrivacy Regulation have been stalled at the Council for almost six years, and it is uncertain whether the proposal will be adopted in the foreseeable future. The ePrivacy Directive, therefore, remains the law of the land, complementing the GDPR. Jolien Clemens, Attorney-at-Law at Timelex, explores the ePrivacy Directive rules and the GDPR as the currently applicable legal frameworks in the context of direct marketing.

Six years after the go-live of the General Data Protection Regulation (GDPR), covered organizations have become well accustomed to Data Protection Impact Assessments (DPIAs). Seasoned privacy professionals have certainly been part of many discussions about the difference between DPIAs and Privacy Impact Assessments (PIAs), and whether there is, or should be, any difference.

In a time when everyone is talking about artificial intelligence (AI) and the upcoming EU AI Act (the AI Act), organizations are turning to privacy experts to see if this new legislative and regulatory focus will lead to a similar level of compliance work (and expense). In particular, they are wondering whether the AI Act's Conformity Assessments (CAs) and Fundamental Rights Impact Assessments (FRIAs) will find their way into every organization's compliance framework.

In this article, Maarten Stassen, of Crowell & Moring LLP, compares the GDPR's DPIAs with the AI Act's CAs and FRIAs, considering their key practical considerations and impact on organizations.

In today's digital age, businesses are constantly seeking innovative ways to connect with their customers and drive growth. One technology that has been making waves in the marketing industry is artificial intelligence (AI). According to the definition laid down in the latest available version of the EU AI Act, 'AI system' means 'a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' Gianluigi Marino and Andrea Cantore, from Osborne Clarke, define AI in marketing and discuss risks and obligations.

In this Insight article, Iain Borner, Chief Executive Officer at The Data Privacy Group, delves into the transformative impact of the EU Artificial Intelligence Act (AI Act), which establishes a regulatory framework aimed at fostering trustworthy artificial intelligence (AI) aligned with European values. With a focus on high-risk AI systems, the AI Act introduces mandatory compliance processes and provisions, setting a precedent for ethical innovation that prioritizes people's rights and safety.

In this article, Arun Babu and Gayathri Poti, from Kochhar & Co., delineate the primary disparities between the Digital Personal Data Protection Act (DPDPA) and the General Data Protection Regulation (GDPR) from a business perspective, analyzing the rationale behind these distinctions and their practical implications.

On April 17, 2024, the European Data Protection Board (EDPB) published the Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms. The supervisory authorities of some EU Member States asked the EDPB to issue this opinion in order to obtain clarity on the circumstances in which consent or pay models for behavioral advertising can be used by large online platforms on the basis of valid consent or under which circumstances valid consent can be given in such cases. According to the supervisory authorities, there is no uniform answer to this question. However, the clarification is particularly relevant for the general application of the principles on the concept of consent. Dr. Carlo Piltz and Alexander Weiss, from Piltz Legal, unpack the opinion, looking specifically at the opinion's implications on both platforms and European legal frameworks.

On April 17, 2024, the European Data Protection Board (EDPB) published Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms. In this Insight article, OneTrust DataGuidance provides an overview of the opinion.

On March 13, 2024, the European Parliament adopted the European Union's (EU) Regulation laying down harmonized rules on artificial intelligence (AI), commonly known as the Artificial Intelligence Act (the AI Act) (see the European Parliament press release and OneTrust DataGuidance News article). Almost three years after the European Commission's first legislative proposal, and after the EU legislators reached a political agreement on the key aspects of the AI Act in December 2023 during the trilogue that followed months of negotiations, the world's first comprehensive regulatory framework for AI has officially been approved.

This Insight article addresses the most important questions as to what companies and other entities should know and consider when conducting any activities involving AI. Valentino Halim, Junior Partner at Oppenhoff & Partner, unpacks the AI Act and provides insight into the scope and key obligations of the new regulatory framework for AI at the EU level.