Insights

The discipline of marketing and advertising has received considerable attention from data protection practitioners over the years. The sector was critically examined for its use of profiling, followed by big data and, most recently, real-time bidding. Despite all this, many may wonder why companies, and in particular their marketing departments, remain keen on the highly controversial practice of personalizing advertising messages.

In this Insight article, Dr. Sachiko Scheuing, from Acxiom, examines recent regulatory developments affecting personalized advertising and how organizations can ensure they are compliant.

The EU Artificial Intelligence Act (the AI Act) and Regulation (EU) 2023/1230 on machinery (the Machinery Regulation), which repeals Directive 2006/42/EC (the Machinery Directive), are closely intertwined. In this Insight article, Rosa Barcelo, Partner at McDermott Will & Emery, analyzes the purposes, interplay, and scope of the AI Act, the Machinery Directive, and its successor, the Machinery Regulation.

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI), introducing requirements and responsibilities for various actors in the AI value chain, including providers and deployers.

In part one of this Insight series, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, discussed the definitions of providers and deployers under the AI Act and how these roles are allocated. In part two, they focus on the differences in obligations and risk exposure between the two, as well as steps organizations can take to mitigate those risks.

The increasing use of dashcams across Europe has raised several data protection concerns, particularly around the collection and processing of personal data. As dashcam footage can capture individuals and vehicles, understanding the implications under the EU's strict data protection laws is essential.

In the first part of this Insight series on dashcam regulations in Europe, OneTrust DataGuidance consulted with legal experts in the EU, UK, France, and Belgium to delve into each country's legal regulations on dashcams and how they can be used legally. This Insight article offers guidance on how to stay compliant while benefiting from this technology.

India's commitment towards the promotion and development of artificial intelligence (AI) was recently highlighted in the Union Budget of 2024-25 that was announced by the Indian government in July 2024. The Budget allocated $65 million exclusively to the IndiaAI Mission, an ambitious $1.1 billion program that was announced earlier this year to focus on AI research and infrastructure in India. It has also widely been reported that the Ministry of Electronics and Information Technology (MeitY) is in the process of formulating a national AI policy, which is set to address a wide spectrum of issues including the infringement of intellectual property rights and the development of responsible AI. As per reports, MeitY is also analyzing the AI frameworks of other jurisdictions to include learnings from these frameworks in its national AI policy.

Part one of this series focused on understanding the regulatory approaches adopted by some key jurisdictions like the EU and the USA. In part two, Raghav Muthanna, Avimukt Dar, and Himangini Mishra, from INDUSLAW, explore measures that India can adopt, and lessons it can take from such markets, in its journey in the governance of AI systems.

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI). It introduces stringent requirements and responsibilities for various actors in the AI value chain. Two key actors in that value chain are AI providers and deployers.

Determining whether an entity is a provider or deployer is crucial, as the roles carry distinct obligations under the AI Act. As with the General Data Protection Regulation (GDPR) and its distinction between controller and processor, the classification of provider or deployer will always require an assessment of the facts, rather than simply being allocated through contractual arrangements.

In practice, as businesses increasingly seek AI solutions trained on their proprietary data and materials to achieve outputs that are more tailored to their needs, the line between a provider and deployer may become blurred. It may even be possible for an entity to be both provider and deployer for the same AI system, depending on how it is implemented.

In part one of this Insight series, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, examine the definitions of providers and deployers under the AI Act and how such roles are allocated. There are several other operators in the AI value chain - product manufacturer, importer, distributor, and authorized representative - that will not be covered in this article.

In this Insight article, Lara White, Hannah Meakin, Marcus Evans, Hannah McAslan-Schaaf, and Rosie Nance, from Norton Rose Fulbright LLP, explore how the UK's financial services regulators, including the Financial Conduct Authority (FCA) and the Prudential Regulation Authority (PRA), are navigating the evolving landscape of artificial intelligence (AI) through a technology-neutral, principles-based approach. They emphasize the importance of balancing AI's transformative potential with robust regulatory oversight to ensure its safe and responsible use in the sector.

The construction of a fair, thriving, and progressive data ecosystem in the European Union (EU) is a major goal of the European Commission (EC). Following the entry into force in 2018 of the GDPR, the backbone of the EU's rules on processing personal data, the EC has outlined subsequent regulatory initiatives to address the challenges and unleash the opportunities presented by the data economy in the EU. The success of the EU's Digital Decade initiative hinges on various factors, such as the responsiveness of organizations, the availability of clear guidance, and the establishment of legal certainty and coherence among regulators. One thing is clear: the EU is exploring new horizons in digital rulemaking.

In this Insight article, Heidi Waem, Muhammed Demircan, and Simon Verschaeve, from DLA Piper UK LLP, outline the main points of interplay of the GDPR, Chapter II of the Data Act, and the DSA.

The Regulation on digital operational resilience for the financial sector (DORA) entered into force on January 16, 2023, and forms an integral part of the European Commission's digital finance package, a set of measures to further enable and support the potential of digital finance in terms of innovation and competition, while mitigating the risks arising from it. DORA will become directly applicable in each Member State from January 17, 2025.

In this article, Desislava Krusteva, Partner at Dimitrov, Petrov & Co., gives an overview of the interplay between DORA and the General Data Protection Regulation (GDPR).

In the past few years, the digital market has witnessed an outpouring of artificial intelligence (AI) systems, with the AI market expected to reach a valuation of nearly $2 trillion by 2030. However, the surge in the use of AI has given rise to several pertinent issues, ranging from concerns about data privacy and intellectual property rights infringements to questions of transparency and ethics. In the first part of this series on navigating the AI frontier, Raghav Muthanna, Avimukt Dar, and Himangini Mishra, from INDUSLAW, aim to analyze and assess the regulatory position around AI in three key jurisdictions, namely the EU, USA, and India. Part two of this series will evaluate the diverse approaches of these jurisdictions and the learnings that India can adopt from the EU and the USA while framing its own set of AI regulations, as well as what lies ahead for India in the AI regulatory space.

Trustworthy artificial intelligence (AI) has become a crucial topic, and the recently published EU Artificial Intelligence Act (the AI Act) represents a significant legislative development. This landmark AI regulation will reshape AI deployment across sectors, requiring organizations to comply within two years from August 2, 2024 (36 months for certain types of high-risk AI systems). In this Insight article, Sean Musch and Michael Borrelli, from AI & Partners, and Victoria Hordern, from Taylor Wessing, briefly examine how implementing ISO/IEC 42001:2023 standards can facilitate compliance with the EU AI Act. Moreover, they provide an analysis of a research report from DIGITALEUROPE highlighting key aspects of ISO/IEC 42001:2023 that align with the EU AI Act.

The Information Commissioner's Office (ICO) published guidance on the use of artificial intelligence (AI), recognizing both the risks involved and the potential of responsibly deployed AI to make a positive contribution to society.

Part one of this Insight series discusses chapter one of the ICO's guidance on the lawful basis for web scraping, part two focuses on chapter two, concerning the application of the purpose limitation principle to different phases of the generative AI lifecycle, and part three explores the third chapter on the accuracy of data and outputs. In part four, James Castro-Edwards, from Arnold & Porter, delves into chapter four of the guidance on individual rights, particularly in the stages of training and fine-tuning generative AI.