Artificial Intelligence
The Federal Commissioner for Data Protection and Freedom of Information (BfDI) announced, on September 8, 2023, that the International Working Group on Data Protection in Technology (the Berlin Group) had released a working paper on smart cities, which includes recommendations to service providers and regulators.
On September 20, 2023, the Polish data protection authority (UODO) announced that it initiated an investigation into ChatGPT for unlawful processing of personal data following a complaint.
On September 21, 2023, the Office of the Privacy Commissioner (OPC) published guidance on Artificial Intelligence and the Information Privacy Principles. The guidance is directed at any New Zealander who uses artificial intelligence (AI) tools and is designed to help them comply with the Privacy Act 2020 (the Act).
On September 19, 2023, the Office of the Privacy Commissioner of Canada (OPC) released its annual report for 2022-2023. The report notes that the OPC accepted 1,241 complaints under the Privacy Act 1985 (the Privacy Act) and 454 under the Personal Information Protection and Electronic Documents Act (PIPEDA).
On September 20, 2023, the Spanish data protection authority (AEPD) released a blog post on the evolving landscape of transparency in artificial intelligence (AI).
On September 14, 2023, the Saudi Data & Artificial Intelligence Authority (SDAIA) published its Artificial Intelligence Ethics Framework version 2.0, focused on helping entities develop responsible artificial intelligence (AI)-based solutions that limit the negative implications of AI systems while encouraging innovation.
On September 19, 2023, the Department for Science, Innovation and Technology (DSIT) announced a new advisory service to help businesses launch artificial intelligence (AI) and digital innovations.
On September 19, 2023, the Future of Privacy Forum (FPF) announced the publication of its guide titled 'Best Practices for AI and Workplace Assessment Technologies.' The guide focuses on providing recommendations for organizations as they deploy artificial intelligence (AI) tools in their hiring and employment decisions.
On September 19, 2023, the Consumer Financial Protection Bureau (CFPB) published Consumer Financial Protection Circular 2023-02 'Adverse action notification requirements and the proper use of the CFPB's sample forms provided in Regulation B.' In particular, the Circular concerns the use of artificial intelligence (AI) or complex credit models in credit decisions.
On September 18, 2023, the Organization for Economic Cooperation and Development (OECD) published the paper entitled 'Initial Policy Considerations for Generative Artificial Intelligence.' In particular, the paper discusses the impact of generative artificial intelligence (AI) in relation to policy.
On September 18, 2023, the Competition and Markets Authority (CMA) announced that it had published a report following its initial review of competition and consumer protection considerations in the development and use of artificial intelligence (AI) foundation models.
In a statement following the Senate's inaugural AI Insight Forum, Colorado U.S. Senator Michael Bennet emphasized the need for a new, independent agency to oversee and regulate the realms of artificial intelligence (AI) and social media.
On September 14, 2023, the Electronic Privacy Information Center (EPIC) published the report 'Outsourced and Automated' on the use of artificial intelligence (AI) by government organizations.
On September 13, 2023, the U.S. Chamber of Commerce published a letter addressed to the Biden Administration outlining concerns regarding the EU AI Act.
On September 14, 2023, the Dutch data protection authority (AP) announced that it had requested information from a company using artificial intelligence (AI) for products aimed at children. The AP highlighted its concerns regarding the handling of personal data by a chatbot integrated into the company's app which is aimed at children.
In this Insight Article, Lara White, Miranda Cole, and Polina Maloshchinskaia, from Norton Rose Fulbright LLP, explain the aims and key components of the EU digital strategy, outlining at a high-level key legislation that has been published in this space in the past three years.
At a time when competing approaches to artificial intelligence (AI) governance are developing in different parts of the world, Singapore is charting a path that emphasizes pragmatism and enablement.
In this Insight article, Sarah Nasrullah, from Norton Rose Fulbright LLP, delves into Canada's AI regulatory landscape, examining key aspects of the AI Act, enforcement mechanisms, penalties, and implications for organizations and individuals.
The emergence of artificial intelligence (AI), particularly with the introduction of powerful generative AI-powered chatbots like OpenAI's ChatGPT, Google LLC's Bard, Microsoft Corporation's Bing Chat, Baidu, Inc.'s ERNIE Bot, and Alibaba's Tongyi Qianwen, has captured considerable attention this year.
The last year has seen significant advances in the use of chatbots, exemplified by the ChatGPT service developed by OpenAI. The service is underpinned by both a language model and a knowledge base. The language model allows it to generate text by predicting which string of words is most likely to follow on from a user prompt.
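To make that prediction step concrete, here is a minimal sketch of next-word prediction using a toy bigram model; the corpus and model are invented for illustration and are vastly simpler than the large language model behind ChatGPT.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most likely to follow `word` in the corpus."""
    followers = bigrams.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - the most frequent follower of 'the'
```

A real chatbot samples from a probability distribution over tens of thousands of tokens rather than taking the single most frequent follower, but the underlying idea of predicting the next item from context is the same.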
Artificial intelligence (AI) solutions can save a lot of time and money. Even before the emergence of generative AI, a study conducted by the EU-US Trade and Technology Council (TTC) revealed that as early as 2021, 28% of companies in the EU with more than 250 employees deployed AI technology.
Artificial intelligence (AI) has captivated the interest of both the American public and policymakers. In fact, within the US, the 118th Congress has held 17 hearings to date on this subject.
At the beginning of this year, the World Economic Forum Annual Meeting at Davos applauded the growth in artificial intelligence (AI), particularly generative AI. Against the backdrop of the world's biggest challenges, reports from Davos suggested that world leaders and business executives were cautiously optimistic for 2023.
In this Insight article, Brian McElligott and Conor Califf, from Mason Hayes & Curran LLP, explore the risks and safeguards when engaging vendors in the EU for AI-powered services, covering data security, legal compliance, transparency, child data processing, the EU AI Act, and intellectual property concerns.
In this Insight article, Colin Lambertus and Neil Williamson, from EM Law, delve into the complexities and legal implications of data scraping, a practice gaining renewed attention in the age of artificial intelligence (AI) and widespread web-based information.
Artificial intelligence (AI) has become a transformative force across every industry, revolutionizing the way businesses operate and impacting employment practices worldwide. In the US, the rapid advancement of AI has led to significant changes in the job market, with both positive and negative effects on employment.
In this Insight article, Mark Francis and Sophie Kletzien, from Holland & Knight LLP, delve into New York City's pioneering regulations, making it the first US jurisdiction to govern artificial intelligence's (AI) role in employment decisions.
The EU AI Act
The Council of the European Union announced, on 6 December 2022, the adoption of its general approach on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence ('the AI Act').
Key features of the AI Act include:
- The AI Act is a specific legal framework for AI.
- Legislation must support AI's potential to deliver breakthroughs.
- The General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR') acts as a 'crystal ball' for how the AI Act may play out in practice.
- Consumer nexus determines risk profile.
- Conformity assessments are a pre-market requirement for high-risk AI systems.
In this article, Sean Musch and Michael Borrelli, from AI & Partners, and Charles Kerrigan, from CMS, provide clarity on this ground-breaking piece of legislation on artificial intelligence ('AI') and why firms should take note.
Introduction
An AI system is a machine-based system that can, for a given set of human-defined objectives, generate output, such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
To understand that statement, it is necessary to know that:
- AI systems do not operate entirely without human intervention;
- AI covers a wide range of systems that can be used to deliver multiple outcomes; and
- self-learning AI systems (i.e., systems that use self-supervised learning), a type of AI system, recognise patterns in training data autonomously, without the need for labelled supervision (as the sketch below illustrates).
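To make the last point concrete, below is a minimal sketch of the self-supervised idea: the training signal is derived from the raw data itself (here, predicting the next value in a sequence), so no human-labelled data is needed. The data and the one-parameter model are invented for illustration.

```python
# Self-supervised learning in miniature: the training signal comes
# from the data itself (predict the next value), not from human labels.
data = [2, 4, 6, 8, 10, 12, 14, 16]

# Build (input, target) pairs directly from the raw sequence.
pairs = list(zip(data, data[1:]))

# Fit a one-parameter model (target = x + b) by averaging the offsets.
b = sum(t - x for x, t in pairs) / len(pairs)

print(f"learned offset: {b}")            # 2.0 - the pattern, found autonomously
print(f"prediction after 16: {16 + b}")  # 18.0
```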
The legal literature on AI includes extensive discussion of the ethical and moral use of AI in general, and of how AI should be treated under the laws of different jurisdictions. Under current English law, there is no bespoke framework governing the development, production, and/or operation/use of AI for the benefit of its myriad stakeholders.
Notwithstanding this, the forthcoming AI Act, a European legal framework addressing the fundamental rights and safety risks specific to AI systems, is poised to address these emerging areas of risk.
The AI Act came about because, at the time of its proposal, EU law did not:
- have a specific legal framework for AI;
- provide a definition of an AI system; or
- have a set of horizontal rules, that is, a single definition of AI and a single set of requirements and obligations that address, in a proportionate, risk-based manner and limited to what is strictly necessary, the risks to safety and fundamental rights specific to AI technologies.
The development and uptake of AI systems generally takes place in the context of the existing body of EU law that provides non-AI-specific principles and rules on the protection of fundamental rights, product safety, services, or liability issues. It is necessary to understand how this influenced the AI Act's design and, crucially, how firms are affected.
Impact on UK businesses
At its core, the AI Act aims to ensure the proper functioning of the European single market by creating the conditions for the development and use of trustworthy AI, that is, how AI systems are made and deployed by businesses for user consumption. AI systems can be viewed in different ways, which affects the way in which they are treated from a legal standpoint.
Firstly, from a technological perspective, AI systems are typically software-based, but are often also embedded in hardware-software systems. Businesses take a bimodal approach to algorithms, mainly rule-based and learning-based, which complicates recognition and makes AI harder to define. Secondly, in a socio-economic context, the use of AI systems has led to important breakthroughs in a multitude of domains, including the ability to support socially and environmentally beneficial outcomes and to provide key competitive advantages to companies. Just as the AI Act has been aimed at European-based businesses, third-country firms should also seek to understand its legal origins and what it is intending to achieve. Products and services sold are subject to one form of regulation or another, regardless of the industry. Why should AI be any different?
Comparison with the GDPR
Businesses are still feeling the effects of the EU's legislative action to control personal data, otherwise known as the GDPR. The GDPR aimed to protect the fundamental rights and freedoms of natural persons, and in particular their right to the protection of personal data, whenever their personal data is processed. Businesses not only stood up and took notice of it; those that did not felt the commercial ramifications, with reputational, financial, and legal costs deemed high for non-compliance. Similar to the GDPR, the AI Act has extremely broad coverage: the GDPR already applies to the processing of personal data through 'partially or solely automated means', which includes processing by any AI system. Comparisons can be drawn both at the level of its scope of application and in the granularity with which the provisions apply.
Although the costs of compliance with the AI Act are not directly comparable to those of the GDPR (AI Act cost estimates are given per product, whereas GDPR costs are given for the first year), they nevertheless give an idea of the order of magnitude. For example, regarding the GDPR, studies have found that 40% of small- and medium-sized enterprises ('SMEs') spent more than €10,000 on GDPR compliance in the first year, including 16% that spent more than €50,000. Depending on the final form of the AI Act, costs of compliance could also be in this range.
Meaning of 'high risk'
AI systems would be considered high-risk where they pose significant risks to the fundamental rights and freedoms of individuals or of whole groups. This remains a contentious point, given the degree of impact perceived to have been caused by AI. One of the discussion points of the AI Act was the need for common criteria and a risk assessment methodology to separate 'high-risk' from 'non-high-risk' AI applications. Knowing the distinction can mean the difference between a lean go-to-market strategy and one filled with a range of complexities and administrative hurdles.
At a high level, it could be reasonable to assume that:
- AI systems that are safety components of products are high-risk if the product or device in question undergoes a third-party conformity assessment pursuant to the relevant 'New Approach' or 'Old Approach' safety legislation; and
- for all other AI systems, it should be assessed whether the AI system and its intended use generate a high risk to the health and safety and/or the fundamental rights and freedoms of persons, on the basis of a number of criteria that would be defined in the legal proposal (the sketch after this list illustrates the two-step triage).
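As an illustration only, the following is a hedged sketch of that two-step triage logic in Python; the field names and criteria are hypothetical simplifications, not the AI Act's actual legal test.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """Hypothetical, simplified record of an AI system for triage."""
    name: str
    is_safety_component: bool = False           # embedded in a regulated product?
    third_party_assessed_product: bool = False  # product undergoes third-party conformity assessment?
    risk_criteria_met: list = field(default_factory=list)  # e.g. impact on fundamental rights

def is_high_risk(system: AISystem) -> bool:
    # Step 1: safety components of products that undergo third-party
    # conformity assessment are treated as high-risk.
    if system.is_safety_component and system.third_party_assessed_product:
        return True
    # Step 2: all other systems are assessed against defined criteria for
    # risks to health, safety, and fundamental rights.
    return len(system.risk_criteria_met) > 0

screener = AISystem("recruitment screener",
                    risk_criteria_met=["affects access to employment"])
print(is_high_risk(screener))  # True under this simplified test
```

In practice the second step is a legal assessment against criteria set out in the proposal, not a mechanical check; the sketch only conveys the two-tier structure.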
Again, the message is clear - those AI systems that have the ability to affect the status of an individual, tangible or otherwise, are at the forefront of legislators' minds. 'People, planet, profit', as the recognised saying goes.
Compliance obligations/requirements
Providers and users are first in line. The AI Act proposes horizontal mandatory requirements for high-risk AI systems that would have to be fulfilled for any high-risk AI system to be authorised on the EU market or otherwise put into service. The same requirements would apply irrespective of whether the high-risk AI system is a safety component of a product or a stand-alone application with mainly fundamental rights implications.
As an example, to ensure compliance with the AI requirements, a provider would have to:
- do a conformity assessment to demonstrate compliance with AI requirements before the system is placed on the market; and
- re-assess conformity in the case of substantial modifications, to take into account continuous learning capabilities (a minimal sketch of a re-assessment trigger follows this list).
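As a simple illustration of the re-assessment point, the sketch below fingerprints a model's configuration and flags any change for review. The configuration fields are hypothetical, and whether a given change amounts to a 'substantial modification' remains a legal judgement, not something code can decide.

```python
import hashlib
import json

def fingerprint(model_config: dict) -> str:
    """Stable hash of a model's configuration, used to detect changes."""
    blob = json.dumps(model_config, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Configuration recorded at the time of the original conformity assessment
# (hypothetical fields, for illustration only).
assessed = fingerprint({"weights_version": "1.0", "training_data": "2023-01"})

def needs_reassessment(current_config: dict) -> bool:
    # Any drift from the assessed configuration is flagged for review;
    # whether it is a 'substantial modification' is a legal judgement.
    return fingerprint(current_config) != assessed

print(needs_reassessment({"weights_version": "1.1", "training_data": "2023-06"}))  # True
```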
For high-risk AI systems, these clear and predictable requirements and obligations placed on all AI value chain participants are mostly common practice for diligent market participants and would ensure a minimum degree of algorithmic transparency and accountability in the development and use of AI systems.
Conclusion
To wrap things up, the AI Act brings widescale changes to the development, provision, and use/operation of AI. The obligations for firms should not be taken lightly.
Key things to note are:
- Implementation timeline: Q1 2024 is the expected enforcement date. Pre-emptive actions are strongly recommended.
- Preparation steps: These will depend on the nature, scale, and complexity of the business. Putting in place systems and controls to categorise AI systems marks a prudent first step (a minimal inventory sketch follows below).
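As a starting point for such systems and controls, here is a minimal, hypothetical sketch of an AI system inventory; the fields and categories are illustrative assumptions, not prescribed by the AI Act.

```python
# Hypothetical AI inventory entries: a first step toward categorising
# the AI systems a firm develops, provides, or uses.
inventory = [
    {
        "system": "CV screening model",
        "role": "user",                        # provider / importer / distributor / user
        "purpose": "shortlist job applicants",
        "provisional_category": "high-risk",   # to be confirmed by legal review
        "review_owner": "compliance team",
    },
    {
        "system": "spam filter",
        "role": "user",
        "purpose": "filter inbound email",
        "provisional_category": "minimal-risk",
        "review_owner": "IT",
    },
]

# Surface the systems that need priority legal review.
for entry in inventory:
    if entry["provisional_category"] == "high-risk":
        print(f"Priority review: {entry['system']} ({entry['review_owner']})")
```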
Once published, the AI Act would lay down the first landmark regime governing the AI space in a comprehensive and harmonised manner; thus, its breadth would affect the AI industry and could represent a blueprint for other jurisdictions to follow. Therefore, now is a good time to prepare for the main disruptive changes the AI Act is on the point of introducing.
Sean Musch Director [email protected]
Charles Kerrigan Partner [email protected]
Michael Borrelli Director [email protected] AI & Partners, London