UK: AI White Paper - the UK's approach to regulating AI

The artificial intelligence (AI) industry in the UK has experienced significant growth over recent years, putting the UK in a strong position in the global AI market. The UK Government has expressed a 'pro-innovation' stance on AI, in terms of public funding, technology policy, and its regulatory approach, and has favored an iterative, sectoral approach to the regulation of AI. The UK's pro-innovation aim is explicit in the Government's white paper on AI regulation (the AI White Paper) and its response to the AI White Paper consultation, published in February 2024 (the Response). Amy Smyth, Fiona Maclean, and Georgina Hoy, from Latham & Watkins LLP, discuss the key ideas of the AI White Paper and the Response and compare the UK's AI regulatory landscape to other approaches around the globe.


Key takeaways from the AI White Paper

Scope

The UK approach focuses on regulating potentially harmful uses of AI, rather than AI systems themselves. In-scope AI products and services are identified by reference to two characteristics that are particularly likely to give rise to novel risks and regulatory implications:

  • systems that are adaptive to their training, such that the AI system is able to perform new forms of inference not directly envisioned by its human programmers; and
  • systems that are autonomous in that they are able to make decisions without the express intent or ongoing control of a human.

Overarching principles

The sectoral approach set out in the AI White Paper and the Response relies heavily on existing regulators to develop proportionate, context-driven updates that adapt existing UK laws and regulations to manage the potential harms of AI systems. UK regulators, including the Information Commissioner's Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA), and the Equality and Human Rights Commission, are required to take into account five principles, built upon the Organisation for Economic Co-operation and Development (OECD) principles for responsible stewardship of trustworthy AI, when responding to AI risks and opportunities in their respective sectors:

  • safety, security, and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

In the Response, the Government requires key regulators - including the ICO, FCA, and CMA, as well as the UK's media, education, and healthcare regulators, among others - to outline their strategic approach to AI by April 30, 2024.

New central functions to support the Government

The AI White Paper and the Response recognize that there is a pressing need for regulatory coordination to ensure that the Government's stated aim of providing certainty to businesses can be achieved. To address this, the Government has established a 'central function'1 to facilitate collaboration and drive regulatory coherence, as well as leverage existing expertise from across the wider UK economy. In the Response, the Government emphasizes the regulator coordination aspect of the central function and the international angle to its central function activities, seeking to ensure international regulatory interoperability.

Consultation

Since the publication of the AI White Paper for consultation in March 2023, various organizations have called on the Government to reconsider its position and introduce targeted AI legislation. Critics argue that a sector-based approach could result in significant gaps in protection for consumers.

Further, regulators are not consistently and appropriately resourced to develop and implement new regulations and guidance. For example, while certain regulators such as the FCA, the CMA, the Office of Communications (Ofcom), and the ICO already cooperate - including on AI topics - through forums such as the Digital Regulation Cooperation Forum (DRCF), others such as the Equality and Human Rights Commission have much more limited experience, capacity, and access to AI expertise.

The interim report from the House of Commons Science, Innovation and Technology Committee published in August 2023 and the Ada Lovelace Institute report published in July 2023 recommend a gap analysis among the UK's regulators to consider whether any regulators require new powers to implement and enforce the principles outlined in the AI White Paper. Both reports also advocate for the Government to take immediate action to introduce a statutory obligation on regulators to pay due regard to the AI White Paper principles. The July 2023 report from the House of Lords goes a step further and recommends establishing an AI regulator in the medium term. On November 22, 2023, a UK Artificial Intelligence (Regulation) Bill (the AI Bill) was introduced to the House of Lords as a Private Member's bill. The AI Bill's proposals include:

  • the creation of an AI Authority to coordinate sectoral regulators;
  • the mandatory designation of an AI Officer for any business developing, deploying, or using AI;
  • an obligation on businesses involved in training AI to provide to the AI Authority a record of all third-party data and intellectual property (IP) used in that training, and an assurance that they use all such data and IP with informed consent and in compliance with applicable IP obligations; and
  • a requirement for businesses supplying a product or service involving AI to give users clear and unambiguous health warnings, labeling, and opportunities to give or withhold informed consent in advance.

The AI Bill is at an early stage in the legislative process, and as a Private Member's bill, it is unlikely to become binding law (at least not in its proposed form). However, the AI Bill evidences ongoing support from the House of Lords for the creation of specific AI legislation and indicates a potential direction of travel.

The Response

In the Response, the Government confirms its intention to maintain its light-touch, principles-based approach and not to introduce specific AI legislation at this stage. The Government has, however, stated that targeted binding cross-sectoral requirements may be introduced for 'highly capable general-purpose AI systems' (to be identified via compute thresholds and capability benchmarking, with details to be confirmed). The Government sets out a number of tests for the introduction of new binding measures on the AI sector, including that voluntary measures and existing legal powers prove insufficient, that appropriate targeted mitigations are available, and that innovation and competition in the market are maintained.

In relation to copyright protection in the context of AI, following the AI White Paper the UK Intellectual Property Office convened a working group of rights holders and AI developers seeking to agree on a voluntary code of conduct for the use of copyright materials in AI training and development. Whilst the Government stated in the Response that the working group had provided a valuable forum for stakeholders to share their views, it confirmed that an agreement on an effective voluntary code could not be reached. The Government indicates that it is intending to facilitate further engagement between rights holders and the AI sector, with further details expected in the coming months.

Comparison to global approaches

While AI governance initiatives generally remain nascent across the globe, the UK Government's approach is distinct from that of the EU, the US, and China.

In the EU, the AI Act has been politically agreed upon by EU legislators and the final text is expected to be published in spring 2024, with the AI Act entering into force in 2024 and the majority of the substantive requirements applying two years later. The AI Act is a wide-ranging new regulation that seeks to harmonize the rules on AI systems applicable in the European internal market. The AI Act classifies AI practices into four tiers under a risk-based approach:

  • an unacceptable risk;
  • a high risk;
  • a limited risk; or
  • a minimal risk.

The AI Act seeks to mitigate potential risks posed by AI systems before those systems are placed onto the market and ensure that those risks are managed on an ongoing basis thereafter. For example, requirements applicable to high-risk AI systems include risk assessment and mitigation processes, human oversight, security and robustness, and conformity assessments. The AI Act includes a range of specific obligations for general-purpose AI and foundation models, such as transparency requirements, labeling requirements for AI-generated content, and enhanced requirements for models posing systemic risk. A separate proposal for an AI Liability Directive is currently moving through the EU's legislative process. The AI Liability Directive focuses on facilitating redress for individuals in cases where high-risk AI systems cause them harm.

In the US, the Biden Administration issued an Executive Order on October 30, 2023 (the Order), setting out a far-reaching approach to AI regulation. The Order requires federal agencies to issue new standards and guidance and to use their existing powers to proactively regulate the use of AI. The Order also introduces specific requirements for certain AI developers and AI data center owners, if relevant computing power and capacity thresholds are met, focusing on Federal Government reporting requirements and security standards.

China was the first country in the world to enact regulations specifically targeting AI systems, with its March 2022 regulation requiring algorithmic recommendation service providers to enable users to opt out of personalization or to disable algorithmic recommendation services. China has since established a series of regulatory measures for advanced algorithms, the latest being a new law regulating generative AI, which came into force in August 2023.

Practical steps

While the regulatory landscape in the UK and elsewhere continues to develop, there are a number of practical steps that organizations can take as they consider their AI compliance and governance approaches:

  • Understand how AI is being used in the business and the risks each use presents. Taking stock of existing AI use cases and future AI plans will help businesses prepare for and respond to AI guidance from regulators and adapt to shifting commercial demands and market practices along the technology and data supply chains.
  • Adopt (and adapt) existing risk management strategies to manage AI. While AI raises certain specific risks and novel legal issues, businesses may benefit from applying their existing risk and governance frameworks (including training for employees) to their AI practices in order to identify and mitigate potential risks early on in AI projects. As businesses deploy AI in more sophisticated, complex, or extensive ways, they should consider developing AI-specific governance frameworks, to better future-proof their AI risk management.
  • Continue to monitor the regulatory landscape and guidance published by the UK Government. The regulation of AI in the UK is an evolving space and remains subject to change as various policy pressures come into play, including intense public interest, regulatory scrutiny, and competing industry interests. Though the UK does not currently have specific AI legislation, users and developers of AI systems will need to understand and comply with existing and evolving legislation that applies across a raft of areas, including IP, data protection, antitrust, and consumer protection. For example, an AI tool for assessing the creditworthiness of loan applicants could fall within the remit of the ICO (use of personal data), the FCA (provision of financial services), and the Equality and Human Rights Commission (in the event of potentially discriminatory treatment based on protected characteristics, such as race or gender). The Government has also published a series of guidance notes, collected in its Responsible AI Toolkit, to support organizations and practitioners in safely and responsibly developing and deploying AI systems.

Upcoming developments

The fast pace of AI innovation and the increasingly sophisticated nature of AI adoption across industries - and international political pressure in the 'race to regulate' AI - are expected to drive further development of the Government's regulatory framework for AI. In particular, we expect to see further iterative updates to the Government's proposed regulatory framework set out in the Response, as well as more developed regulatory strategies from key UK regulators, which may in turn lay the groundwork for more active enforcement by those regulators.

Amy Smyth Knowledge Management Counsel
[email protected]
Fiona Maclean Partner
[email protected]
Georgina Hoy Associate
[email protected]
Latham & Watkins LLP, London
1. The AI White Paper and the Response identify a set of functions that the Government seeks to coordinate centrally, including a monitoring and assessment function, a cross-sectoral risk assessment function, an education and awareness function, and a horizon scanning function. The Government will initially be responsible for delivering the coordinated central function, working in partnership with regulators and other stakeholders in the AI ecosystem. Looking to the longer term, the Government states that it recognizes that there may be value in a more independent delivery of the central function.