UK: Five ways in which the UK is showing leadership in AI governance

Despite having a smaller economy than the US, EU, and China, the UK is leading by example on AI regulation and governance. In this Insight article, Var Shankar, Executive Director of Policy at the Responsible AI Institute, focuses on five ways in which the UK Government is leading on AI.

AI Safety Summit

On November 1 and 2, 2023, the UK will host the AI Safety Summit at Bletchley Park, bringing together international lawmakers to address the dangers posed by 'frontier AI.'

UK Prime Minister Rishi Sunak announced plans for the summit during a meeting with US President Joe Biden in June 2023. The UK Government has extended invitations not just to representatives of the three leading global economies, the US, China, and the EU, but also to those from emerging economies, including India, Brazil, and South Africa. In a tense geopolitical environment characterized by increasing technological, economic, and military tensions, it is commendable that the UK government is seeking a common approach to this important issue.

The UK Government defines 'frontier AI' as 'highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models.' The summit will focus primarily on misuse risks and loss of control risks related to frontier AI. Its five objectives are:

  • arriving at a shared understanding of the risks of frontier AI;
  • a process for international collaboration;
  • safety measures for organizations;
  • areas for collaboration on AI safety research; and
  • how safe AI development might enable the global use of AI for good.

A context-specific approach to AI governance

The UK's approach to AI regulation is outlined in its AI White Paper dated March 29, 2023. The paper promises a 'pro-innovation' framework that brings clarity and coherence to AI regulation while remaining agile and iterative. It presents five principles for AI use across sectors without granting these principles statutory authority:

  • safety, security, and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

While it stops short of creating a central AI regulator, the AI White Paper also describes central functions to support its regulatory framework for AI. These functions include, for example, monitoring the implementation of the principles, monitoring AI risks to the economy, and supporting testbeds and sandboxes to foster AI innovation.

Although the five principles for AI use are meant to inform the approaches of sectoral regulators, regulators will not initially be legally obligated to take them into account. However, the UK Government anticipates introducing a statutory duty in the future requiring regulators to have due regard to the principles.

Sectoral regulators are likely to readily fulfill this statutory duty. During the January 2023 UK-Canada AI in Finance Regulatory Roundtable, organized by the Responsible AI Institute and hosted by the UK's Financial Conduct Authority (FCA), regulators and interested parties demonstrated a nuanced understanding of these principles and how their application would differ in specific use cases.

UK Standards Hub at the Alan Turing Institute

The UK Government has correctly recognized that AI standards will play a major role in AI governance. The EU's proposed AI Act, which is the world's most prominent and advanced effort to regulate AI across various domains, relies on the development of foundational AI standards. These foundational AI standards are being developed by recognized standards development organizations, such as ISO and IEEE.

To help UK stakeholders participate in the development of AI standards and navigate the AI standardization landscape, the UK Government formally launched the UK's AI Standards Hub in October 2022.

Hosted at the Alan Turing Institute, the UK's national institute for data science and AI, the Hub has four pillars:

  • observatory: Online libraries monitoring AI standards and related developments, including an online Standards Database that lists and tracks AI standards being developed in the UK and globally;
  • community and collaboration: Forming connections between stakeholders via workshops, live events, and discussion forums to facilitate shared understandings of priorities, strategies, and best practices;
  • knowledge and training: Virtual and in-person training to help stakeholders contribute to standards development and use published standards; and
  • research and analysis: Using the expertise around the Hub to address research questions related to AI standardization.

Consequently, the Hub will enable stakeholders across the UK to monitor, learn about, and network around AI standardization efforts. At its launch, Dr. Florian Ostmann, Head of AI Governance and Regulatory Innovation at the Alan Turing Institute, said that the Hub will "ensure that industry, regulators, civil society, and academic researchers have the tools and knowledge they need to contribute to the development of standards."

AI Research Resource

Developing advanced AI models and tools to observe and research their abilities has largely been the domain of just a few large technology companies headquartered in the US and China. The aim of a publicly funded AI Research Resource is to make cutting-edge AI computing capacity, storage, data, and tools available to researchers in government, universities, and civil society organizations.

In March 2023, the UK Government announced the funding of an AI Research Resource as part of a £900 million investment to improve the UK's computing capacity. The AI Research Resource will be based at the University of Bristol and is known colloquially as 'Isambard-AI.' The UK's Science, Innovation and Technology Secretary Michelle Donelan stated that the facility "will catalyze scientific discovery and keep the UK at the forefront of AI development."

The funding of an AI Research Resource in the UK in March 2023 led Stanford scholars Russell Wald and Daniel Zhang to note that "the UK is rapidly outpacing the US both in terms of AI investment and regulation." In the US, the effort to create an AI Research Resource was progressing slowly. Congress established the National AI Research Resource (NAIRR) Task Force in 2020. The NAIRR Task Force delivered its final report in January 2023. In July 2023, the bipartisan Creating Resources for Every American To Experiment with Artificial Intelligence Act (CREATE AI Act) was introduced in the US House and Senate, proposing the creation of the NAIRR in the US. If the CREATE AI Act is passed, the US NAIRR is expected to receive $1 billion in annual funding from the National Science Foundation. Given the close collaboration between the US and the UK in the fields of science and technology, the AI Research Resource concept receiving government support in both countries is likely to lead to research collaboration in the future.

Foundation Models Taskforce

The UK Government has allocated £100 million to fund a Foundation Models Taskforce, led by AI expert Ian Hogarth. The Taskforce will help develop the safety and reliability of foundation models, at both a scientific and a commercial level. It is modeled upon the UK's Vaccine Taskforce, which coordinated public and private vaccine development and deployment during the COVID-19 pandemic, and will accordingly operate with 'agility and delegated authority.'

Interestingly, the precise priorities and activities of the Taskforce remain somewhat unclear. Jack Clark, an AI scholar and co-founder of Anthropic, has suggested that the Taskforce should explore ways to evaluate frontier models, a notoriously difficult task.

While the US, EU, and China will continue to be the major jurisdictions to monitor for the establishment of AI laws, policies, and guidelines, the UK Government is setting an example of AI regulation and governance in the areas discussed in this article and is playing an important international convening role in the field of AI governance.

Var Shankar Executive Director of Policy
[email protected]
Responsible AI Institute, New York
