
International: Where does the Bletchley Summit fit in the global scheme of AI and regulation?

In this Insight article, Sarah Cameron and Krish Khanna, from Pinsent Masons LLP, delve into the intricacies of global artificial intelligence (AI) regulation, examining diverse national approaches and their implications for businesses, standards, and international collaboration.


At techUK's Digital Ethics Summit in December 2023, the UK's Information Commissioner, John Edwards, warned that 2024 could be the year that the public loses confidence in AI. While the potential economic, societal, and environmental benefits of responsible AI development and deployment are increasingly well understood, the meteoric rise of generative AI and its potential for harm has shifted the complexity dial within that discussion and introduced new considerations to the ongoing debate about how best to regulate AI.

Against this backdrop, the UK Prime Minister hosted a global AI Safety Summit at Bletchley Park (Bletchley Summit) on November 1 and 2, 2023. What did the Bletchley Summit achieve? To what extent, if at all, does it help businesses to navigate differing regional regulatory requirements while making strategic decisions for the development and deployment of AI in a global market?

The Bletchley Summit - objectives and outcomes

The Bletchley Summit brought together 150 representatives, including leaders from governments around the world (notably including China), civil society groups, and research experts, aiming to address the risks of AI, particularly those associated with the development of 'frontier' models. The discussion focused on how these risks could be mitigated through internationally coordinated action, encompassing research and the establishment of new standards.

At a time of intense competition, with many countries seeking to position themselves as global leaders in AI, some critics were quick to dismiss the Bletchley Summit as political posturing. The question arose: What could it add to the work of the OECD and the G7 leaders' Hiroshima Process, which had already considered the opportunities and challenges of generative AI earlier in 2023? The UK Government acknowledged the various existing international efforts and initiatives addressing the capabilities and risks of frontier AI but underscored that the Bletchley Summit aimed to provide a platform for more in-depth discussion and to consider further action to complement existing initiatives. The expert report, Frontier AI: Capabilities and Risks (released to inform the Bletchley Summit discussions), highlighted that significant uncertainty exists around both the capabilities and risks of AI.

Listening to opposing views in the run-up to the Bletchley Summit, I was struck by the account of a leading computer scientist at an event in London, who explained that he had moved to this country from the US specifically because he believed the UK would be the only country to hold a summit focused on safety.

What has been achieved?

The Bletchley Summit was described as a global first. On day one, 29 countries, including the UK, Australia, France, Germany, India, Singapore, and the UAE, signed the Bletchley Declaration. A central theme of this declaration was the need for sustained international collaboration in addressing AI safety risks.

On day two, an agreement was reached on state-led testing of the next generation of models before their release, facilitated through partnerships with AI Safety Institutes. There was also support for an independent 'State of the Science' report, led by the Turing Award-winning scientist Yoshua Bengio. Inevitably, the Summit only marked the beginning of a new forum for discussion. The Republic of Korea is set to co-host a mini virtual summit on AI within six months, with France to host the next in-person summit 12 months after Bletchley. More ambitious measures are expected to be discussed at these upcoming summits.

Two days before the Bletchley Summit, the US published its Executive Order on Safe, Secure, and Trustworthy AI, which 'establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world...' On the same day, the G7 Leaders issued a Statement on the Hiroshima AI Process and its International Guiding Principles for Organizations Developing Advanced AI Systems. Some have questioned the timing of these announcements; however, whether they were a deliberate snub to the Bletchley Summit or simply aimed at further galvanizing progress on planned policy developments in this area, there is now increased momentum in international collaboration and alignment on addressing the most serious concerns raised by AI. Ultimately, this momentum is intended to benefit all, and that is what matters most.

There have been encouraging further developments since the Bletchley Summit. The National Cyber Security Centre (NCSC), part of Government Communications Headquarters (GCHQ), and the US Cybersecurity and Infrastructure Security Agency (CISA) released the first global guidelines for the secure development of AI technology, produced in cooperation with industry experts and 21 other international agencies and ministries from across the world. The guidelines aim to help developers of any system utilizing AI make informed decisions and adopt behaviors for the secure design, development, deployment, operation, and maintenance of AI systems. Additionally, the US and UK have both announced their own AI Safety Institutes. The US AI Safety Institute, announced as part of the Bletchley Summit, will sit within the National Institute of Standards and Technology (NIST) and operationalize the NIST AI Risk Management Framework. It will produce technical guidance that regulators will use to address rulemaking and enforcement.

On December 6, 2023, as part of the G7 Leaders' wider geopolitical statement on global challenges, the group welcomed the Hiroshima AI Process Comprehensive Policy Framework, which includes the guiding principles and code of conduct to address the impact of advanced AI systems on societies and economies. It also welcomed the UK Summit, referencing the ensuing summits in Korea and France. Whatever political lens one chooses to take, this is clearly positive progress towards greater international cohesion.

Different approaches to regulation across the globe

For some years, countries have been developing their own national or federal AI strategies and policies. While there has been considerable consensus around the foundational ethical principles underpinning these domestic approaches (for example, the Organization for Economic Cooperation and Development (OECD) principles), there has been considerable divergence in how more tangible and granular policy is implemented. For example, the EU has adopted a prescriptive, product safety-based regulatory approach, while others (including the UK and, in some respects, even state-led China) have adopted a lighter-touch, 'pro-innovation' stance.

The EU AI Act

The EU was an early mover when it proposed the EU AI Act, the world's first comprehensive legislative framework for AI. Although initial discussions about regulation took place in 2018, the draft AI Act was not published until 2021, and political agreement was finally reached in December 2023. The EU's ambition, like that of others, is to become a global leader in AI and to set a global regulatory standard, as it did with the General Data Protection Regulation (GDPR). While the Brussels effect in data protection regulation is undeniable, it remains uncertain whether the EU will achieve the same predominance with the AI Act.

The AI Act follows a horizontal, prescriptive, risk-based approach primarily centered on product safety and the protection of fundamental rights. The new rules establish obligations for providers and users depending on the level of risk associated with the AI. AI systems are classified into four risk categories: unacceptable, high, limited, or minimal. Systems falling into the unacceptable risk category are prohibited, while high-risk systems face strict regulatory requirements. Systems posing limited risk are subject to transparency requirements, while minimal-risk systems face few obligations, though systems will need to be assessed to determine which category they fall into.

Critics argue that the AI Act relies on outdated product safety regulations, with schedules that will need constant updating, and is too rigid to comply with, risking stifling innovation, as recently raised by President Macron. The late introduction of provisions addressing foundation models arguably demonstrates the importance of tech neutrality in finding an approach that doesn't rapidly need updating. Intense negotiations during trilogues revealed widely varying approaches of the Commission, Council, and Parliament, particularly concerning AI systems falling within the unacceptable risk category and the approach to law enforcement.

Much of the detail required for compliance with the AI Act in its final form will need to come from new or expanded standards, a task the Commission has assigned to the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC).

The UK
The UK has produced numerous policy papers since its AI Sector Deal in 2018, culminating in its long-anticipated response, published in February 2024, to the consultation on its March 2023 AI White Paper, which proposed a light-touch, pro-innovation, contextual, and sector-based framework. The response reaffirms the UK's principles-based and context-specific regulatory approach, grounded in five high-level horizontal principles. Existing regulators are expected to consider these principles when providing guidance along their vertical lines of responsibility. Many respondents to the consultation supported the vertical approach, citing regulators' established track record of being best placed to manage risks in their own sectors, such as health and financial services. However, some argued that a light-touch approach was inappropriate and would increase uncertainty, leading to a lack of cohesion and to inconsistency. In response to these challenges, and to the question of how to ensure the correct skills exist across multiple regulators, the response details various support mechanisms, including funding for new tools and research and a central function to facilitate coordination, knowledge exchange, and horizon scanning. Deadlines are set for regulators to set out their plans for responding to risks over the coming year.

While maintaining essentially the same overall policy as proposed in the White Paper, the response does specifically single out highly capable general-purpose models, which are not yet fully understood and which challenge a vertical, sector-based approach. A range of targeted, binding measures imposing a higher standard of regulation on such models will, therefore, be explored in the coming year.

One overarching criticism of the Government has been its perceived lack of decisiveness and progress in policy. While the response sets out a program of specific actions and deadlines for 2024, it still leaves key issues, such as liability and intellectual property (IP), subject to ongoing consideration and consultation. While the UK deliberately aims not to rush into regulation before fully understanding the landscape, it cannot afford to 'sit on its hands' indefinitely. Although its enthusiasm for encouraging and engaging in international collaboration is welcome, this should not come at the expense of timely progress on its national policy.

The US
The US held its first AI summit in 2018 to discuss the promise of AI and the policies needed to realize it, appointing its AI Select Committee. In July 2023, the White House secured voluntary commitments from leading AI companies to help move toward safe, secure, and transparent development of AI. These commitments underscore three principles fundamental to the future of AI: safety, security, and trust. The companies have pledged to implement security testing and cybersecurity measures. Transparency measures include providing greater visibility into the use of AI, detailing its capabilities, limitations, and appropriate areas of use, along with commitments to research.

In addition, in October 2022, the US published the Blueprint for an AI Bill of Rights, setting out five principles to protect the public from harm. These measures are designed to be applied proportionately to the nature of the harm or risk posed by AI systems to people's rights, opportunities, and access.

In January 2023, NIST produced its voluntary framework for the trustworthiness of AI systems, aiming to balance innovation and risk management across stakeholders. The framework addresses four key functions: Govern, Map, Measure, and Manage, and is complemented by a playbook suggesting actions stakeholders can take. Widely recognized as an invaluable resource, the framework and playbook serve as a reference for stakeholders developing their own governance.

As mentioned, the US published its Executive Order on Safe, Secure, and Trustworthy AI (reflecting the OECD principles) on the eve of the Bletchley Summit. This directive was described as directing 'the most sweeping actions ever taken to protect Americans from the potential risks of AI systems.' The order includes new measures for companies developing foundation models that pose serious risks to national security, economic security, or public health and safety. This includes notifying the Federal Government when training models and sharing the results of all red-team safety tests. The US also highlighted its intention to address standards, tools, and testing for AI systems that pose risks to the safety and security of critical infrastructure and other chemical, biological, radiological, nuclear, and cybersecurity threats.

Japan
Japan was a fast mover with its 2017 R&D Guidelines, prepared as a basis for international discussions at G7 and OECD to promote the benefits and reduce the risks of AI. It produced its own AI Strategy in 2019, including early considerations of generative AI, and separate Social Principles of Human-Centric AI.

In 2021, it commissioned the Expert Report on Governance of AI in Japan to operationalize the Japanese AI Principles. The in-depth report on different approaches and options rightly recognized that designing practical AI governance is not easy. On the one hand, horizontal regulation can address issues unique to AI, such as lack of explainability; on the other hand, solutions to the issues can be sector or use-case-specific. The versatility of AI use can therefore raise different issues in each application.

It concluded that, from the perspective of balancing respect for AI principles with the promotion of innovation (except for some specific areas), AI governance should be designed mainly with soft laws. This is favorable to companies that respect AI principles. An intermediate, non-binding guideline-based approach was therefore deemed appropriate, with legally binding horizontal requirements for AI systems considered unnecessary for now. This aligns with Japan's agile-based approach to governance in the digital arena.

Under Japan's Presidency, in May 2023, leaders of the G7 decided to take stock of the opportunities and challenges of generative AI, leading to a report informing and guiding discussions in the Hiroshima Process, and the furtherance of common policy, as discussed above.

China
In 2017, China published its ambitious National New Generation AI Plan, followed by the Principles for New Generation AI for Responsible Development in 2019. In 2021, it also published its White Paper on the Trustworthiness of AI and Ethical Norms for the Use of New Generation AI in China.

Initially adopting a principles-based approach, China shifted its focus when it released its draft generative AI regulation in the summer of 2023. The regulation mandates developer responsibility for the outputs created by their AI, imposes restrictions on the sourcing of training data, and sets challenging requirements for model accuracy. The draft also includes additional proposals related to facial recognition technology, building upon existing legislation regarding deepfakes and data security. 

Canada
Canada was also an early mover in addressing AI risks and governance. Like the EU, it has chosen to regulate AI systems across domains and applications with comprehensive frameworks applicable to all sectors, albeit without banning any specific systems or uses. Canada was a founding member of the Global Partnership on Artificial Intelligence (GPAI), designed to support and guide the responsible adoption of AI that is human-centric and grounded in human rights, diversity and inclusion, innovation, and economic growth. It established an Advisory Council on AI in 2019 and introduced its Algorithmic Impact Assessment in the same year. In 2023, Canada published guidance to federal institutions on their use of generative AI tools. Its Voluntary Code of Conduct on generative AI outlines measures to be applied in advance of binding regulation under the proposed Artificial Intelligence and Data Act. The code applies to firms developing or managing the operation of generative AI systems with general-purpose capabilities, with additional measures for firms whose systems are made widely available for use and are therefore exposed to a wider range of potentially harmful or inappropriate uses.

The role of standards

It is an old adage that when too many standards exist in a given subject area, the proposed solution is to create yet another standard, thereby only increasing complexity. However, there is real potential for the work of standards bodies to play a central role in driving international cohesion around responsible AI development and deployment.

The EU Commission has tasked CEN-CENELEC with producing a number of standards to fit the requirements of the EU AI Act within the next 12 months.

Considerable work has already been undertaken by the ISO/IEC JTC 1/SC 42 subcommittee, focusing on a range of issues including governance, foundational standards, data, trustworthiness, use cases, and the computational approaches and characteristics of AI systems. ISO/IEC standards on the governance of AI, AI management, and risk management provide solid foundations. ISO/IEC 42001, an AI management system standard, has just been published and can be used for conformity assessment and certification, thereby enhancing trust in the complex AI supply chain. It can be compared to ISO 9001 for quality management or ISO/IEC 27001 for information security management.

Other relevant standards include ISO/IEC 23053 (2022), establishing a framework for AI systems using machine learning, and ISO/IEC 25059, defining quality requirements for AI systems and providing guidelines for measuring and evaluating the quality of AI systems. This includes testing the accuracy, reliability, and robustness of AI models, as well as ensuring that the system meets ethical and legal requirements. These should be considered in conjunction with wider standards such as ISO 31000 for risk management.

Encouragingly, ISO/IEC JTC 1/SC 42 is working to increase cooperation with CEN-CENELEC. This, along with the international collaborative efforts of the UK's recently created AI Standards Hub, can collectively bring some convergence and cohesion around best practice and compliance.

Impact on business of the varying approaches to regulation

Against this backdrop of national activity and wider international collaboration, businesses, particularly small and medium-sized enterprises (SMEs), are trying to plan for the short- to medium-term development and deployment of AI. A constant question arises: How should they navigate the diverse approaches to regulation (vertical or horizontal, hard laws versus guidelines and principles), and how should they engage in the development of standards? One might question whether this growing matrix does anything other than hinder innovation.

It is frequently highlighted that a number of existing laws already govern significant elements of the AI development and deployment landscape. Ensuring compliance with existing best practices around general risk management, data governance, information security, data protection, employment, intellectual property, and competition laws will go a long way in preparing for future regulation. Additionally, engaging with regulatory bodies, standards organizations, and industry bodies, who often have direct communication channels with lawmaking governments, will help influence policy and ensure an understanding of specific challenges. Equally, government and regulatory bodies need to be keenly aware of the limited ability and resources for SMEs to engage in the same way as large multinationals. 

Conclusion
The significance of the Bletchley Summit and what it achieved therefore needs to be viewed in the wider context of ever-growing developments around the world. There has been a general consensus around the overriding principles that should be central to national policy and the development and deployment of AI for some time. Through Bletchley, there is a new momentum internationally to address the longer-term risks of the most advanced forms of frontier AI models.

For businesses focused on more practical, commercial, and strategic decision-making, the Bletchley Summit did not resolve the complexity they are already facing in addressing the vertical, horizontal, and 'hard' versus 'soft' approaches of different nations. This has long been a discussion within the tech industry.

It is clear that robust governance guidance is widely available and often overlapping. The OECD Observatory points to an increasing portfolio of assurance guidance, standards, and free tools that are available. Existing laws cover a lot of ground in terms of required practice and governance. There is clearly signposted activity over the next 12-24 months concerning standards.

Some have said that the EU AI Act will be extremely demanding to comply with and will deter investment, development, and deployment in the EU. Others argue that compliance with good practice will be enough for most AI systems to achieve the requisite level of governance. Challenges are already evident, and will become more so, in addressing safe and responsible deployment by users, particularly as foundation models are more widely adopted. It may be that international collaboration coalesces around where red lines are needed for the most dangerous use cases as collective understanding of these grows, bringing an increasing number of tech providers onside. Comparisons with nuclear non-proliferation treaties are often drawn.

Meanwhile, companies are grappling with existing but increasingly complex issues around contracting for commercial risk and liability, IP, data protection, governance, and dispute resolution. This will evolve into a framework of accepted best practices for different situations, as we have seen with other new technologies in the past. For example, some of the provisions of the EU's proposed AI liability rules may find their way into commercial contracts between businesses outside the EU, shifting the burden of proof in certain circumstances.

While addressing day-to-day concerns, we can also look ahead to see how international cooperation with tangible outcomes progresses at next year's summits in Korea (virtual) and thereafter in person in France.

Sarah Cameron Legal Director
[email protected]
Krish Khanna Associate
Pinsent Masons LLP, London