EU: The roles of the provider and deployer in AI systems and models - part two

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI), introducing requirements and responsibilities for various actors in the AI value chain, including providers and deployers.

In part one of this Insight series, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, discussed the definitions of providers and deployers under the AI Act and how these roles are allocated. In part two, they focus on the differences in obligations and risk exposure between the two, as well as steps organizations can take to mitigate those risks.

Obligations of providers and deployers

The distinction between the definitions of a provider and a deployer is crucial because the bulk of the obligations under the AI Act are imposed on providers. As discussed in part one, these obligations vary depending on the risk level posed by the AI system or model.

Provider obligations for high-risk AI systems

Providers of high-risk AI systems must carry out a prior conformity assessment to ensure the system complies with the requirements set out in Chapter III Section 2 of the Act. Following this, the high-risk AI system must be registered, a declaration of conformity drawn up, and a CE marking affixed. The Chapter III Section 2 requirements cover:

  • establishing a risk management system;
  • data and data governance;
  • technical documentation;
  • recordkeeping;
  • transparency;
  • human oversight; and
  • accuracy, robustness, and cybersecurity.

Providers of high-risk AI systems must also:

  • establish a quality management system;
  • establish a post-marketing monitoring system;
  • keep logs and documentation;
  • take corrective actions if the system presents a risk to the health, safety, or fundamental rights of persons;
  • report serious incidents;
  • appoint an authorized representative if not established in the EU; and
  • cooperate with and provide information to competent authorities.

The Act also recognizes that, along the AI value chain, multiple parties often supply not only AI systems, tools, and services, but also components or processes with various objectives, which the provider incorporates into the AI system. These parties have an important role to play in the value chain. Article 25 of the Act therefore requires the provider of a high-risk AI system to enter into detailed written contractual terms with any third-party supplier of other AI systems, tools, services, components, or processes that are used in or integrated into the high-risk AI system. These terms must specify the necessary information, capabilities, technical access, and other assistance, based on the generally acknowledged state of the art, to enable the provider of the high-risk AI system to fully comply with the obligations set out in the Act. The Act also provides that the AI Office may develop and recommend voluntary model contractual terms for this purpose, to facilitate cooperation along the AI value chain.

Deployer obligations for high-risk AI systems

Deployers of high-risk AI systems are responsible for:

  • using the AI system in accordance with instructions;
  • assigning human oversight;
  • keeping logs; and
  • monitoring the performance and compliance of the AI system (see the illustrative sketch after this list).
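
The Act expects these logs to be retained and usable for after-the-fact review (for deployers, a minimum retention period of at least six months is contemplated, unless other applicable law provides otherwise). Purely as an illustrative sketch of the log-keeping duty above, and not a compliance recipe, a deployer might wrap each use of a high-risk AI system in a routine like the following; the function and field names are hypothetical:

import json
import logging
from datetime import datetime, timezone

# Illustrative only: field names and file paths are hypothetical, not
# prescribed by the AI Act.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(system_id: str, input_ref: str, output_ref: str,
                    human_reviewer: str | None = None) -> None:
    """Record one use of a high-risk AI system with a timestamp and
    references to the input, the output, and the overseeing human."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which high-risk AI system was used
        "input_ref": input_ref,            # pointer to the input data
        "output_ref": output_ref,          # pointer to the system's output
        "human_reviewer": human_reviewer,  # supports the human-oversight duty
    }
    logger.info(json.dumps(record))

# Example use:
log_ai_decision("cv-screening-v2", "applications/2024-0117",
                "decisions/2024-0117", human_reviewer="hr.lead@example.com")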

As deployers are more likely to have direct interaction with individual end users, they are also responsible for:

  • informing individuals that they are subject to the use of a high-risk AI system;
  • conducting impact assessments;
  • explaining decisions to individuals;
  • reporting if the system presents a risk to the health, safety, or fundamental rights of persons;
  • reporting serious incidents; and
  • cooperating with and providing information to competent authorities.

Provider and deployer obligations for certain AI systems

Under Article 4 of the Act, both providers and deployers of any AI systems within the scope of the Act must take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff. This should take into account the staff's technical knowledge, experience, education, and training and the context the AI systems are to be used in, including the persons on whom the AI systems are to be used.

Regardless of the risk level, providers and deployers also both have a range of transparency obligations under Article 50 of the Act to provide information in a clear and distinguishable manner at the latest at the time of the first interaction or exposure to an AI system.

Providers, who have more responsibility for the design and development of an AI system, must ensure that:

  • AI systems intended to interact directly with individuals are designed and developed in such a way that the individuals concerned are informed that they are interacting with an AI system, unless this is obvious from the context; and
  • outputs generated or manipulated by an AI system are marked in a machine-readable format and detectable as such (see the illustrative sketch after this list).
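
The Act does not prescribe a particular marking technique, and technical standards in this area (such as content-provenance metadata) are still maturing. Purely as a hypothetical sketch of the machine-readable marking idea, a provider might attach a provenance record to each generated artifact; the sidecar format and every name below are illustrative assumptions, not a prescribed or standard format:

import hashlib
import json
from datetime import datetime, timezone

def write_provenance_sidecar(content: bytes, out_path: str) -> None:
    """Write a machine-readable provenance record alongside a generated
    file, flagging it as AI-generated (illustrative format only)."""
    record = {
        "ai_generated": True,             # explicit machine-readable flag
        "generator": "example-model-v1",  # hypothetical model identifier
        "created": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),  # ties record to file
    }
    with open(out_path + ".provenance.json", "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

# Example: mark a freshly generated artifact.
generated = b"example AI-generated output"
with open("output.txt", "wb") as f:
    f.write(generated)
write_provenance_sidecar(generated, "output.txt")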

Deployers, who have a more direct interaction with individuals, must:

  • inform individuals when they are exposed to the operation of an emotion recognition or biometric categorization system;
  • ensure deepfakes generated or manipulated by an AI system are disclosed as such; and
  • ensure text generated or manipulated by an AI system that is published with the purpose of informing the public on matters of public interest is disclosed as such.

These transparency obligations do not apply to AI systems authorized by law to detect, prevent, investigate, or prosecute criminal offenses.

Provider obligations for GPAI models

Providers are the only operators that bear responsibilities for general-purpose AI (GPAI) models, including where they pose systemic risk.

Providers of GPAI models must:

  • create detailed technical documentation of the model (see the illustrative sketch after this list);
  • provide information enabling downstream providers that integrate the model into their own AI systems to understand its capabilities and limitations;
  • put in place a policy to comply with EU law on copyright and related rights;
  • draw up and make publicly available a sufficiently detailed summary of the content used for training; and
  • appoint an authorized representative if not established in the EU.
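
The precise content of this documentation is governed by the Act's annexes and by templates to be issued at the EU level, so the following is only a hypothetical illustration of the kind of machine-readable record a GPAI provider might maintain; every field name is an assumption, not a prescribed format:

import json

# Hypothetical, simplified skeleton of GPAI model documentation; the real
# content is set by the AI Act's annexes and official templates.
model_documentation = {
    "model_name": "example-gpai-model",  # hypothetical identifier
    "intended_tasks": ["text generation", "summarization"],
    "capabilities_and_limitations": {
        "capabilities": ["multilingual text generation"],
        "limitations": ["may produce inaccurate output"],
    },
    "training_content_summary": {  # public summary of training content
        "data_sources": ["licensed corpora", "publicly available web text"],
        "copyright_policy": "https://example.com/copyright-policy",
    },
}

with open("model_documentation.json", "w", encoding="utf-8") as f:
    json.dump(model_documentation, f, indent=2)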

In addition, providers of GPAI models posing systemic risks must also:

  • notify the European Commission;
  • conduct model evaluations;
  • assess and mitigate possible systemic risks;
  • keep track of, document, and report without undue delay serious incidents and corrective measures to address them; and
  • ensure an adequate level of cybersecurity protection.

Providers of GPAI models will be subject to a higher level of scrutiny by the European Commission.

Deployer obligations for GPAI models

Deployers do not have any obligations in relation to GPAI models alone. However, they may have obligations in relation to any AI system of which a GPAI model forms part, and these obligations will depend on the risk level of the AI system.

Tips for managing risk

Given that providers bear most of the responsibilities and compliance requirements under the AI Act, correct pre-contractual and contractual classification of the parties' respective roles as providers and deployers is vital for managing legal and reputational risk exposure in the event of regulatory challenge or litigation. Entities should not assume that, because they are engaging a third party to help them design and develop an AI system, that third party will be the provider and will bear all of the compliance responsibilities under the AI Act. In some cases, such a third party may not be a provider at all, or it may be a provider with its client also acting as a (separate) provider.

Aside from the Act's specific requirements (for example, as noted above, the requirement for providers of high-risk AI systems to enter into detailed written contractual terms with third-party suppliers of other AI systems, tools, services, components, or processes used in or integrated into the high-risk AI system), it will be crucial to ensure that the parties' roles and responsibilities with respect to the AI system are clearly specified in the contract. It will also be essential, where necessary, to ensure that suppliers provide sufficient cooperation and assistance to support their clients (whether acting as providers or deployers) in complying with the AI Act. The parties will also need to contractually allocate liability between them, covering, among other areas, non-compliance with the AI Act, as well as claims, losses, and damages arising from the application of the Act and of current law more generally.

Where the provider and deployer are two different entities, the deployer may have less onerous obligations under the AI Act, but it is not necessarily exposed to less risk. This is because deployers are responsible for verifying their provider's compliance with the AI Act and monitoring the AI system's performance. Given the current reluctance of many leading AI and machine learning developers to reveal the workings of their models and systems, this could prove challenging in practice, at least in the early days of the AI Act's operation. Deployers should also ensure that the provider will give sufficient cooperation and assistance to support their compliance with the AI Act.

No doubt, over time, we shall see a wide range of new forms of contract develop to reflect the allocation of roles, risks, and liabilities between providers and deployers.

Katie Hewson, Partner
[email protected]
Eva Lu, Associate
[email protected]
Stephenson Harwood LLP, London