EU: Roles of the provider and deployer in AI systems and models - part one

The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI). It introduces stringent requirements and responsibilities for various actors in the AI value chain. Two key actors in that value chain are AI providers and deployers.

Determining whether an entity is a provider or deployer is crucial, as the roles carry distinct obligations under the AI Act. As with the General Data Protection Regulation (GDPR) and its distinction between controller and processor, the classification of provider or deployer will always require an assessment of the facts, rather than simply being allocated through contractual arrangements.

In practice, as businesses increasingly seek AI solutions trained on their proprietary data and materials to achieve outputs that are more tailored to their needs, the line between a provider and deployer may become blurred. It may even be possible for an entity to be both provider and deployer for the same AI system, depending on how it is implemented.

In part one of this Insight series, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, examine the definitions of providers and deployers under the AI Act and how those roles are allocated. Several other operators in the AI value chain - product manufacturer, importer, distributor, and authorized representative - will not be covered in this article.

The AI Act's risk-based approach

The AI Act takes a 'risk-based approach,' i.e., the higher the risk of an AI system or model, the stricter the rules. Providers and deployers will have different obligations under the AI Act depending on the risk level of the system or model involved.

AI systems that pose an 'unacceptable risk' will be prohibited, while stringent regulatory requirements will be imposed on 'high-risk' AI systems and general-purpose AI (GPAI) models. High-risk AI systems are classified in Article 6 and include AI used in product safety, certain biometric technologies, recruitment and employment, essential public infrastructure (utilities), and the insurance and banking sectors.

A separate layer of obligations applies to GPAI models, which are defined as AI models that display significant generality and are capable of competently performing a wide range of distinct tasks. These are likely to include foundation models. Additional obligations apply to GPAI models that pose 'systemic risks,' due to their high-impact capabilities. Certain AI systems are also subject to transparency obligations. Part two of this series covers provider and deployer obligations in each of these contexts.

Definition of provider and deployer

A provider under the AI Act is defined in Article 3(3) as a natural or legal person or body that:

  • develops an AI system or GPAI model or has an AI system or GPAI model developed; and
  • places that system or model on the market, or puts that system into service, under the provider's own name or trademark, whether for payment or free of charge.

A deployer, as defined under Article 3(4) of the AI Act, is a natural or legal person or body using an AI system under its authority, except in the course of a personal, non-professional activity.

Territorial scope of the AI Act for providers and deployers

Under Article 2(1) of the AI Act, providers will be within the scope of the Act if they are:

  • placing on the market or putting into service AI systems in the EU (regardless of where the provider is located);
  • placing on the market GPAI models in the EU (regardless of where the provider is located); or
  • established or located outside of the EU, where the output produced by the AI system is used in the EU.

Deployers will be within the scope of the Act if they are:

  • established or located in the EU; or
  • established or located outside of the EU, where the output produced by the AI system is used in the EU.

Allocating roles of providers and deployers

Providers

Under the AI Act, providers bear overall responsibility for ensuring the compliance and safety of AI systems. Traditionally, the entity that designs, builds, or develops an AI model or system - such as an AI developer or machine learning (ML) specialist company - would be seen as the provider. However, under the AI Act definition, an entity that outsources the development of an AI system or GPAI model but is responsible for placing it on the market or putting it into service would itself be the provider.

For example, where an entity procures the services of a third-party ML specialist to design or develop an AI system for it, using the entity's own data, materials, or even bespoke algorithms, that entity is more likely to become the provider.

In this example, the AI developer or ML specialist, because it neither places the AI system on the market nor puts it into service, may play no role and therefore have no obligations under the AI Act. It is also possible that the procuring entity and its AI developer or ML specialist are both providers, for instance where components of the AI system, such as a GPAI model it incorporates, are placed on the market by the developer or specialist company.

Deployers

Deployers under the AI Act have the critical responsibility of ensuring the safe and compliant use of AI systems once they are rolled out. In the example above, the procuring entity that may be classed as the provider will also be the deployer if it puts the newly designed AI system into service.

Becoming providers or deployers under the AI Act for output used in the EU

In addition, as noted in the territorial scope section above, an entity established or located outside the EU that develops or uses an AI system may also become a provider or deployer under the AI Act if the output produced by the AI system is used in the EU, even when the AI system itself is not placed on the market, put into service, or used in the EU.

While Recital 22 of the Act narrows this to outputs intended to be used in the EU, the level of 'intention' required remains open to interpretation pending further guidance. The example given in Recital 22 concerns a provider or deployer established in the EU that contracts certain services to a provider or deployer established in a third country, in relation to an activity to be performed by an AI system that would qualify as high-risk. The AI system used in the third country could process data lawfully collected in and transferred from the EU and provide the resulting output to the contracting provider or deployer in the EU, without the AI system itself being placed on the market, put into service, or used in the EU. The Act contemplates that in such a circumstance, the provider or deployer of the AI system in the third country would be a provider or deployer under the AI Act.

It would therefore be prudent for entities established or located outside the EU that develop or use AI systems to include clear contractual provisions with their downstream third parties - particularly users of their AI systems or of outputs produced by those systems - specifying that outputs of the AI system are not intended for use in the EU and requiring the third party to ensure that outputs are not used in the EU.

Deployers becoming providers of high-risk AI systems

The AI Act sets out certain conditions under which a deployer could become a provider. This will only apply in connection with high-risk AI systems, as classified under Article 6 of the Act.

Article 25(1) provides that a deployer or other third party will be considered to be a provider of a high-risk AI system and will therefore assume all the relevant obligations of a provider if it:

  • puts its name or trademark on a high-risk AI system placed on the market or put into service (although contractual arrangements may stipulate how the provider obligations are allocated);
  • makes a substantial modification to a high-risk AI system placed on the market or put into service such that it remains a high-risk AI system; or
  • modifies the intended purpose of an AI system, including a GPAI system, which has not been classified as high-risk and is on the market or in service, in such a way that the AI system becomes a high-risk AI system.

In these circumstances, the initial provider may no longer be considered a provider of that specific AI system. However, Article 25(2) stipulates that it must cooperate closely with the new provider, make available the necessary information, and provide the reasonably expected technical access and other assistance required for the new provider to fulfill its obligations under the AI Act. This is particularly important where the new provider is only putting its name or trademark on the high-risk AI system and has had little involvement in its design or development, so clear contractual provisions should be in place to this effect.

Article 25(2) also provides that the initial provider is not obliged to cooperate in this way if it has clearly specified that its AI system is not to be changed into a high-risk AI system. It is therefore particularly important for providers of AI systems to include a clear and enforceable contractual provision to this effect with their downstream third parties, including deployers, that are capable of modifying the AI system.

Equally, a deployer of in-scope AI systems may want to include a clear provision in its third-party supplier contracts prohibiting suppliers from taking, on the deployer's behalf, any of the actions set out in Article 25(1) that could make the deployer a provider of a high-risk AI system.

Katie Hewson, Partner
Eva Lu, Associate
Stephenson Harwood LLP, London