EU: Navigating the AI Act - key practical considerations for provider obligations

In the context of business operations, understanding providers' obligations under the EU Artificial Intelligence Act (the AI Act) remains essential, given providers' position as key actors in the artificial intelligence (AI) value chain. The first part of this series on the AI Act explored what types of AI are covered and what obligations apply to each AI actor. In this article, Sean Musch and Michael Charles Borrelli, from AI & Partners, and Charles Kerrigan, from CMS UK, offer a brief explanation of the profound importance of providers comprehending and adhering to these obligations, which extend beyond a mere checklist of regulatory requirements. This third part of the series explores the significance of comprehending these provider obligations.

This article is accurate as of its time of publication and will be updated to reflect any changes to the AI Act.

Introduction

Significance of provider obligations

At their core, a provider's AI Act obligations are about more than legal compliance; they are a strategic imperative. Providers, whether of goods or services, play a pivotal role in the broader business ecosystem, particularly as AI continues to proliferate globally. A nuanced grasp of their obligations empowers organizations to foster transparency, mitigate risks, and elevate the overall quality of their offerings. From ensuring ethical sourcing to cultivating robust partnerships, the scope of these regulatory obligations significantly influences a company's regulatory footprint and market standing.

Significance of compliance in the business landscape

The significance of AI Act compliance has started to reverberate through every facet of the modern business landscape. Beyond avoiding legal repercussions, AI Act compliance is the bedrock of modern corporate responsibility. Embracing and exceeding regulatory standards is a testament to an organization's commitment to ethical conduct. This commitment, in turn, resonates with stakeholders, fostering trust and enhancing the brand's reputation. Ensuring trustworthiness along the entire value chain of general-purpose AI (GPAI) models and their diversified business models presents significant challenges for policymakers. Future-proofing policies and ensuring compliance and effective enforcement remain critical to building and maintaining public trust in these technologies, not only in the context of the AI Act but in all jurisdictions aiming to set a governance regime for AI.

Key regulatory obligations

Introduction to provider obligations

Providers navigating the intricate landscape of regulatory frameworks encounter a spectrum of obligations that form the cornerstone of ethical and compliant business practices, such as maintaining a quality management system in line with Article 17 (Quality management system) of the AI Act. This section provides a concise overview of these obligations, emphasizing the symbiotic relationship between providers and the AI Act.

Primarily, 'providers' of systems are those who develop an AI system with a view to placing it on the market or putting it into service under their own name or trademark (Article 3). The AI Act mandates providers of high-risk AI systems to perform a prior conformity assessment before placing them on the market (Articles 16 and 43). Providers, pursuant to the New Legislative Framework (NLF) model, are required to ensure their systems comply with the 'essential requirements' set out in Title III, Chapter 2 of the Act. They can then attach a CE mark to conforming systems, which can be freely imported and distributed throughout the EU.

In this sense, the majority of requirements pertain to data and data governance; technical documentation; record keeping; transparency and provision of information to users; human oversight; and robustness, accuracy, and security.

Additionally, providers must construct a risk management system that documents and manages risks across the AI system's entire lifecycle, both when the system is used as intended and under conditions of 'reasonably foreseeable misuse.' Risks identified through post-market surveillance must also be incorporated. The aim is to reduce the 'high risks' of the AI system to an acceptable residual level; where risks cannot be eliminated, sufficient mitigation and control measures must be deployed. Residual risks must then be communicated to users.
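
To make this concrete, the sketch below models a lifecycle risk register in Python. It is a minimal illustration, not a prescribed format: the AI Act does not mandate any particular data structure, and the RiskLevel scale, the Risk fields, and the disclosure threshold are all hypothetical assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    description: str
    source: str                      # e.g., "design review" or "post-market surveillance"
    inherent_level: RiskLevel
    mitigations: list[str] = field(default_factory=list)
    residual_level: RiskLevel = RiskLevel.HIGH

@dataclass
class RiskRegister:
    """Hypothetical lifecycle risk register for a high-risk AI system."""
    acceptable_residual: RiskLevel = RiskLevel.LOW
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def residual_risks_to_communicate(self) -> list[Risk]:
        # Residual risks above the acceptable level must be disclosed to users.
        return [r for r in self.risks
                if r.residual_level.value > self.acceptable_residual.value]

register = RiskRegister()
register.add(Risk(
    description="Model drift degrades accuracy over time",
    source="post-market surveillance",
    inherent_level=RiskLevel.HIGH,
    mitigations=["quarterly revalidation", "performance monitoring"],
    residual_level=RiskLevel.MEDIUM,
))
for risk in register.residual_risks_to_communicate():
    print(f"Disclose to users: {risk.description}")
```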

Conformity assessment

Pre-market requirements

The AI Act requires providers to ensure, prior to placing their systems on the market, that those systems conform with certain requirements, and to complete a number of other tasks, including registering AI systems on an EU database, having an appropriate quality management system in place, drawing up the system's technical documentation, and keeping automatically generated logs. Following this, the system receives its CE mark, which enables distribution throughout the EU (Article 19).
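
The sketch below condenses these pre-market tasks into a simple checklist. The task names are hypothetical labels for the obligations described above, not terms drawn from the Act itself.

```python
# Hypothetical pre-market checklist for a provider of a high-risk AI system.
PRE_MARKET_TASKS = {
    "conformity_with_essential_requirements": False,
    "registered_in_eu_database": False,
    "quality_management_system_in_place": False,
    "technical_documentation_drawn_up": False,
    "automatic_log_keeping_enabled": False,
}

def ready_for_ce_mark(tasks: dict[str, bool]) -> bool:
    """CE marking (Article 19) presupposes all pre-market tasks are complete."""
    return all(tasks.values())
```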

Providers, in a general sense, will only have to demonstrate conformity by an 'assessment based on internal control,' i.e., self-certification (Article 43(1)(a)). Providers self-assess that their quality management system, technical documentation, and post-market monitoring plan align with the essential requirements. They can do this either through their own customized plans for conformity or, far more probably, by adopting a relevant harmonized technical standard. Currently, only a subset of high-risk AI systems must make use of a third-party body - a 'notified body' - to externally audit their conformity; a simplified sketch of this routing logic follows the list below.

In this context, the systems in question are:

  • AI systems for biometric identification or categorization of natural persons (Article 43(1)), but only if no harmonized technical standard covering them exists, which is unlikely to be the case.
  • AI systems already regulated under existing NLF or other EU laws, listed in Annex II, where that legislation already demands a notified body be involved (Article 43(3)).
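
As a rough illustration of the routing described above, the hypothetical helper below maps a system's characteristics to a conformity assessment route. The parameter names are assumptions made for illustration; a real determination requires full legal analysis.

```python
def conformity_route(is_biometric_system: bool,
                     harmonized_standard_exists: bool,
                     annex_ii_notified_body_required: bool) -> str:
    """Return the conformity assessment route suggested by Articles 43(1) and 43(3)."""
    if annex_ii_notified_body_required:
        # Sectoral NLF legislation already demands a notified body (Article 43(3)).
        return "third-party assessment by a notified body"
    if is_biometric_system and not harmonized_standard_exists:
        # Biometric systems need external audit only absent a harmonized standard.
        return "third-party assessment by a notified body"
    return "self-assessment based on internal control (Article 43(1)(a))"

print(conformity_route(is_biometric_system=False,
                       harmonized_standard_exists=True,
                       annex_ii_notified_body_required=False))
```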

How do providers self-certify?

The AI Act holds that harmonized technical standards for high-risk AI will be created by technical committees. These harmonized standards play an important role in EU legislation by turning what are at times vague essential requirements into concrete technical requirements. A conceptualization of these standards is offered by the Future of Life Institute (FLI).

It is these standards that will specify, for example, what the 'suitable risk management measures' mentioned in the AI Act include. They are standards specifically designed to support EU legislation, and adhering to them carries a 'presumption of conformity' with the essential requirements.

Not all standards developed in the EU are harmonized standards, only those intended to support EU legislation. High-risk AI systems that self-certify as conforming with such standards are then presumed to meet the requirements under Article 40. However, providers can disregard these standards and opt instead to justify that they have adopted technical solutions that are at least equivalent.

Post-market requirements

Providers are required to 'establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.' The monitoring system will 'collect, document and analyse relevant data provided by users or collected… throughout their lifetime.' Users (i.e., deployers) are also mandated to monitor systems 'on the basis of the instructions of use' and report new risks, serious incidents, or 'malfunctioning' (Article 29(4)).

Providers must report serious incidents and malfunctioning to the relevant Market Surveillance Authority (MSA) within 15 days, at the latest, of becoming aware of them (Article 62). Member States can appoint national supervisory authorities, which by default act as MSAs (Article 59). However, in some cases, other bodies such as Data Protection Authorities are likely to assume the role.
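
A minimal sketch of the 15-day reporting window follows, assuming a simple calendar-day reading of the deadline (the Act's precise computation of time may differ):

```python
from datetime import date, timedelta

REPORTING_WINDOW = timedelta(days=15)  # outer limit under Article 62

def report_due_by(became_aware_on: date) -> date:
    """Latest date by which a serious incident must be reported to the MSA."""
    return became_aware_on + REPORTING_WINDOW

def is_report_overdue(became_aware_on: date, today: date) -> bool:
    return today > report_due_by(became_aware_on)

print(report_due_by(date(2024, 3, 1)))  # 2024-03-16
```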

Fines for non-compliance

The AI Act sets out a strict enforcement regime for non-compliance. There are three notional levels of non-compliance, each with significant financial penalties. Depending on the level of violation (in line with the risk-based approach), the Act applies the following penalties:

  • Breach of AI Act prohibitions: fines up to €35 million or 7% of total worldwide annual turnover (revenue), whichever is higher.
  • Non-compliance with the obligations set out for providers of high-risk AI systems, authorized representatives, importers, distributors, users, or notified bodies: fines up to €15 million or 3% of total worldwide annual turnover (revenue), whichever is higher.
  • Supply of incorrect or misleading information to notified bodies or national competent authorities in reply to a request: fines up to €7.5 million or 1.5% of total worldwide annual turnover (revenue), whichever is higher.

In the case of small and medium-sized enterprises (SMEs), the same fines apply, but capped at whichever of the two amounts is lower.
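
For illustration, the hypothetical helper below computes the maximum fine for each tier, applying the 'whichever is higher' rule (or 'whichever is lower' for SMEs). The tier labels are assumptions chosen for readability, not terms from the Act.

```python
def applicable_fine(tier: str, worldwide_turnover_eur: float,
                    is_sme: bool = False) -> float:
    """Maximum fine for a violation tier: fixed cap vs. share of turnover."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "provider_obligations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.015),
    }
    fixed_cap, pct = tiers[tier]
    turnover_based = pct * worldwide_turnover_eur
    # Standard rule: whichever is higher; SMEs: whichever is lower.
    return min(fixed_cap, turnover_based) if is_sme else max(fixed_cap, turnover_based)

# Example: a large provider breaching a prohibition, with €1 billion turnover.
print(applicable_fine("prohibited_practice", 1_000_000_000))  # 70,000,000.0
```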

Conclusion

Practical considerations in conformity assessments

Providers must ensure pre-market conformity for AI systems by self-certifying adherence to specified requirements, utilizing internal controls or third-party audits for high-risk cases. Harmonized technical standards, developed by EU committees, play a crucial role, providing concrete guidelines for compliance. Post-market, providers must establish proportionate monitoring systems, collecting and analyzing user-provided data throughout the AI system's lifespan.

Challenges in meeting obligations

Meeting the AI Act requirements poses multifaceted challenges for providers. Navigating the complex regulatory landscape demands a nuanced understanding of obligations, extending beyond mere legal compliance to strategic imperatives. Providers must establish robust quality management systems, adhere to essential requirements, and engage in pre-market conformity assessments. The reliance on harmonized technical standards introduces complexity, requiring alignment or justification for alternative technical solutions. Post-market obligations involve proportional monitoring and collaboration with MSAs. Additionally, the stringent enforcement regime, with substantial fines linked to violation severity, necessitates meticulous adherence, adding financial and compliance pressure for providers, particularly in the dynamic field of AI.

Recap and emphasis on the importance of compliance

In summary, providers are urged to embrace a holistic approach to compliance, transcending the mere fulfilment of legal obligations. The key practical considerations - alignment with regulatory standards, proactive resolution of implementation challenges, meticulous documentation, and continuous monitoring - form the pillars of a robust AI Act compliance framework. Compliance, in essence, is not just a legal requirement; it is a strategic imperative that underpins ethical business conduct and fosters industry leadership.

Call to action for providers

As custodians of ethical business practices, providers are called upon to prioritize and navigate their obligations with unwavering commitment. The importance of compliance reverberates not only within legal frameworks but throughout the broader business landscape. Providers must recognize that compliance is not a static obligation but an ongoing commitment to excellence. Therefore, the call to action is clear - prioritize compliance, integrate it into the organizational DNA, and proactively navigate the intricate terrain of obligations. In doing so, providers not only safeguard their interests but contribute to a business ecosystem built on transparency, trust, and sustainable growth.

Sean Musch Co-CEO/CFO
[email protected]
Michael Charles Borrelli Co-CEO/COO
[email protected]
AI & Partners, Amsterdam

Charles Kerrigan Partner
[email protected]
CMS UK, London
