
EU: Mitigating the risks of the EU's high-risk AI systems requirements

In this Insight article, Iain Borner, Chief Executive Officer at The Data Privacy Group, delves into the transformative impact of the EU Artificial Intelligence Act (AI Act), which establishes a regulatory framework aimed at fostering trustworthy artificial intelligence (AI) aligned with European values. With a focus on high-risk AI systems, the AI Act introduces mandatory compliance processes and provisions, setting a precedent for ethical innovation that prioritizes people's rights and safety.


Introduction

The AI Act is a landmark piece of legislation that aims to create a harmonized regulatory framework for AI across the EU. Proposed by the European Commission in April 2021, the AI Act establishes rules and guidelines for the development, placing on the market, and use of AI systems.

The overarching goal is to foster trustworthy AI that respects fundamental rights and aligns with EU values. As the first-of-its-kind regulatory framework for AI globally, the AI Act serves as a model for ethical and trustworthy innovation that puts people first. It seeks to minimize the risks and potential harms of certain AI applications while maximizing their benefits for individuals and society.

The AI Act categorizes AI systems based on the level of risk they pose, with additional rules and requirements applying to what it defines as high-risk AI systems. By concentrating its strictest requirements on high-risk applications such as facial recognition and medical AI, the EU aims to strike a balance between promoting innovation and protecting rights. The new law establishes mandatory compliance processes for organizations that use or provide high-risk systems.

What are high-risk AI systems?

The AI Act defines high-risk AI systems as those that pose significant risks to the health, safety, and fundamental rights of individuals. Specifically, an AI system is classified as high-risk if it meets either of the following criteria:

  • it is intended to be used as a safety component in a product covered by existing EU harmonization legislation for products such as machinery, toys, elevators, and medical devices; or
  • it is used in one of the following high-risk areas:
    • biometric identification and categorization of natural persons;
    • management and operation of critical infrastructure;
    • education or vocational training;
    • employment, workers management, and access to self-employment;
    • access to and enjoyment of essential private services and public services and benefits;
    • law enforcement;
    • migration, asylum, and border control management; or
    • administration of justice and democratic processes.

The key factors are whether the AI system serves as a safety component in a regulated product or whether it operates in one of the listed high-impact areas that carry significant risks for rights and safety. If either condition applies, the system will be designated high-risk under the EU regulations.

Key provisions for high-risk AI systems

The EU AI Act introduces several key provisions that apply to providers of high-risk AI systems:

Risk management: Organizations must implement risk management systems to analyze, evaluate, mitigate, and document the potential risks associated with their high-risk AI systems. This includes risks to health, safety, and fundamental rights, including the risk of discrimination.
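To make this concrete, the sketch below shows one way a provider might record and prioritize risks in a simple risk register. The field names, scoring scale, and example entry are illustrative assumptions, not terminology taken from the AI Act.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RiskRecord:
        """One entry in a hypothetical risk register for a high-risk AI system."""
        description: str     # what could go wrong
        affected_area: str   # health, safety, or a fundamental right
        likelihood: int      # 1 (rare) to 5 (almost certain) - illustrative scale
        severity: int        # 1 (negligible) to 5 (critical) - illustrative scale
        mitigation: str      # planned or implemented control
        owner: str           # person or team accountable for the risk
        last_reviewed: date = field(default_factory=date.today)

        @property
        def score(self) -> int:
            # Simple likelihood x severity score used to prioritize mitigation work.
            return self.likelihood * self.severity

    register = [
        RiskRecord(
            description="Training data under-represents older applicants",
            affected_area="non-discrimination",
            likelihood=3,
            severity=4,
            mitigation="Re-sample training data; add fairness tests before release",
            owner="ML governance team",
        )
    ]

    # Documenting and periodically re-scoring these records is what turns an
    # ad hoc checklist into an auditable risk management process.
    for risk in sorted(register, key=lambda r: r.score, reverse=True):
        print(risk.score, risk.description)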

Technical documentation: Detailed technical documentation must be maintained on the design, development, training, testing, and continuous monitoring of high-risk AI systems. This documentation enables oversight by regulators.

Transparency: Providers must ensure transparency to users by providing clear information on the AI system's capabilities and limitations. Users must be notified when interacting with an AI system rather than a human.

Human oversight: High-risk AI systems must have appropriate human oversight and control. Humans must be able to interpret the system's outputs and override dangerous or erroneous decisions.

Accuracy, robustness, and cybersecurity: High-risk systems must meet benchmarks for accuracy, robustness, cybersecurity, and overall performance. They must be resilient and secure against threats such as data poisoning or model hacking.

By introducing these provisions, the AI Act aims to ensure high standards for trustworthy AI that respects fundamental rights and aligns with European values. The requirements place accountability on providers of high-risk systems.

Examples of high-risk AI systems

Some examples of AI systems considered high-risk per the AI Act include:

  • Biometric identification systems: Systems using biometric data like facial recognition or gait analysis to identify individuals. This includes systems used for remote biometric identification, which can enable mass surveillance.
  • AI systems used in recruitment and workplace management: AI is increasingly used to screen and evaluate candidates, assess performance, and make promotion or termination decisions. Risks around bias and discrimination exist.
  • AI systems that determine access to education: Algorithms that evaluate students and determine access to educational programs carry risks around bias, transparency, and fairness.
  • AI systems for credit-worthiness assessments: With AI assessing who gets loans and at what rates, risks such as algorithmic bias and lack of transparency arise.
  • AI systems used in law enforcement: Risks around accuracy, bias, and improper use of systems like predictive policing algorithms.
  • AI systems that determine access to essential services like housing: Bias and discrimination risks arise when algorithms make decisions that impact basic needs.
  • Remote biometric identification systems in public spaces: Enables mass surveillance and erosion of privacy when used without proper oversight.

The AI Act aims to ensure high standards around these and other high-risk AI systems.

Risks for organizations using high-risk AI

  • Legal compliance risks: Organizations deploying high-risk AI systems covered by the AI Act, such as facial recognition, risk falling short of its requirements. This could lead to substantial fines or even bans on the technology if obligations such as human oversight, risk management systems, and adequate transparency are not sufficiently fulfilled.
  • Fines and penalties: Violations of the AI Act's provisions can result in severe financial penalties. For example, companies could face fines of up to 6% of worldwide annual turnover for the most serious infringements. These hefty fines provide a strong incentive for rigorous compliance.
  • System failures and defects: The complex nature of many high-risk AI systems also introduces the potential for errors, biases, and unexpected failures. Without sufficient testing, risk assessment, and ongoing monitoring, defects in high-risk systems could lead to safety risks or discrimination against certain groups.
  • Reputational risks: Any incidents stemming from non-compliant or defective high-risk AI systems also pose major reputational risks. Public awareness and concerns around AI ethics are growing. Any organization tied to problematic AI applications risks significant backlash and loss of trust.

Mitigation strategy 1: Establish robust compliance frameworks and responsible AI practices

Organizations utilizing high-risk AI systems face significant compliance obligations and risks under the AI Act. However, by taking a proactive approach and implementing robust frameworks, auditing, and responsible AI practices, they can mitigate these risks. 

Some key steps include:

  • adopting an ethical AI framework that aligns with EU values and principles for trustworthy AI - this provides guidance on developing and deploying AI responsibly;
  • implementing internal reviews and impact assessments for high-risk systems - rigorously evaluate AI systems pre- and post-deployment for biases, errors, and potential harms;
  • establishing model risk management programs - continuously monitor AI systems in production to identify issues and enable rapid response (a simple monitoring sketch follows this list);
  • implementing mechanisms for transparency, explainability, and human oversight over high-risk systems - this builds trust and enables accountability;
  • auditing AI systems regularly for technical robustness, security, fairness, and compliance with regulations - identify and resolve gaps;
  • training developers and users of AI systems on responsible and ethical practices - promote AI literacy and understanding of compliance obligations; and
  • partnering with third-party auditors to provide independent validation of compliance for high-risk systems.
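As a rough illustration of the continuous monitoring step above, the sketch below compares the share of positive decisions in recent production traffic against a reference window and raises an alert when the gap exceeds a threshold. The tolerance, window sizes, and simulated data are arbitrary assumptions for the example; real monitoring programs track many more signals.

    import numpy as np

    def positive_rate(decisions: np.ndarray) -> float:
        """Share of positive (e.g. 'approve') decisions in a batch."""
        return float(np.mean(decisions))

    def drift_alert(reference: np.ndarray, recent: np.ndarray, tolerance: float = 0.05) -> bool:
        """Flag a possible shift in system behaviour when the recent positive
        rate moves more than `tolerance` away from the reference rate."""
        return abs(positive_rate(recent) - positive_rate(reference)) > tolerance

    # Illustrative data: 1 = positive decision, 0 = negative decision.
    rng = np.random.default_rng(0)
    reference_window = rng.binomial(1, 0.30, size=5_000)  # behaviour at validation time
    recent_window = rng.binomial(1, 0.40, size=1_000)     # last production window

    if drift_alert(reference_window, recent_window):
        print("Alert: decision rate has drifted - trigger review and re-assessment")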

By taking these proactive steps, organizations can demonstrate adherence to EU regulations for high-risk AI systems and build trust with regulators and the public.

Mitigation strategy 2: Training, testing, validation, and human oversight

AI systems, especially high-risk ones, should be thoroughly tested before deployment to ensure they behave as intended. This requires extensive training data to build the system, testing protocols to validate performance, and human oversight to monitor outputs.

Training data

Sufficient, high-quality, and unbiased training data is essential for any AI system. Training data should cover all potential use cases and include diversity across gender, race, age, culture, etc. Without comprehensive training data, the system risks behaving unpredictably or exhibiting bias.
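As a minimal illustration, the pandas sketch below computes how each demographic group is represented in a training set and flags groups falling below an assumed minimum share. The column names, data, and threshold are hypothetical; under-representation is a prompt for further data collection or re-weighting, not proof of bias on its own.

    import pandas as pd

    # Hypothetical training set with a protected attribute column.
    train = pd.DataFrame({
        "age_band": ["18-30", "18-30", "31-50", "31-50", "31-50", "51+", "18-30", "31-50"],
        "label":    [1, 0, 1, 1, 0, 0, 1, 0],
    })

    MIN_SHARE = 0.15  # assumed minimum acceptable share per group

    shares = train["age_band"].value_counts(normalize=True)
    under_represented = shares[shares < MIN_SHARE]

    print(shares)
    if not under_represented.empty:
        print("Groups below the minimum share:", list(under_represented.index))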

Testing protocols 

Rigorous testing protocols should be established to validate the AI system's performance across various conditions. Testing should cover expected use cases, edge cases, stress tests, and potential failure modes. Testing early and often is crucial to identify any undesirable behavior before deployment.
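The sketch below illustrates one way to encode such a protocol as automated checks: suites of expected-case and edge-case inputs are run against the model, and the run fails if accuracy falls below an assumed threshold. The toy model, test cases, and thresholds are placeholders, not requirements from the AI Act.

    from typing import Callable, List, Tuple

    def run_suite(predict: Callable[[dict], int],
                  cases: List[Tuple[dict, int]],
                  min_accuracy: float) -> bool:
        """Run a test suite and return True only if accuracy meets the bar."""
        correct = sum(1 for features, expected in cases if predict(features) == expected)
        accuracy = correct / len(cases)
        print(f"accuracy={accuracy:.2f} (required {min_accuracy:.2f})")
        return accuracy >= min_accuracy

    # Placeholder model: approve when income comfortably covers the requested loan.
    def toy_model(features: dict) -> int:
        return int(features["income"] > 3 * features["loan"])

    expected_cases = [({"income": 90_000, "loan": 10_000}, 1),
                      ({"income": 20_000, "loan": 15_000}, 0)]
    edge_cases = [({"income": 0, "loan": 0}, 0),            # degenerate input
                  ({"income": 30_000, "loan": 10_000}, 0)]  # boundary: exactly 3x

    # Different suites can carry different bars, e.g. stricter ones for edge cases.
    assert run_suite(toy_model, expected_cases, min_accuracy=0.95)
    assert run_suite(toy_model, edge_cases, min_accuracy=0.75)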

Independent validation

In addition to internal testing, third-party validation helps ensure impartiality. Independent auditors can verify that testing protocols were executed appropriately and provide an unbiased assessment of the AI system's capabilities and limitations.

Human oversight

Even the most advanced AI requires some level of human oversight. Humans must remain 'in the loop' to monitor system outputs, promptly detect errors, override incorrect decisions, and continuously evaluate performance after deployment. Proper training and staffing are necessary to enable effective human oversight of AI systems.
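A common pattern for keeping humans 'in the loop' is to route low-confidence or high-impact outputs to a reviewer who can confirm or override them. The sketch below shows that routing logic; the confidence threshold, field names, and reviewer behaviour are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        subject_id: str
        model_output: str      # e.g. "reject"
        confidence: float      # model's own confidence estimate, 0..1
        final_output: str = ""
        decided_by: str = ""

    CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off below which a human must review

    def resolve(decision: Decision, human_review) -> Decision:
        """Accept the model output only when confidence is high; otherwise defer to a human."""
        if decision.confidence >= CONFIDENCE_THRESHOLD:
            decision.final_output, decision.decided_by = decision.model_output, "model"
        else:
            decision.final_output, decision.decided_by = human_review(decision), "human reviewer"
        return decision

    # Illustrative reviewer that overrides a borderline rejection.
    reviewer = lambda d: "approve" if d.model_output == "reject" else d.model_output
    print(resolve(Decision("case-42", "reject", confidence=0.55), reviewer))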

A combination of extensive training data, testing, validation, and human oversight helps mitigate risks and ensures high-risk AI systems operate safely and as intended. This multi-layered approach provides essential safeguards for organizations deploying AI.

Mitigation strategy 3: Risk management, impact assessments, and documentation 

Implementing a strong risk management framework is critical for organizations using high-risk AI systems. This includes conducting in-depth impact assessments to identify potential risks across areas like fundamental rights, discrimination, data quality, and model robustness. Documenting these impact assessments shows regulators that the organization has taken a proactive approach to identifying and mitigating risks. 

Specifically, organizations should:

  • assign responsibility for risk management to a dedicated role or team - this demonstrates accountability;
  • conduct impact assessments on all high-risk AI systems before deployment and at regular intervals post-deployment - assess discriminatory impacts, effects on rights such as privacy, impacts on children or vulnerable groups, and other key areas;
  • maintain extensive documentation on impact assessments, risk identification, mitigation strategies, and outcomes - thorough documentation shows regulators that the organization is taking a thoughtful approach;
  • establish processes for continuously monitoring risks and biases in high-risk AI systems - this allows emerging risks to be identified and addressed proactively; and
  • implement mechanisms for affected parties to inquire about automated decisions made by high-risk AI systems - this promotes transparency.

Taking a rigorous approach to risk management, impact assessments, and documentation shows regulators that an organization is committed to deploying high-risk AI responsibly and to addressing risks proactively. This responsible approach is key for compliance.

The role of trust and privacy

As organizations adopt high-risk AI systems, building user trust through transparency and ethical practices is crucial. Consumers want to understand how AI systems work and how they impact their lives. They need to trust that AI is fair, accurate, and unbiased.

Organizations have a responsibility to ensure high-risk AI upholds privacy rights. Using techniques like differential privacy and federated learning helps protect user data while delivering AI benefits. Continuously evaluating AI for unwanted bias and conducting impact assessments builds accountability. 
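To give a flavour of one of these techniques, the sketch below releases a differentially private count by adding Laplace noise calibrated to the query's sensitivity and a chosen privacy budget epsilon. The epsilon value and the data are arbitrary examples; real deployments need careful privacy-budget accounting across all released statistics.

    import numpy as np

    def dp_count(values: np.ndarray, predicate, epsilon: float) -> float:
        """Release a count with the Laplace mechanism.

        A counting query changes by at most 1 when one person's record is added
        or removed, so its sensitivity is 1 and the noise scale is 1/epsilon."""
        true_count = int(np.sum(predicate(values)))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    ages = np.array([23, 45, 37, 61, 29, 52, 41, 34])
    # Smaller epsilon = more noise = stronger privacy; 0.5 is just an example value.
    print(dp_count(ages, lambda a: a > 40, epsilon=0.5))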

Implementing tools to provide explainability into AI model decisions enhances transparency. Creating ethical review boards and getting diverse input shows the organization values multiple perspectives. Adopting frameworks like the EU's Ethics Guidelines for Trustworthy AI demonstrates best practices.
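As one simple example of an explainability tool, the sketch below estimates global feature importance by shuffling each feature and measuring how much a fitted model's accuracy drops; scikit-learn offers a ready-made version of this idea in sklearn.inspection.permutation_importance. The data here is synthetic and the model is a stand-in for whatever system is being explained.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Synthetic data: the label depends on feature 0, while feature 1 is pure noise.
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(int)

    model = LogisticRegression().fit(X, y)
    baseline = model.score(X, y)

    for j, name in enumerate(["feature_0", "feature_1"]):
        X_shuffled = X.copy()
        X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
        drop = baseline - model.score(X_shuffled, y)
        # A large accuracy drop means the model relies heavily on this feature.
        print(f"{name}: accuracy drop {drop:.3f}")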

The public will be more supportive of AI systems they understand and see as trustworthy. Organizations that make trust and privacy core principles will be poised for AI success.

Conclusion

The AI Act introduces new requirements for organizations developing or using high-risk AI systems. Key provisions include mandatory risk assessments, transparency requirements, human oversight measures, and more. While this increases obligations for providers of high-risk AI, the goal is to ensure these systems can be trusted and do not infringe on rights and safety.

As the AI Act continues through the legislative process, organizations using high-risk AI should begin preparing now: conduct thorough risk assessments of AI systems and data practices, implement strategies to minimize bias and errors, and increase transparency for users. Adopting responsible AI practices promotes fairness, accountability, and compliance.

Organizations looking to get started should consider leveraging privacy solutions to streamline the assessment of AI risks and the implementation of critical privacy and governance controls. With the right preparation, companies can follow the spirit of the AI Act today and build trustworthy AI systems that benefit society. Through collaboration between regulators, technology providers, and enterprises, the EU can lead the way in ethical AI development and use.

Iain Borner Chief Executive Officer
[email protected]
The Data Privacy Group Ltd., UK
