
UK: Explaining decisions made with AI: Part 1

Artificial intelligence ('AI') is now a commonly used technology, and its widespread adoption increasingly raises the question of how to ensure transparency and accountability for the underlying data processing involved in its use. Following the publication of a report commissioned by the UK Government in 2017, the Information Commissioner's Office ('ICO') and the Alan Turing Institute were tasked with developing 'guidance to assist in explaining AI decisions' with a view to improving transparency and accountability. The guidance, 'Explaining decisions made with AI'1 ('the Guidance'), was published in May 2020 and offers organisations a framework for explaining to individuals decisions made using AI systems that process personal data. In the first of a three-part series on the Guidance, Bridget Treacy and Olivia Lee, of Hunton Andrews Kurth LLP, examine some of the data protection challenges associated with using AI systems and discuss how the Guidance helps to highlight some of these challenges, such as ensuring transparency.


The use of personal data in AI systems

The Guidance describes three key phases of AI development during which personal data is likely to be used: training, testing, and deployment. During the training stage, data is fed into the AI system to enable it to identify associations between data points and to build a framework of understanding. This can be achieved through 'supervised' learning, where the AI system is taught to recognise associations between pre-labelled data points (e.g. pictures of animals labelled as such), and to reproduce these patterns using the rules it has learned. Alternatively, the training may involve 'unsupervised' learning, where the AI system is not provided with pre-determined associations but is instead fed a large data set and left to identify patterns, similarities, and anomalies on its own. AI systems can also be taught through 'reinforcement,' whereby the system is either punished or rewarded for the steps it takes to solve a problem, enabling it to develop problem-solving strategies to maximise its rewards. Whichever type of learning is used, the training phase requires a significant amount of data.
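For readers less familiar with these learning paradigms, the following sketch illustrates the distinction in code. It is purely illustrative: the Guidance is technology-neutral and does not prescribe any particular library or technique, so scikit-learn and synthetic data stand in here for a generic AI system and the personal data on which it might be trained.

```python
# Illustrative sketch only: scikit-learn and synthetic data stand in for
# a generic AI system and its training data (assumptions for illustration,
# not part of the Guidance itself).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Training data: feature vectors X with pre-assigned labels y.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# 'Supervised' learning: the system learns the associations between data
# points and their pre-existing labels, and reproduces those patterns.
supervised_model = LogisticRegression().fit(X, y)

# 'Unsupervised' learning: the same data without labels; the system is
# left to identify groupings (here, two clusters) on its own.
unsupervised_model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# 'Reinforcement' learning follows a different loop (an agent is rewarded
# or penalised for the actions it takes) and is omitted here for brevity.
```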

During testing, data is used to check the accuracy of the AI system's understanding. In the deployment phase, data relating to the use case under examination is fed into the AI system, which generates an output in the form of a classification, prediction, or recommendation. This output allows a decision to be made, either by the AI system itself or by a human assisted by the AI system's output.
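Continuing the illustrative sketch above, testing and deployment might look as follows: accuracy is checked against data held back from training, and a new case is then fed in to produce an output that supports a decision. Again, the library, data, and names are assumptions made for illustration only.

```python
# Illustrative sketch only: a held-back test set checks the system's
# accuracy ('testing'), and a new case then yields an output that
# supports a decision ('deployment').
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Testing phase: verify the system's understanding against known answers.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")

# Deployment phase: an unseen case produces a classification that the
# system, or a human it assists, can use to reach a decision.
new_case = X_test[:1]
prediction = model.predict(new_case)[0]
confidence = model.predict_proba(new_case)[0].max()
print(f"Output: class {prediction} (confidence {confidence:.2f})")
```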

Data protection issues raised by AI systems

The use of personal data in AI systems raises a number of data protection compliance issues under the General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR') and the Data Protection Act 2018 ('the Act'). Most obviously, the use of personal data triggers several transparency requirements, such as the notice requirements under Articles 13 and 14 of the GDPR. These provisions require that individuals whose personal data is processed be provided with information about that processing, including the purposes of the processing, the applicable legal basis, and the recipients with whom the data is shared. Organisations that process personal data must also inform individuals of their rights in relation to their personal data, such as the right of access and the right to object to processing.

In addition to the general transparency requirements, Articles 13 and 14 of the GDPR specifically address automated decision-making, including profiling, that produces legal or similarly significant effects (e.g. decisions made or profiling conducted by an AI system). Where automated decision-making is used, the individual must be provided with meaningful information about: (i) the logic involved; and (ii) the significance and envisaged consequences of the automated processing for them.

Individuals also have a qualified right not to be subject to solely automated decisions under Article 22 of the GDPR. Similar provisions are set out in the Act in relation to data processing by law enforcement and intelligence services.

Further, transparency is one of the core data protection principles in Article 5 of the GDPR, which requires that personal data be processed lawfully, fairly, and transparently. Also of relevance are the accountability obligations under Articles 5(2) and 24(1) of the GDPR, which require organisations to demonstrate compliance with the GDPR. In this context, organisations need to be able to show that they have treated individuals in a fair and transparent manner when using AI systems to make decisions about them. The Guidance emphasises that providing explanations of AI-assisted decisions to individuals is one way to demonstrate that the individual has been treated fairly and transparently.

Challenges in ensuring transparency and explainability

Ensuring transparency and explainability can be challenging in the context of AI systems. These systems are often designed to solve problems and spot patterns beyond the capability of humans, and the manner in which they achieve this may not be fully understood by those deploying the AI, let alone explainable to those whose personal data is used. In addition, AI systems may produce unexpected outputs, changing the nature of the processing first envisioned by their designers. As such, providing information that is both comprehensive and comprehensible to individuals may at times amount to a near-impossible task. A balance needs to be struck between providing sufficient information to allow individuals to understand the nature of an AI system's processing activity and overwhelming them with technical detail that, while relevant, may obscure rather than clarify.

The Guidance describes the benefits of pursuing transparency and explainability. Besides the obvious avoidance of enforcement by regulators for non-compliance with data protection law, a comprehensive explanation of data use is likely to reassure consumers that their personal data is used responsibly, increasing consumer trust. At a wider level, better public awareness of how AI systems use personal data may promote more constructive debate and involvement in their design. Better awareness also enables individuals to exercise their rights and places the interests of individuals front-and-centre in the minds of those designing and deploying the technology. Furthermore, better public awareness encourages those designing and deploying AI systems to maintain full oversight of the systems, in order to be able to provide such explanations. As a by-product of this approach, overseers are likely to have better insight into how and in what respects an AI system falls short of expectations, enabling them to remedy deficiencies and address any discriminatory outcomes caused or exacerbated by the AI system.

If interpreted too stringently in the context of AI, however, transparency requirements may also raise some risks. The Guidance notes that incomprehensibly detailed explanations of AI may hinder more than help a layperson's understanding of the system, increasing public distrust. AI systems, particularly proprietary systems, can be extremely valuable to companies. Divulging extensive information regarding their use may risk revealing sensitive commercial or design details. Such information may also equip malicious actors to exploit or manipulate an AI system, disrupting its functioning. There is also a risk that personal data relating to other individuals whose data is processed by the AI system could be exposed if organisations provide overly detailed explanations to individuals.

The Guidance describes several approaches designed to help organisations mitigate these risks and achieve effective explainability and transparency. We will discuss these explainability models in our next Insight.

Bridget Treacy, Partner
[email protected]
Olivia Lee, Associate
[email protected]
Hunton Andrews Kurth LLP, London


1. See: https://ico.org.uk/for-organisations/guide-to-data-protection/key-data-protection-themes/explaining-decisions-made-with-ai/