
UK: Explaining decisions made with AI: Part 3

Organisations making use of artificial intelligence ('AI') systems must carefully consider their compliance strategy for communicating AI-powered decisions to data subjects, as the best approach will be highly context specific. In this third part of a series on guidance published by the UK Information Commissioner's Office ('ICO') and the Alan Turing Institute, 'Explaining Decisions Made with AI' ('the Guidance'), Bridget Treacy and Olivia Lee, Partner and Associate respectively at Hunton Andrews Kurth LLP, build on the types of explanation explored in Part 2 of the series and examine the practical steps organisations can take to 'design and deploy appropriately explainable AI systems.' They highlight that developing AI models with explainability in mind can improve transparency and accountability, as well as help ensure compliance with data protection laws, not just at the point of deployment of the system but from the point of data collection onwards.


As has been discussed previously, determining what information to provide, and how to communicate it meaningfully to all relevant constituents, is a key compliance challenge for operators of AI systems. Part 1 of this series provided an overview of key data protection requirements associated with AI, in particular where systems use automated decision making, including profiling, that produces legal or similarly significant effects. Where automated decision making is used, organisations are required to provide individuals with meaningful information about both the logic involved and the significance and envisaged consequences of the automated processing for them. The Guidance proposes six key types of explanation, discussed in detail in Part 2 - the rationale, responsibility, data, fairness, safety and performance, and impact explanations. These can be used individually or in combination to ensure that the information provided to data subjects is clear, meaningful, and understandable, as well as satisfying legal requirements. The Guidance highlights that elements of the rationale and responsibility explanations are likely to be required in most instances, although the information provided under one explanation heading may overlap with other types of explanation.

Selecting appropriate types of explanation

Determining which types of explanations to use can be extremely challenging. The Guidance identifies five key contextual factors that can assist, namely:

  • Domain: the sector or setting in which AI is deployed.
  • Impact of the AI system: consider the impact on both individuals and society as a whole.
  • Nature of the data used by the AI system: type, sensitivity, and source of the data and whether or not the data is malleable (i.e. whether it can be changed through behaviour).
  • Urgency: will a decision be taken quickly using the AI system or will there be time to reflect on the output? Should some types of explanation be prioritised and delivered quickly?
  • Audience: who is the intended recipient of the explanation, what do they need to know, and how can this best be communicated to aid understanding?

The Guidance suggests that the domain factor will likely be the most crucial consideration. In scenarios in which bias and discrimination are a particular concern for individuals, such as in the criminal justice system or in relation to access to higher education, the fairness explanation is likely to be key. Those affected by the outcome need to be reassured that the AI system will operate fairly in processing their data.

Conversely, in a domain in which the potential impact on data subjects is lower, the rationale and responsibility explanations may be more appropriate. The Guidance emphasises, however, that even in sectors in which discrimination appears to pose less of a risk, AI operators should be sensitive when seeking to target specific demographics or utilise existing stereotypes, for example through targeted advertising, as this raises potential societal impacts. Providing a fairness explanation may be appropriate even when the domain appears to be low risk.

The fairness, safety and performance, and impact explanations will be appropriate for situations in which AI-assisted decisions could have a high impact, such as in the context of medical or health-related decisions. Prioritising these particular explanations may reassure those subject to high-impact determinations that decisions have been taken fairly and with due regard to their likely impact. Where the output of the AI system is subjective and therefore more susceptible to challenge, the Guidance suggests that the rationale and responsibility explanations will also be useful. When considering the likely impact, the Guidance recommends that AI operators select the relevant explanation approach on a case-by-case basis, ensuring that the potential impact of the AI system's decision is fully understood.
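
By way of illustration only, the following Python sketch shows how an organisation might encode this kind of case-by-case selection, mapping the contextual factors discussed above to the explanation types likely to be prioritised. The factor values and rules are simplified assumptions drawn from the discussion in this section, not part of the Guidance itself.

# Illustrative sketch only: a simplified mapping of the Guidance's contextual
# factors to the explanation types an organisation might prioritise. The factor
# values and rules below are assumptions for illustration, not part of the Guidance.

def prioritise_explanations(domain_risk, impact, data_is_sensitive, urgent):
    """Return a rough, ordered list of explanation types to prioritise."""
    explanations = []

    # Bias-sensitive domains (e.g. criminal justice, access to higher education)
    # and sensitive data point towards the fairness explanation.
    if domain_risk == "bias-sensitive" or data_is_sensitive:
        explanations.append("fairness")

    # High-impact decisions (e.g. health) call for the safety and performance,
    # and impact explanations.
    if impact == "high":
        explanations += ["safety and performance", "impact"]

    # Rationale and responsibility are likely to be needed in most instances.
    explanations += ["rationale", "responsibility"]

    # Urgent decisions: deliver only the highest-priority explanations up front.
    if urgent:
        explanations = explanations[:3]

    # Remove duplicates while preserving priority order.
    return list(dict.fromkeys(explanations))


print(prioritise_explanations("bias-sensitive", "high", True, False))
# ['fairness', 'safety and performance', 'impact', 'rationale', 'responsibility']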

Next steps

The selection of an appropriate explanation, or combination of explanation types, is the first step recommended by the Guidance, which points to the benefits of developing AI systems in an 'explanation-aware' fashion, rather than seeking to explain a system once it has been built. Viewed through a data protection lens, this approach embodies Data Protection by Design and by Default. It is also about planning ahead: given that an explanation will be required for legal compliance, the nature, scope, and content of the explanation should be a consideration from the outset. As the Guidance notes, the way in which data is collected and pre-processed will have a bearing on the quality of the explanation that is ultimately given, as will an understanding of the system's design. As an example, when using the fairness explanation, organisations must consider how they will demonstrate that the data collected was representative, or whether certain factors should be weighted before processing in order to achieve a fair result. If using the impact explanation, a Data Protection Impact Assessment undertaken in relation to the AI system at the outset of the project may help determine relevant impacts to be included in the explanation itself.

Organisations must also focus on transparency when building or selecting an AI system; this is important not merely for the compliance reasons discussed in Part 1, but also to reassure consumers that their personal data is used responsibly, thereby increasing trust. The chosen model should be capable of explanation, allowing the extraction of the information needed to inform that explanation. By way of example, the Guidance suggests selecting an optimally interpretable model when using data that relates to demographics, given the potential for discrimination, whereas for less risky data, or where the AI system will be used purely for scientific purposes, a 'black box' model that limits information extraction may be sufficient.
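
As a purely illustrative sketch (assuming scikit-learn and an invented, simplified dataset; the Guidance does not prescribe any particular model or library), an interpretable model such as a logistic regression allows the information needed for a rationale explanation to be read directly from its weights:

# Illustrative sketch only: choosing an interpretable model so that
# explanation-relevant information can be extracted later. Assumes
# scikit-learn and an invented toy dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is an applicant; the features and
# outcomes below are invented for illustration.
feature_names = ["income_thousands", "years_at_address", "prior_defaults"]
X = np.array([[30, 2, 1], [55, 5, 0], [42, 1, 2], [75, 8, 0]])
y = np.array([0, 1, 0, 1])  # 1 = application approved in the training data

model = LogisticRegression().fit(X, y)

# Because the model is linear, its weights can feed a rationale explanation:
# the sign and size of each weight show how a feature pushed the decision.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {weight:+.3f}")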

Once the relevant information has been extracted, organisations need to be able to translate what may be complex and obscure information into an explanation that is practically understandable to its audience. The Guidance suggests a number of approaches that may assist with this translation process, including visualisation media, graphics, and summary tables. The Guidance also highlights that this will require judgment on the explainer's part, both in identifying which factors assessed by the AI were relevant or determinative to its output, and in ensuring sensitivity to the data subject's specific circumstances where individuals are involved. The Guidance states that '[d]ecision recipients should be able to easily understand how the statistical result has been applied to their particular case, and why the implementer assessed the outcome as they did.' It is important, therefore, that those assigned to oversee the functioning of the AI system have sufficient expertise to make sense of its output.
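
Again purely by way of illustration (the feature contributions, wording, and layout below are invented, not taken from the Guidance), extracted information might be translated into a short, plain-language summary table for the decision recipient:

# Illustrative sketch of translating extracted model information into a plain
# summary table for a decision recipient. The contributions and wording are
# hypothetical values used for illustration only.
contributions = {
    "income": +1.8,
    "years at current address": +0.4,
    "prior defaults": -2.1,
}

print(f"{'Factor':<26} | Effect on your result")
print("-" * 27 + "+" + "-" * 22)
for factor, score in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    direction = "counted in your favour" if score > 0 else "counted against you"
    print(f"{factor:<26} | {direction}")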

In line with this, implementers and explainers of AI systems require adequate training and preparation in order to use the AI system responsibly and explain its outcomes when required. Where AI systems are used to assist decision making, those using them should be made aware of the systems' limitations and the importance of using their own judgment to sense-check the output. Implementers must also be aware of the potential for bias, whether towards over-reliance on the AI system's output or towards distrust of it, and instructed on how best to address this.

Finally, when it comes to the actual delivery of the explanation, AI users need to consider the best medium. That may be a standard privacy notice, but in certain cases delivery of the information in person may even be appropriate. A layered approach can also be used, providing access to key information upfront and making further details available if desired, to avoid overloading a recipient with technical detail. The Guidance recommends reconsidering the contextual factors highlighted above in determining the most appropriate delivery method.
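
As a final, purely illustrative sketch (the structure and wording are assumptions, not taken from the Guidance), a layered explanation might pair a short summary with further detail that is only surfaced if the recipient asks for it:

# Illustrative sketch of a layered explanation: a short summary up front, with
# further detail available on request. Content and structure are invented.
layered_explanation = {
    "summary": "Your application was declined, mainly because of prior defaults.",
    "detail": {
        "rationale": "The system weighs income, address history and prior "
                     "defaults; prior defaults counted most heavily against you.",
        "responsibility": "A member of staff reviewed the system's recommendation "
                          "before the final decision and can be contacted for a review.",
        "fairness": "The system is tested periodically for disparities between "
                    "groups; a summary of the latest results is available on request.",
    },
}

# Show only the top layer by default; reveal the detail if the recipient asks.
print(layered_explanation["summary"])
wants_detail = True  # e.g. the recipient clicks 'tell me more'
if wants_detail:
    for heading, text in layered_explanation["detail"].items():
        print(f"\n{heading.title()}:\n{text}")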

The Guidance notes that the process outlined above may not be linear, and that organisations may wish to develop their own procedures for implementing explainability rather than follow the steps described above. The Guidance also contains more in-depth technical guidance prepared specifically for those involved in model selection, and highlights the specific roles and teams within an organisation that are likely to play a part in the explanation process, as well as the policies and documentation that should be implemented to support it; this latter section is aimed at senior management.

At a time when there is both increased awareness of the benefits of AI systems and increased suspicion and scrutiny from individuals, particularly in relation to their data protection rights, the Guidance offers a welcome and multilayered approach to improving transparency and holding organisations to account. Explainability amounts to more than merely meeting a legal requirement; it requires developers and users of AI systems to actively understand how their systems operate and to be accountable for them. As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough." The Guidance is a valuable contribution to the complex task of improving transparency and accountability in AI.

Bridget Treacy, Partner
[email protected]
Olivia Lee, Associate
[email protected]
Hunton Andrews Kurth LLP, London