UK: ICO releases best-practice guidance as part of AI auditing framework
The Information Commissioner's Office ('ICO') published, on 30 July 2020, a blog post announcing the issuance of guidance ('the Guidance') on artificial intelligence ('AI') and data protection as part of the development of its wider AI auditing framework ('the Framework'). The Framework provides a methodology for auditing AI systems to ensure that personal data is processed fairly, and comprises three elements: auditing tools and procedures that the ICO will use in its investigations, detailed guidance on AI and data protection, and a toolkit providing practical support to organisations carrying out internal audits of AI systems.
More specifically, the Guidance covers the accountability and governance implications of AI; requirements for ensuring lawfulness, fairness, and transparency in AI systems; assessments for security and data minimisation in AI; and methods for ensuring that individual rights are respected within AI systems. The Guidance is aimed both at those with a compliance focus, such as data protection officers ('DPOs'), and at technology specialists. Furthermore, the Guidance recommends, among other things, conducting a Data Protection Impact Assessment ('DPIA') on the AI system to demonstrate compliance, as well as adopting techniques that support data minimisation and effective AI development, including implementing risk management practices and first mapping out the machine learning processes in which personal data might be used.