Hong Kong: PCPD publishes AI Model Personal Data Protection Framework
On June 11, 2024, the Office of the Privacy Commissioner for Personal Data (PCPD) published the Artificial Intelligence: Model Personal Data Protection Framework.
In particular, the Framework provides recommendations and best practices regarding the governance of artificial intelligence (AI) for the protection of personal data when organizations procure, implement, and use any type of AI system, including generative AI. The PCPD noted the Framework draws on the Guidance on the Ethical Development and Use of AI published by the PCPD in 2021 (the 2021 Guidance).
What is the scope of the Framework?
For the purposes of the Framework, 'organizations' means organizations that procure AI solutions from third parties and handle personal data when:
- customizing an AI system to improve its performance for a specific domain or use case; and/or
- operating the AI system.
An 'AI supplier' means the AI developers and/or AI vendors that provide AI solutions to such organizations. Organizations that develop AI solutions in-house are recommended to refer to the 2021 Guidance instead.
Centrally, the Framework recommends that organizations comply with the Personal Data (Privacy) Ordinance (Cap. 486), as amended in 2021 (PDPO), when handling personal data in procuring, implementing, and using AI solutions. However, the Framework notes that its recommendations are not exhaustive.
AI Strategy and Governance Program
Firstly, the Framework provides for the establishment of an AI Strategy and Governance Program for the procurement of AI solutions. Central to the AI Strategy is the principle of accountability in implementing and using AI systems. On governance, the Framework addresses the engagement of third parties in procurement, laying out suggested considerations when sourcing vendors.
Risk assessments
Secondly, the Framework details the need for risk assessments to ensure, for example, that AI solutions are used for the specific purpose they were acquired for. Risk assessments should be conducted during the procurement process or when significant updates are made to an existing AI system. Once the risks are identified and evaluated, organizations should adopt appropriate measures commensurate with the risks.
Customization of AI systems
Thirdly, the Framework considers the customization of AI systems, meaning the process of adjusting or adapting pre-trained AI models to meet an organization's purposes in using the AI system. This also covers AI systems that learn and evolve through use. Specifically, the Framework recommends principles to ensure compliance with the PDPO, including data minimization, purpose limitation, data accuracy, data security, and the anonymization of personal data once its original purpose has been fulfilled.
Such steps require continuous monitoring of AI systems because risk factors may change over time, with the Framework noting that human oversight should be exercised to prevent and minimize risks. More specifically, the Framework recommends that organizations formulate an AI Incident Response Plan for the management of AI incidents and risks.
Communications and engagement with stakeholders
Organizations are recommended to be transparent with data subjects on the purpose for which their data is used in relation to AI, the classes of persons to whom data may be transferred, and the organization's policies and practices in relation to personal data and AI. Organizations are also recommended to be similarly transparent with stakeholders on their use of AI, alongside disclosing the results of risk assessments of their AI systems.
In relation to data subject rights specifically, the Framework stipulates that organizations must comply with access and rectification requests under the PDPO, and may engage an AI supplier where necessary to fulfill data subject requests. In support of these rights, the Framework suggests that the decisions and output of AI be explainable, particularly where the use of the AI system may have a significant impact on individuals.