International: Privacy to AI - building on a strong foundation

The use of artificial intelligence (AI) technologies has exploded in recent years, with businesses eager to harness the power of machine learning and data analytics. However, the rapid adoption of AI has raised significant privacy concerns, leading to increased scrutiny from regulators. In this Insight article, Iain Borner, Chief Executive Officer at The Data Privacy Group, explores the intersection of privacy and AI, unraveling the complexities surrounding responsible implementation in the ever-evolving regulatory landscape.

New privacy regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) now require organizations to consider the privacy implications of any processing of personal data, including AI systems. Meanwhile, laws focused specifically on AI, like the EU's upcoming Artificial Intelligence Act (AI Act), aim to mitigate risks around opaque algorithmic decision-making.

This complex patchwork of regulations presents challenges for companies seeking to implement AI responsibly. Organizations must carefully assess their current privacy programs and controls to ensure they provide an adequate foundation for AI governance.

With some enhancements, existing privacy frameworks can effectively support ethical and compliant AI development. This involves conducting comprehensive Data Protection Impact Assessments (DPIAs) for AI systems, minimizing data use, and designing models with privacy in mind.

By leveraging privacy programs as a starting point, companies can build AI systems that comply with applicable laws, while maintaining public trust through transparency and accountability.

Assessing the current privacy program

The first step in building an ethical and privacy-protective AI program is to conduct a comprehensive assessment of your organization's current privacy program. This will identify existing privacy controls and mechanisms that can be leveraged for AI governance.

Key areas to review include:

Data mapping: Review current data flow mapping documentation to identify all personal data processed by the organization, data subjects, purposes of the processing, legal bases relied on, third-party providers, and retention schedules. Ensure that your organization has full visibility into personal data usage across business units and systems.

Retention schedules: Evaluate if retention schedules remain appropriate, especially for personal data used to train AI systems. Data-minimizing approaches should be considered to limit retention to what is necessary.

Consent mechanisms: Assess consent collection processes to confirm that opt-in consent is obtained for personal data usage in training AI systems. Individual consent may be required in certain jurisdictions. Ensure consent mechanisms allow choice and control.

Lawful bases: Confirm that your organization has established appropriate lawful bases for personal data processed in existing systems. Assess if these remain adequate as use cases evolve to include AI systems. Legitimate interest or public interest legal bases may apply to some AI use cases.

Data protection: Review current technical and organizational measures for securing personal data across your data estate. Identify any gaps that need to be addressed for high-risk AI use cases.

Third parties: Consider whether adequate due diligence and contracts are in place for any third parties accessing personal data, including cloud providers hosting AI environments.

Leveraging existing DPIAs and risk analysis will provide a strong foundation for building an ethical AI program aligned with an organization's overall data governance strategy.
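To make the data-mapping review concrete, many teams capture each processing activity as a structured record that can be queried and kept current. The Python sketch below is purely illustrative; the schema and field names are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProcessingRecord:
    """One entry in a personal-data inventory (hypothetical schema)."""
    system: str                   # application or business unit holding the data
    data_categories: list[str]    # e.g., ["name", "email", "purchase history"]
    data_subjects: str            # e.g., "customers", "employees"
    purpose: str                  # why the data is processed
    legal_basis: str              # e.g., "consent", "legitimate interest"
    third_parties: list[str] = field(default_factory=list)
    retention_days: int = 365     # retention schedule for this data
    last_reviewed: date = field(default_factory=date.today)

# Example entry for an AI training pipeline
record = ProcessingRecord(
    system="recommendation-model-training",
    data_categories=["purchase history", "browsing events"],
    data_subjects="customers",
    purpose="train a product recommendation model",
    legal_basis="legitimate interest",
    third_parties=["cloud ML provider"],
    retention_days=730,
)
print(record.purpose, "->", record.legal_basis)
```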

Identifying AI use cases

When developing an AI program, it is important to first identify potential use cases that align with your organization's goals and values. Consider where AI could provide the greatest benefits or efficiencies within your existing processes and systems. Some potential AI applications to explore include:

  • automating manual or repetitive tasks;
  • extracting insights from unstructured data like documents or images;
  • personalizing customer experiences with recommendations or custom content;
  • improving predictive analytics for forecasting and planning;
  • enhancing cybersecurity threat detection capabilities; 
  • assisting human decision-making to reduce bias and errors;
  • monitoring operations and infrastructure for anomalies; 
  • generating natural language content like summaries or translations;
  • classifying and tagging documents and records for improved search; and
  • automating customer service interactions with chatbots.

The key is to evaluate use cases based on feasibility, costs, and expected business impact or return on investment (ROI). Focus on high-priority needs where AI can augment human capabilities. Document how each use case would work, what data is required, and how AI outcomes would be measured.

Some questions to consider:

  • What tasks or processes are ripe for automation?
  • What data assets could yield new insights with AI?
  • Where could predictive analytics improve planning?
  • What customer experiences could be enhanced?
  • What decisions could AI assist with?

Keeping the focus on responsible AI practices, carefully evaluate the data requirements and ethical implications of each potential use case. Align potential AI applications with your organization's privacy and ethical principles.

Conducting DPIAs

As organizations implement AI systems, conducting robust DPIAs is crucial to assess and mitigate privacy risks. AI technologies can present unique risks compared to traditional data processing, so DPIAs may need to go beyond existing assessments. 

When conducting a DPIA for an AI system, key risks to analyze include:

  • biased or unfair outcomes: Assess whether the training data and algorithms could lead to discrimination against protected groups. Re-examine criteria used by the system;
  • inaccurate predictions: Evaluate potential real-world harm from erroneous outputs. Consider implementing checks and human oversight;
  • transparency concerns: Review whether the AI system's logic and internal processes are sufficiently clear and explainable to users. Obscure AI can undermine trust;
  • data security: Scrutinize whether adequate cybersecurity measures are in place to safeguard training data and AI models;
  • unintended data usage: Verify that the AI system only accesses and processes data for authorized purposes. Audit to prevent data misuse; and
  • legal compliance: Confirm that use cases align with relevant laws and regulations. Assess impact levels to determine notification requirements. 

AI-specific DPIAs allow organizations to probe these unique risks, document justifications, and prescribe controls to mitigate dangers to individuals' rights and freedoms. They provide a key basis for trustworthy AI governance.
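As an illustration of the bias analysis above, a common first-pass check is the disparate impact ratio: the favorable-outcome rate for one group divided by the rate for a reference group. The sketch below is a minimal, hypothetical example assuming binary outcomes and a single group attribute; the four-fifths threshold is a widely used heuristic, not a legal test.

```python
def disparate_impact(outcomes: list[int], groups: list[str],
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates (1 = favorable) between two groups."""
    def rate(group: str) -> float:
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical screening outcomes for two applicant groups
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, protected="B", reference="A")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```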

Minimizing data use

When building an AI system, a key consideration that aligns with privacy principles is minimizing the amount of data that is collected, processed, and stored. Collect only the minimum data truly needed to power the AI system and fulfill its purpose. Evaluate each data element and remove anything that does not directly contribute to improved functionality or outcomes.

Some strategies to minimize data collection and use include:

  • performing data minimization assessments for each AI system to limit data strictly to what is required. Document clear justifications for every data element;
  • anonymizing or pseudonymizing data where possible to remove personally identifiable information. This protects privacy while still allowing analytical insights to be derived;
  • sampling data or using synthetic data sets rather than entire raw data stores. This reduces the volume of real personal data needed;
  • aggregating data or grouping it into larger batches rather than storing granular transaction-level records;
  • establishing retention limits to delete data that is no longer necessary for the AI system after a defined period; and
  • restricting access to only essential personnel through access controls and multi-factor authentication.

The goal is to give the AI model enough data to function properly while limiting extraneous data collection. This aligns with fundamental privacy principles around data minimization and helps reduce compliance risk. Evaluating necessity and proportionality is key when determining data needs.
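To illustrate one of the techniques above, pseudonymization can be as simple as replacing direct identifiers with salted hashes before records enter a training pipeline. The sketch below is a minimal illustration; note that pseudonymized data generally remains personal data under the GDPR, since anyone holding the salt could re-identify individuals.

```python
import hashlib
import secrets

# The salt must be stored separately from the pseudonymized data set,
# with access restricted to essential personnel.
SALT = secrets.token_bytes(16)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "purchases": 12}
training_row = {"user_id": pseudonymize(record["email"]),
                "purchases": record["purchases"]}
print(training_row)
```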

Ensuring lawful basis

Before implementing any AI system, it is crucial to determine the lawful basis for processing the personal data used to train, test, and run AI models. There are several potential lawful bases that may apply:

Consent

  • Consent from individuals must be freely given, specific, informed, and unambiguous. Obtaining valid consent requires providing transparency about how data will be used and allowing individuals to opt in.
  • Consent is often hard to obtain and maintain over time as data uses evolve. It may not be the best lawful basis for broad AI use cases.

Legitimate Interest

  • The legitimate interest of the data controller or third party may serve as a lawful basis if processing is necessary and balanced with individuals' interests and rights.
  • A legitimate interest assessment should evaluate the purpose, necessity, and balance of interests for any AI use case.

Contract

  • Processing personal data to fulfill or prepare for a contract with an individual provides a lawful basis.
  • AI uses that enhance contracted services may rely on this basis if specified in the terms and conditions.

Compliance with legal obligation

  • The processing required to comply with the law provides a lawful basis in some cases.
  • AI systems must adhere to applicable laws and regulatory requirements.

When implementing AI capabilities, a lawful basis for processing personal data should be clearly defined and documented for each intended purpose. Relying on consent alone is often insufficient due to its limitations. Conducting legitimate interest assessments and identifying contractual or legal bases helps ensure lawful AI practices are aligned with privacy principles.
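One lightweight way to operationalize this documentation is a register mapping each processing purpose to its lawful basis, consulted before any new AI processing begins. The sketch below is hypothetical; the purposes and bases shown are illustrative examples, not recommendations.

```python
# Hypothetical lawful-basis register: one entry per processing purpose
LAWFUL_BASIS_REGISTER = {
    "train recommendation model": "legitimate interest",
    "fraud detection scoring": "legal obligation",
    "personalized marketing": "consent",
}

def check_basis(purpose: str) -> str:
    """Fail fast if a processing purpose has no documented lawful basis."""
    basis = LAWFUL_BASIS_REGISTER.get(purpose)
    if basis is None:
        raise ValueError(f"No documented lawful basis for: {purpose!r}")
    return basis

print(check_basis("fraud detection scoring"))  # -> legal obligation
```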

Implementing data protection by design

Data protection by design and by default are key principles of privacy regulations like the GDPR. When implementing AI systems, organizations must apply these concepts to minimize data use and build privacy into the design of AI models and applications.

Some best practices include:

  • conducting DPIAs early in the development process to identify and mitigate risks. Assessments should cover the entire data lifecycle;
  • using privacy-enhancing technologies like differential privacy, federated learning, and homomorphic encryption. These techniques allow insights to be gained from data while minimizing exposure of personal information;
  • implementing granular access controls and role-based access. Limit data access to only what is needed for specific AI tasks;
  • anonymizing or pseudonymizing data whenever possible. Remove personal identifiers and use randomized replacements;
  • employing end-to-end encryption for data in transit and at rest. This protects information from unauthorized access or use; 
  • adopting data minimization techniques like synthetic data generation. Only real data that is necessary should be used by AI systems;
  • allowing individuals to access the data used to make automated decisions about them, and providing opt-out choices; and
  • building review processes and checkpoint evaluations throughout the AI model lifecycle. Continuously assess privacy risks.

Data protection by design requires organizations to place privacy at the foundation of all AI efforts. With the proper technical and process controls, personal data can be protected while enabling responsible AI innovation.
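Of the privacy-enhancing technologies mentioned above, differential privacy is the simplest to illustrate. The sketch below adds Laplace noise, calibrated to the sensitivity of a counting query, so that any one individual's presence in the data has only a bounded effect on the output. It is a teaching example, not a production-grade mechanism.

```python
import numpy as np

def dp_count(flags: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices; smaller epsilon means stronger privacy.
    """
    return sum(flags) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# How many users in a (hypothetical) training set opted in to marketing?
opted_in = [True, False, True, True, False, True, False, True]
print(dp_count(opted_in, epsilon=0.5))  # true count is 5, plus noise
```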

Establishing AI governance

Developing effective AI governance is a critical aspect of using AI responsibly and ethically. An AI governance framework outlines the policies, procedures, organizational structures, and controls needed to oversee an AI system throughout its lifecycle. Some important elements of AI governance include:

Policies

  • Develop an overarching AI policy that sets guidelines for how AI should be created, deployed, and monitored in alignment with ethics, values, and legal compliance.
  • Create detailed policies for aspects like data management, algorithmic transparency, human oversight, risk assessments, and model monitoring.
  • Align policies with existing organizational policies on privacy, security, ethics, etc.
  • Ensure policies are flexible enough to adapt to new use cases and guard against misuse.

Procedures

  • Establish step-by-step procedures for the AI model development lifecycle, such as collecting training data, evaluating bias, documenting processes, and monitoring outputs.
  • Outline protocols for regular reviews of policies, models, and performance.
  • Standardize processes for assessing and mitigating risks from AI systems.

Training

  • Provide training to teams involved in AI development and deployment on responsible AI practices, ethical concerns, and policies.
  • Educate end users and decision-makers on the appropriate usage and limitations of AI systems.
  • Promote AI literacy and critical thinking at all levels of the organization.

Audits

  • Conduct regular audits to ensure AI systems comply with policies and procedures.
  • Review system documentation, risk assessments, and impact evaluations.
  • Assess models and data for bias, accuracy, transparency issues, and mission drift over time.
  • Verify human oversight mechanisms function properly.
  • Report audit findings to AI ethics boards and senior management.

Implementing robust AI governance ensures alignment with organizational values and legal obligations. It also builds trust in AI systems through greater accountability, transparency, and responsible practices.
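Parts of these audits can be automated. The sketch below is a hypothetical check that flags deployed models missing required governance artifacts; the registry structure and artifact names are assumptions for illustration.

```python
# Hypothetical registry of deployed models and their governance artifacts
MODEL_REGISTRY = {
    "churn-predictor": {"dpia": True, "bias_review": True, "owner": "data-science"},
    "resume-screener": {"dpia": True, "bias_review": False, "owner": "hr-tech"},
}

REQUIRED_ARTIFACTS = ("dpia", "bias_review")

def audit_registry(registry: dict) -> list[str]:
    """Return audit findings for models missing required artifacts."""
    findings = []
    for model, meta in registry.items():
        for artifact in REQUIRED_ARTIFACTS:
            if not meta.get(artifact):
                findings.append(f"{model}: missing {artifact}")
    return findings

for finding in audit_registry(MODEL_REGISTRY):
    print(finding)  # -> resume-screener: missing bias_review
```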

Complying with AI regulations

As AI systems continue to expand in capability and application, governments around the world are taking notice and starting to implement regulations around the use of AI. It is important for organizations to understand these regulations and how they may impact their AI programs.

One of the most comprehensive AI regulatory frameworks is the EU's AI Act. The EU AI Act categorizes AI systems based on risk level and restricts or prohibits certain uses for high-risk AI applications. Systems that could pose significant risks to health, safety, and rights are subject to strict requirements around data, documentation, transparency, accuracy, and human oversight. Organizations will need to implement rigorous testing and risk management practices to comply with the EU AI Act.

In the US, the Federal Trade Commission (FTC) has indicated that it plans to ramp up oversight of AI through its authority to police unfair and deceptive practices. The FTC will likely scrutinize harms resulting from inaccurate or biased algorithms. New AI regulations are also emerging at the state level, such as Illinois's Artificial Intelligence Video Interview Act, which imposes obligations around consent and data retention when AI is used to screen and evaluate job candidates through automated video interviews.

As with privacy, organizations should view compliance as an ongoing exercise. The regulatory landscape for AI is rapidly evolving across jurisdictions. By taking a principles-based approach focused on accountability, transparency, fairness, and human rights, companies can build ethics into their AI systems and more smoothly adapt to new laws as they emerge. Leveraging privacy frameworks as a foundation enables continuous legal compliance as AI progresses.

Continuous improvement

To ensure your AI program continues to meet privacy principles and regulatory requirements, it is essential to monitor outcomes and continuously improve policies and practices. This involves several key steps:

  • monitoring AI systems: Carefully monitor your AI systems in action to identify any issues with fairness, bias, or unlawful discrimination. Keep detailed records of model training data and algorithms used. Continuously assess whether the AI is operating as intended;
  • conducting regular audits: Schedule thorough audits of your AI systems, data practices, policies, and documentation. Internal and external audits help assess gaps and risks. Examine input data sets, intended system uses, actual performance metrics, and more;
  • updating programs: Use insights from monitoring activities and audits to update your AI governance program. Revise policies and procedures to close gaps. Retrain models using improved data sets if biases are identified. Retire models that are underperforming or high-risk;
  • involving stakeholders: Inform and involve internal and external stakeholders throughout the improvement process. Seek regular feedback from affected groups to identify issues. Fostering collaboration and transparency leads to more robust, ethical, and compliant AI systems;
  • maintaining documentation: Keep detailed documentation about changes made to AI systems, data practices, policies, and procedures. Well-maintained documentation demonstrates accountability and aids future improvement initiatives; and
  • reporting to leadership: Provide regular reports to company leadership on the state of the AI program, highlighting successes, risks, and areas needing improvement. Leadership support is crucial for allocating resources to enable continuous enhancement.
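As a simple illustration of the monitoring step above, the sketch below compares a model's live approval rate against a validated baseline and raises a warning when drift exceeds a threshold. The baseline and threshold are assumed values that would come from your own validation work.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitoring")

BASELINE_APPROVAL_RATE = 0.62  # rate observed during model validation (assumed)
DRIFT_THRESHOLD = 0.10         # alert if the live rate drifts this far (assumed)

def check_drift(recent_decisions: list[int]) -> None:
    """Compare the live approval rate against the validated baseline."""
    live_rate = sum(recent_decisions) / len(recent_decisions)
    drift = abs(live_rate - BASELINE_APPROVAL_RATE)
    log.info("live approval rate %.2f (baseline %.2f)",
             live_rate, BASELINE_APPROVAL_RATE)
    if drift > DRIFT_THRESHOLD:
        log.warning("Approval rate drift %.2f exceeds threshold; "
                    "trigger review of the model and its input data.", drift)

check_drift([1, 0, 1, 1, 1, 1, 0, 1, 1, 1])  # rate 0.80 -> drift 0.18, warns
```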

By actively monitoring, regularly auditing, and continuously updating the AI privacy program, companies can identify and mitigate emerging risks. This helps ensure that AI systems operate responsibly and align with core privacy principles on an ongoing basis. The improvement process is never complete; privacy and compliance must remain top priorities throughout the AI system lifecycle.

Iain Borner, Chief Executive Officer
[email protected]
The Data Privacy Group Ltd., UK