
International: Navigating generative AI and compliance

In this Insight article, Conor Hogan and Matthew Goodbun, from the British Standards Institution (BSI), delve into the transformative impact of generative artificial intelligence (AI) on diverse industries, exploring its acceleration of business outcomes and the concurrent rise of privacy compliance challenges.


Generative artificial intelligence (AI) systems are accelerating business outcomes and impacting a wide range of industries, from marketing to healthcare, product design to finance. Reports indicate that 45% of organizations are scaling generative AI across multiple business functions, with customer-facing functions seeing the highest investment.

The use of these innovative models is continuing to grow and reshape business operations, and with this comes a myriad of new privacy compliance challenges. According to a research series on generative AI snapshots, 60% of employees surveyed are uncertain about how to use the technology while ensuring data is safeguarded. This reiterates a need for industry leaders to become front-line defenders of data security and embrace innovation responsibly.

Emerging standards, legislation, and frameworks for AI are helping to shape the future of industries, bringing about a myriad of benefits, challenges, and opportunities.

As the AI landscape continues to evolve, these guidelines play a pivotal role in fostering responsible development, deployment, and governance of AI technologies including generative AI.

Impacts of generative AI

The increase in AI-generated content has seen a surge in data volumes and placed ethical considerations under the spotlight. Issues such as misinformation, bias amplification, and privacy violations are becoming inherent to the technology, and 58% of employees feel that ethical use guidelines for AI would be beneficial.

Generative AI also introduces challenges around consent. The nuanced aspects of data use, secondary or further data use, and complex data processing activities demand a robust approach to data governance to ensure compliance.

With generative AI being used to create realistic fake media like deepfakes, there are rising concerns about misinformation and reputational damage. Deepfakes can be used to spread false news or manipulate evidence in legal proceedings. This demonstrates the need for organizations to authenticate AI-generated content before use and implement safeguards against misuse.

Another key consideration is algorithmic bias. As generative AI systems are trained on existing datasets, there is potential for historical biases to be replicated or amplified. For instance, a hiring chatbot trained on biased historical hiring data may exhibit discriminatory behavior. Mitigating unfair outcomes requires diversity in training data, testing for bias, and human oversight of AI systems.
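Bias testing of the kind described above can be illustrated with a simple disparate-impact check, loosely based on the "four-fifths rule" used in US employment contexts. The groups, decisions, and 0.8 threshold below are illustrative assumptions for a sketch, not a prescribed methodology:

```python
# Illustrative disparate-impact check for a hiring model's decisions.
# All group names and outcome data here are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 hiring decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    Under the four-fifths rule of thumb, values below 0.8 are often
    treated as a red flag for adverse impact warranting review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 selected
}

ratio = disparate_impact_ratio(decisions)
if ratio < 0.8:
    print(f"Potential adverse impact: ratio = {ratio:.2f}")
# prints: Potential adverse impact: ratio = 0.40
```

A check like this is only a first-pass screen; a flagged ratio should trigger the human review and data remediation steps the article describes, not an automated verdict.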

There are also challenges in terms of intellectual property rights and attribution. Generative AI can remix copyrighted works or be used to impersonate a person's identity without consent. Clarifying ownership, rights of use, and proper accreditation for AI-generated content is an emerging issue.

Role of regulations

Although AI introduces new challenges, the fundamental principles of data protection, including accountability, transparency, data minimization, security, and ethical considerations, still apply.

Privacy laws including the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act (CCPA) in California are essential to address ethical concerns, requiring AI systems to align with existing laws on how personal data is handled. International harmonization in data protection laws and emerging AI regulations is becoming a shared responsibility.

For example, the EU's proposed Artificial Intelligence Act (AI Act) aims to minimize AI risks through requirements around transparency, human oversight, robustness, and accuracy. It categorizes AI uses as unacceptable, high-risk, and low-risk, banning certain applications like social scoring and imposing stricter rules on high-risk uses such as hiring tools.

In the US, states like Illinois, New York, and California have enacted biometrics privacy laws regulating the use of facial recognition and other biometric data. The Federal Trade Commission (FTC) has also issued guidance on reducing bias and enhancing transparency in AI.

Collaborative efforts can simplify the compliance landscape for organizations operating globally, ensuring consistent protection for international clients.

Role of standards and frameworks

One of the primary benefits of establishing standards and frameworks is that they promote ethical AI development, ensuring that AI systems adhere to principles that prioritize fairness, transparency, and accountability. This helps build trust among users and stakeholders, mitigating concerns about biased algorithms and unjust decision-making.

Industry-wide standards facilitate interoperability among different AI systems and technologies. This compatibility encourages collaboration and seamless integration, fostering innovation and efficiency. It enables organizations to adopt diverse AI solutions without the fear of incompatibility issues.

Guidelines and frameworks provide a structured approach to managing risks associated with AI. By adhering to established standards, businesses can mitigate legal, operational, and reputational risks, ensuring compliance with regulatory requirements and avoiding potential pitfalls.

The implementation of robust standards and frameworks enhances the security of AI systems. As AI becomes more prevalent, ensuring the confidentiality, integrity, and availability of data processed by AI models is crucial. Adhering to established security protocols safeguards against potential breaches and cyber threats.

However, there are also considerable challenges when considering the development, adoption, and maintenance of standards and frameworks due to the fast-paced evolution of AI technology and the necessity to keep up with the accelerated technological progress and innovative deployments. Emerging standards may struggle to address the latest advancements, leading to potential gaps in oversight and appropriate controls.

There is also the need to consider the complexities of achieving a global consensus on AI standards. This is particularly challenging due to varying cultural, legal, and ethical perspectives. Harmonizing diverse opinions and interests to create universally accepted guidelines remains a significant hurdle.

Achieving compliance with new standards often requires significant resources, both in terms of time and investment. Smaller businesses may face challenges in adapting to these standards, potentially creating a barrier to entry and limiting market competition.

Risk mitigation

Navigating the uncharted waters of generative AI requires organizations to take proactive and strategic approaches to manage risk:

  • Begin by understanding the scope of generative AI applications within your operations. Conduct thorough Data Protection Impact Assessments (DPIAs) and AI System Impact Assessments (SIAs) to identify and mitigate potential risks before deploying any innovative technical solutions.
  • Establish a robust framework for obtaining informed consent, addressing the nuances introduced by generative AI. Consent mechanisms should cover how data will be used to train AI systems, whether outputs will be shared publicly, and risks of misuse.
  • Clearly communicate to clients and stakeholders how their data will be used. Transparency builds trust and allows people to make informed decisions about participating.
  • Provide mechanisms for users to easily opt in and out of specific data uses, especially for AI-enabled applications. Consent should be granular, and people should be given ongoing choices.
  • Prioritize data minimization and purpose limitation, ensuring that AI-generated content aligns with these principles. Only use the minimum data needed and restrict usage to stated purposes.
  • Invest in employee training to enhance awareness of the risks associated with generative AI and foster a culture of accountability. Ensure staff understand their obligations.
  • Implement security measures to protect against data breaches, unauthorized access, and cyber threats, ensuring that generative AI systems adhere to the highest standards of data security.
  • Utilize techniques like differential privacy, federated learning, and on-device processing to reduce reliance on raw personal data and mitigate privacy risks.
  • Proactively assess datasets used to train AI systems to identify any biases or quality issues that could lead to unfair or unsafe outcomes. Remediate concerns through data enhancement or model tweaking.
  • Subject AI systems to rigorous real-world testing to validate performance across diverse use cases and surface any unintended harms. Continuously monitor outputs and make refinements as needed.
  • Maintain meaningful human oversight and decision-making to ensure AI augments rather than replaces human judgment and discretion.
  • Provide transparency into how AI systems operate so stakeholders can understand and contest automated decisions that impact them. Clearly explain limitations and inaccuracies.
  • Develop rigorous protocols for determining when and how to use AI-generated content, including verification processes to prevent the use of misinformation or copyright violations.
  • Implement organizational checks and balances that empower ethics review committees and responsible innovation practices. Incorporate diverse perspectives into AI development.
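One of the privacy-enhancing techniques mentioned above, differential privacy, can be sketched minimally with the Laplace mechanism: calibrated noise is added to an aggregate statistic so that individual records are hard to infer from what is released. The function names, parameter values, and data below are illustrative assumptions, not a production implementation:

```python
# Minimal sketch of the Laplace mechanism from differential privacy.
# Names and data are hypothetical; this is not production-grade code.

import math
import random

def laplace_noise(scale):
    """Draw one sample from a Laplace(0, scale) distribution via inverse CDF."""
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_count(records, predicate, epsilon, sensitivity=1.0):
    """Release a noisy count of records matching a predicate.

    Smaller epsilon means more noise and stronger privacy; sensitivity is
    1.0 because adding or removing one record changes a count by at most 1.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(sensitivity / epsilon)

# Hypothetical example: report how many customers are over 40
# without releasing the exact figure derived from raw ages.
ages = [23, 35, 41, 29, 52, 38, 47, 31]
noisy_count = dp_count(ages, lambda a: a > 40, epsilon=1.0)
```

The epsilon parameter makes the privacy/accuracy trade-off explicit: lowering it widens the noise, which is exactly the kind of documented, tunable control the assessment and governance steps above are meant to capture.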

Business opportunities

Adhering to established standards can open new markets and opportunities for businesses. Companies that demonstrate a commitment to ethical AI practices and compliance with regulations are likely to gain the trust of consumers, investors, and partners, fostering long-term growth.

Integrating and implementing emerging standards provides a competitive advantage. Businesses that proactively adopt responsible AI practices differentiate themselves in the market, attracting customers who prioritize ethical considerations in their purchasing decisions.

The development of global standards encourages international collaboration. By fostering a shared understanding of AI principles, countries and industries can work together to address common challenges, creating a more cohesive and interconnected AI ecosystem.

A commitment to ethical AI and compliance with standards can enhance an organization's reputation. This, in turn, attracts top talent who value ethical practices, contributing to a positive workplace culture and bolstering employee retention.

It is important to stay informed and adapt continuously. Keeping abreast of the latest developments in both generative AI technology and data protection regulations is essential. There are considerable opportunities for businesses that harness the abilities of AI in compliance with standards, frameworks, and regulations.

Embracing innovation while upholding ethics stands as a paramount objective of successfully using generative AI. Striking the right balance between innovation, ethics, compliance, and transparency can create a future where AI is deployed in a way that respects individuals' rights and empowers responsible data practices.

The emergence of standards, legislation, and frameworks for AI presents a complex landscape with a range of benefits, challenges, and opportunities for industries. Striking the right balance between regulation and innovation is essential to harness the full potential of AI while ensuring responsible and ethical development.

As the field continues to evolve, ongoing collaboration among industry stakeholders, regulatory bodies, and the global community will be crucial in shaping a future where AI serves humanity ethically and effectively.

Conor Hogan Global Practice Director
[email protected]
Matthew Goodbun Senior Privacy Consultant
[email protected]