
Canada: Understanding bias in AI for marketing – a comprehensive guide to avoiding negative consequences with AI

The Interactive Advertising Bureau of Canada ('IAB Canada') released, on 24 November 2021, a guide on Understanding Bias in AI for Marketing: A Comprehensive Guide to Avoiding Negative Consequences with Artificial Intelligence1. In particular, IAB Canada stated that the guide provides an excellent starting point for organisations to develop frameworks for better artificial intelligence ('AI') solutions. OneTrust DataGuidance discusses the guide in this article.


Scope of the guide

Marketing and advertising mechanisms are increasingly enhanced by AI in pursuit of efficiency and effectiveness. Organisations seeking to deploy AI should therefore be able to explain how the AI system operates, so that it can be understood and trusted.

The guide notes that focusing on bias in AI systems is a natural starting point for organisations developing better frameworks for AI solutions. Rather than addressing a single element or phase of AI development, the guide is intended to take organisations through all four phases of AI development, namely:

  • awareness and discovery;
  • exploration, solutions, and design;
  • development, tuning, and testing; and
  • activation, optimisation, and remediation.

To facilitate this understanding, the guide defines key terminology and explores the roles of stakeholders, while noting that bias in AI systems is generally introduced unintentionally by humans, and that it is precisely this interplay between humans and machines that makes bias detectable. Risk mitigation therefore helps organisations do the right thing, both for customers and for society.

Considerations for businesses

The guide highlights the following as key considerations in understanding bias in AI systems:

  • understanding that bias is a human cognitive condition that is passed on to AI systems, but this can be eliminated by auditing approaches and assumptions;
  • understanding that the AI system itself is not inherently biased and, as such, it is possible to create a non-biased AI system;
  • understanding that biases have a ripple effect and can affect an organisation's products and systems; and
  • understanding that bias may originate from different sources and during different stages of development of an AI system and, as such, organisations should train teams on how biases may occur and how to prevent them.

Additionally, other forms of bias may surface even after common forms, such as gender, sex, and ethnicity bias, have been eliminated. Organisations should therefore take steps to become aware of forms of bias that may originate during the development stage and may not have been previously considered, evaluate the relevance of these biases to the business, determine the risks, and develop a plan to address them.

However, while the guide addresses several business considerations stemming from conscious or unconscious assumptions, intentions, and proposed applications of AI, it is equally important to consider certain additional factors, as described below.

Data volume

To train an AI system to analyse and minimise bias, a sufficient volume of data is required, and that volume is just as important as the data elements themselves. One challenge of using too little data is 'underfitting', where a machine learning system fails to capture the underlying trend in the dataset because the volume of data presented is too low. Conversely, 'overfitting' occurs when the algorithm fits the training dataset so closely that its results do not transfer to other datasets.
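The underfitting/overfitting distinction described above can be sketched with a small, hypothetical example; the dataset, the polynomial degrees, and the error metric here are illustrative assumptions and do not come from the guide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: a small noisy sample drawn from a linear trend.
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(scale=0.1, size=x_train.size)

# A held-out sample from the same distribution, used to check generalisation.
x_test = np.linspace(0.025, 0.975, 19)
y_test = 2.0 * x_test + rng.normal(scale=0.1, size=x_test.size)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train RMSE, test RMSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_rmse, test_rmse

# A flexible (degree-12) model always fits the training points at least as
# closely as a simple (degree-1) model, but that closeness need not carry
# over to unseen data -- which is the essence of overfitting.
simple_train, simple_test = fit_and_score(1)
flexible_train, flexible_test = fit_and_score(12)
```

The point of the sketch is the gap between training and held-out error: the flexible model's advantage on the training set says nothing, by itself, about how it will behave on data it has not seen.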

Thus, the guide highlights the importance of organisations reducing the dimensionality of the datasets used, thereby reducing the number of parameters the model must learn and supporting the successful development of a solution.
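For illustration only, the dimensionality-reduction step the guide recommends can be sketched with principal component analysis via the singular value decomposition; the dataset shape, the number of components, and the variable names below are assumptions, not taken from the guide:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical marketing dataset: 200 observations of 50 correlated features
# (e.g. per-channel engagement metrics) generated from three latent drivers.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + rng.normal(scale=0.05, size=(200, 50))

# Principal component analysis via the SVD: centre the data, then project
# onto the top-k right singular vectors (the principal directions).
X_centred = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(X_centred, full_matrices=False)

k = 3
X_reduced = X_centred @ Vt[:k].T   # 50 features reduced to k = 3

# Fraction of the total variance retained by the k components.
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

Because the synthetic data has only three underlying drivers, three components retain almost all of the variance; on real marketing data the appropriate number of components would have to be assessed empirically.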

Data quality

Data quality is determined by the data's intended purpose and whether the data reflects the real-world concepts it refers to. In this respect, the guide notes that '[d]iversity, scale, and quality of input data are fundamental characteristics that determine the predictive effectiveness of AI models.'

Thus, the guide encourages organisations to monitor the accuracy of the data they hold, evaluate any datasets that are not useful to the project, and fill any gaps that may exist in the datasets, while regularly obtaining and implementing appropriate datasets in order to minimise bias.

Computing power

The guide notes that, in many cases, an AI system requires the computational power of a supercomputer to perform well, and the resources required to generate such power may be expensive or subject to supply difficulties. As such, the guide advises that organisations weigh the purposes of the proposed AI system and its benefits against the needs of the business before investing in such hardware.

Data privacy and security risks

Like any technological advancement, AI systems pose risks to data privacy, as the datasets used may include sensitive personal data. Organisations should therefore be aware of the type of data they collect, the regulatory consequences of processing such data, and the security measures needed to protect the datasets.

In addition, various threats could affect the AI system or its datasets, and the guide addresses certain aspects in this regard. In particular, it highlights the importance of 'threat modelling': using established threat modelling frameworks to identify risks to and within the system, and to determine the severity of those risks.

Doing so allows for the inclusion of security controls within the AI system or organisation. The guide notes some key minimum security controls to implement, such as strong identity and access management controls, and adopting privileged access management controls and policies.

Legal and regulatory risks

Organisations should be mindful of the type of data that is needed, be aware of the data they hold, minimise the data used, and ensure that no groups of people are excluded from the model. Failure to do so could result in regulatory investigations or litigation, which are not new phenomena, particularly in the advertising industry.

As such, the guide details that businesses should evaluate the risks associated with each governance model and be aware of how applicable laws and regulations apply both locally and internationally. Moreover, the guide notes that the core approach taken by businesses should consider the dangers and biases of AI's underlying technology, which includes machine learning algorithms, data sources, algorithm testing, the decision model, and outcomes, while also considering whether users can understand, and developers can explain, the technology itself.

Public trust and reputational risk

Public knowledge of the existence of bias in an AI system may lead to public distrust and, ultimately, damage to the organisation's reputation. The guide outlines that reputational risks in this respect may stem from factors such as:

  • discriminatory or unfair algorithms;
  • unreliable or unintended outcomes;
  • data misuse or mishandling; or
  • increased exposure to cyberattacks.

To mitigate or minimise these risks, the guide encourages organisations to assess a technology's benefits and disadvantages, as well as how it relates to people, culture, and the business strategy.


Alongside the considerations discussed above, the guide also includes IAB Canada's recommended concepts to consider, a checklist, and a list of questions for each phase of an AI system's lifecycle, all of which aim to assist organisations in identifying and mitigating AI bias.

Moreover, the guide details that organisations should:

  • establish accountability mechanisms for the AI project;
  • establish a legal basis for processing any personal data used in the development of the AI system;
  • align the organisation's problem statement to ascertain the purpose and use for the AI system;
  • conduct business and technical assessments; and
  • establish a testing documentation process so as to know how the AI system will perform and interact with other datasets 'in the wild' before it is fully deployed.

Wangari Thuo, Privacy Analyst
[email protected]

1. Available at: