
Canada: A comprehensive overview of AI Regulation

In this Insight article, Sarah Nasrullah, from Norton Rose Fulbright LLP, delves into Canada's AI regulatory landscape, examining key aspects of the AI Act, enforcement mechanisms, penalties, and implications for organizations and individuals. It provides valuable insights into the evolving governance of AI technologies in the Canadian context.


On June 16, 2022, Canada's Minister of Innovation, Science and Economic Development (ISED), François-Philippe Champagne, presented Bill C-27, the Digital Charter Implementation Act, 2022 (Bill C-27), for first reading. Part III of Bill C-27 introduced the Artificial Intelligence and Data Act (the AI Act) to set the foundation for the responsible design, development, and deployment of artificial intelligence (AI) systems that impact the lives of Canadians. Bill C-27 has passed second reading in Parliament and is currently being studied by the Standing Committee on Industry and Technology, with hearings expected to resume this autumn.

In the meantime, ISED has also launched a consultation process to create a Code for the use of Generative AI. It intends the code to be sufficiently robust to ensure that developers, deployers, and operators of generative AI systems are able to avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada's forthcoming regulatory regime. Additionally, this code will reinforce Canada's contributions to active international deliberations on proposals to address the risks of generative AI.

Overview of the AI Act

The AI Act aims to ensure that AI systems deployed in Canada are safe and non-discriminatory, while holding businesses accountable for how they develop and use these technologies. Similar to the proposed EU AI Act, the AI Act sets different rules for different risk levels. It creates a framework for high-impact AI systems with the objective of prohibiting conduct in relation to AI systems that may result in serious harm to individuals or their interests. However, the term 'high-impact' has not been defined; it is to be clarified in subsequent regulations introduced over time.

The AI Act provides definitions for key terms, which offer insights into how ISED is planning on regulating AI:

  • Harm: physical or psychological harm to an individual; damage to an individual's property; or economic loss to an individual; and 
  • Biased output: content that is generated, or a decision, recommendation, or prediction that is made, by an AI system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in Section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds.

The definition of 'biased output' excludes content, decisions, recommendations, or predictions that are intended to prevent, eliminate, or reduce disadvantages likely to be suffered by any group of individuals where those disadvantages would be based on or related to the prohibited grounds.

Definition

The AI Act defines an AI system as a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning, or another technique in order to generate content or make decisions, recommendations, or predictions.

However, this definition is overly broad. To regulate AI effectively, it must be defined with anticipated benefits and risks in mind. AI technologies are still evolving, making it challenging to pin down a stable legal definition that is principles-based and technology-neutral. Both AI developers and regulators will need to be creative in envisioning ways a system might behave, anticipating potential violations of social standards and responsibilities while leaving fairly innocuous uses of AI free to be developed and deployed without regulatory obstacles. For instance, open-source AI models, AI used for research and development purposes, and AI that is not capable of reasoning or decision-making should be excluded from this definition.

Application

The AI Act applies to the private sector engaged in international and interprovincial trade and commerce. However, it currently lacks a clear allocation of responsibilities among the different actors in the AI value chain. As currently drafted, the AI Act's requirements are directed toward a person who is responsible for a high-impact system. The AI Act specifies a person as 'responsible' for a system if they design, develop, or make it available for use…or manage its operation. This broad definition captures multiple entities, creating confusion and uncertainty about which actor is ultimately responsible for complying with the AI Act's requirements.

The current wording of the AI Act suggests that developers and deployers will be simultaneously responsible for complying with the bill's requirements. However, such an approach is unworkable, particularly for AI developers. Many of the AI Act's requirements presuppose that the responsible party will have visibility into the operation and maintenance of a high-impact system. However, developers will generally lack visibility into how their customers (i.e., deployers) are using a system, and will thus be unable to comply with most of the AI Act's core requirements.

Obligations under the AI Act

The AI Act imposes some baseline obligations on organizations to ensure transparency in their use of AI. Organizations responsible for an AI system must conduct an assessment to determine whether their AI system qualifies as high-impact. This means that an organization must conduct an evaluation, perhaps a preliminary one, to determine whether any of its current and future systems meet the criteria for being classified as high-impact.

Once an AI system falls under this classification, organizations are also required to establish measures to identify, assess, and mitigate the risks of harm or biased output that could result from the use of the high-impact system. Additionally, organizations must establish measures to monitor compliance with their mitigation measures and to assess their effectiveness.

The AI Act also states that any organization engaged in a 'regulated activity' must keep records describing:

  • the measures they establish to identify, assess, and mitigate risks; and
  • the reasons supporting their assessment of whether a system is 'high-impact' or not. 

Organizations using a high-impact AI system must also publicly disclose an AI policy that describes:

  • how the system is used;
  • the types of content that it generates and the decisions that it makes;
  • the mitigation measures; and
  • any other information prescribed by regulations.

Organizations responsible for a high-impact system must notify the Minister if the use of the system results, or is likely to result, in material harm. Harm, in this context, is defined as above: physical or psychological harm to an individual, damage to an individual's property, or economic loss to an individual.
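To make these record-keeping, disclosure, and notification obligations concrete, the following is a minimal sketch, in Python, of the kind of internal compliance record an organization might maintain. The AI Act prescribes what must be recorded and disclosed, not how; every class, field, and function name here is a hypothetical illustration, not a term drawn from the bill.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record structure; all names are illustrative only.
@dataclass
class AISystemComplianceRecord:
    system_name: str
    is_high_impact: bool              # outcome of the required assessment
    assessment_reasons: List[str]     # reasons supporting that assessment
    mitigation_measures: List[str] = field(default_factory=list)
    monitoring_measures: List[str] = field(default_factory=list)

    def requires_public_policy(self) -> bool:
        # High-impact systems must be described in a public AI policy
        # (use, content generated, decisions made, mitigation measures).
        return self.is_high_impact

    def requires_minister_notification(self, material_harm_likely: bool) -> bool:
        # Notification is required where use of a high-impact system
        # results, or is likely to result, in material harm.
        return self.is_high_impact and material_harm_likely

record = AISystemComplianceRecord(
    system_name="resume-screening-model",
    is_high_impact=True,
    assessment_reasons=["automates decisions affecting employment"],
    mitigation_measures=["bias testing on prohibited grounds",
                         "human review of adverse decisions"],
    monitoring_measures=["quarterly audit of outcome disparities"],
)
print(record.requires_public_policy())               # True
print(record.requires_minister_notification(False))  # False
```

Keeping the assessment reasons alongside the mitigation and monitoring measures mirrors the two record-keeping duties listed above, so a single record can answer both a regulator's request and the public-disclosure requirement.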

While this requirement is not in the AI Act directly, there is a related right in the Consumer Privacy Protection Act, also introduced under Bill C-27. When requested, organizations must inform the requester about:

  • the nature of personal information used to make the prediction, recommendation, or decision;
  • the source of the information; and
  • the reasons that led to the prediction, recommendation, or decision.

What should organizations do?

AI governance

Organizations can begin by evaluating their current processes to see whether AI governance can be integrated into them. Processes built to capture the use of personal information can potentially be relied upon. However, the AI Act has a broader scope, encompassing the use of anonymized information as well.

Organizations need to consider whether AI governance and risk management should be vested in the technology and legal departments, or be cross-functional with input from relevant experts throughout the organization. If an organization is involved in AI development, risk may reside with its engineering teams. If it only uses AI-based vendors, risk can lie with the business team that procures the vendor solution. In the latter scenario, organizations will need to ensure that their vendor contracts include robust contractual protections and indemnities.

AI policy

Once a foundational governance structure has been established, organizations can incorporate use cases within their public-facing AI policy.

Typical use cases of AI that will require cross-functional oversight may include:

  • HR employing AI-based collaboration tools that detect employees' tone or facial expressions to assess performance, or AI-based recruitment tools; or
  • Customer Operations utilizing AI to train chatbots.

Penalties under the AI Act

Organizations will be liable to a fine ranging from 3% to 5% of their gross global revenues (an illustrative calculation follows the lists below) if they use, design, or develop AI systems without:

  • establishing measures for the use of anonymized data sets;
  • establishing measures to identify, assess, and mitigate the risks of harm or biased output;
  • establishing measures to monitor compliance;
  • keeping general records;
  • publishing relevant information on their website; or
  • notifying the Minister if the use of the system results or is likely to result in material harm.

The same penalties apply to organizations that engage in obstructive practices or provide false or misleading information to the Minister or an auditor.

Organizations or individuals may be liable to fines and imprisonment if they:

  • possess or use personal information obtained through the commission of an offense to design or develop an AI system;
  • recklessly use AI systems to cause serious harm; or
  • engage in fraudulent activities against the public and cause substantial economic losses for individuals.
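As a purely illustrative calculation, applying the 3% to 5% range described above to a hypothetical revenue figure shows how potential exposure scales directly with gross global revenue:

```python
# Illustrative only: the fine range described above (3% to 5% of gross
# global revenues) applied to a hypothetical organization's revenue.
def fine_range(gross_global_revenue: float) -> tuple:
    return (0.03 * gross_global_revenue, 0.05 * gross_global_revenue)

low, high = fine_range(2_000_000_000)  # hypothetical CAD 2 billion revenue
print(f"Potential fine: CAD {low:,.0f} to CAD {high:,.0f}")
# Potential fine: CAD 60,000,000 to CAD 100,000,000
```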

Enforcement of the AI Act

The initial focus would be on voluntary compliance and working with industry to meet the law's objectives. An administrative monetary penalty scheme could be introduced through regulation as the framework evolves, and criminal offenses could be investigated by law enforcement.

The Minister of Industry would be empowered to request information, order third-party conformity assessments, or require additional mitigation measures. The Minister can also share information with key regulators. The proposed legislation would create an Artificial Intelligence and Data Commissioner, within a ministry, who would play a role in enforcement. The Minister may delegate information-sharing powers and the authority to issue orders to the Commissioner. These powers may include requesting records, requiring organizations to conduct audits, taking action to address issues, and suspending the operation of certain high-impact systems where there is a 'serious risk of imminent harm.'

Sarah Nasrullah, Associate
[email protected]
Norton Rose Fulbright LLP, Toronto
