
Canada: Government launches code of practice on development and management of advanced generative AI systems

On September 27, 2023, the Government of Canada launched its Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, following its public consultation. The Government confirmed that the code of practice is voluntary and does not in any way change existing legal obligations that firms may have, for example, under the Personal Information Protection and Electronic Documents Act (PIPEDA).


The code of practice aims to address and mitigate artificial intelligence (AI) risks and should be applied in advance of binding regulation pursuant to the Artificial Intelligence and Data Act (AIDA) by all firms developing or managing the operations of a generative AI system with general-purpose capabilities. Although the code of practice is specific to advanced generative AI systems, the Government confirmed that many of the measures are broadly applicable to a range of high-impact AI systems and can be readily adapted by firms working across Canada's AI ecosystem.


The code of practice recommends that developers and managers of advanced generative AI systems commit to working to achieve the following outcomes:

  • accountability: firms understand their role with regard to the systems they develop or manage, put in place appropriate risk management systems, and share information with other firms as needed to avoid gaps;
  • safety: systems are subject to risk assessments, and mitigations needed to ensure safe operation are put in place prior to deployment;
  • fairness and equity: potential impacts with regard to fairness and equity are assessed and addressed at different phases during the development and deployment of the systems;
  • transparency: sufficient information is published to allow consumers to make informed decisions and for experts to evaluate whether risks have been adequately addressed;
  • human oversight and monitoring: system use is monitored after deployment, and updates are implemented as needed to address any risks that materialize; and
  • validity and robustness: systems operate as intended, are secure against cyber attacks, and their behavior in response to the range of tasks or situations to which they are likely to be exposed is understood.

In addition, signatories are expected to commit to developing and deploying AI systems in a manner that will drive inclusive and sustainable growth in Canada, including by prioritizing human rights, accessibility, and environmental sustainability, among other things. Furthermore, the code of practice provides a table of measures to be undertaken in support of these outcomes.

You can read the code of practice here.