
Canada: Code of Conduct on the development and management of advanced generative AI systems

On September 27, 2023, Innovation, Science and Economic Development Canada (ISED) published a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the Code). In this Insight article, Christopher Ferguson and Anagha Nandakumaran, from Fasken, discuss the measures set out in the Code and what they mean for organizations.


Background 

The voluntary Code resulted from brief consultations with stakeholders, including Canada's Advisory Council on Artificial Intelligence, and representatives from academia, civil society, Canadian artificial intelligence (AI) research institutes, and industry. The consultations centered on the Canadian Guardrails for Generative AI – Code of Practice that ISED released in August 2023. The Code sets out voluntary measures for organizations developing and managing general-purpose generative AI systems to mitigate the risks posed by those systems. It defines the development and management of AI systems as follows: 

  • developing AI systems includes methodology selection, dataset collection and processing, model building, and testing; and 
  • managing AI systems includes putting a system into operation, controlling the parameters of its operation, controlling access, and monitoring its operation. 

Though it targets advanced generative AI systems, the Code notes that its measures are more broadly applicable to a range of high-impact AI systems. Importantly, the Code does not alter obligations under existing laws such as the federal private-sector privacy law (the Personal Information Protection and Electronic Documents Act). 

The measures in the Code are intended to be applied in anticipation of binding regulations surrounding AI systems outlined in the Canadian Federal Government's Bill C-27, the Digital Charter Implementation Act, 2022, which includes the Artificial Intelligence and Data Act (AIDA). If enacted, AIDA would regulate the design, development, and deployment of AI systems in Canada, with a focus on high-impact AI systems. The Federal Government also intends to amend AIDA prior to its enactment to regulate general-purpose AI systems such as generative AI. A companion document to AIDA published on March 13, 2023, outlines a two-year period for regulation development. AIDA would only come into force after that period. 

Besides serving as an interim risk mitigation measure, the Code is viewed by the Government as reinforcing Canada's contributions to ongoing international deliberations on proposals to address common risks posed by the large-scale deployment of generative AI, including at the G7 and 'among like-minded partners.' 

Measures in the Code 

The Code targets two specific actors in the AI ecosystem, developers and managers, who, in accepting the Code's voluntary commitments, must work to achieve the outcomes listed below. The Code also imposes additional measures on developers and managers of systems 'made widely available for use,' which the Code also refers to as 'public use' systems. These additional requirements focus on publishing information on the capabilities and limitations of such systems and on their training data, on identifying content generated by those systems, and on system security. The Code commits developers and managers to the following outcomes: 

  • Accountability: The Code requires firms to understand their role in the systems they develop or manage, to establish appropriate risk management systems, and to share information with other firms as needed. In doing so, all developers and managers must:  
    • implement a comprehensive risk management framework proportionate to the nature and risk profile of their activities, including establishing policies, procedures, and training to familiarize staff with their duties and the organization's risk management practices; and  
    • share information and best practices on risk management with firms playing complementary roles in the ecosystem. Developers of systems for public use must also 'employ multiple lines of defense,' including conducting third-party audits prior to release. 
  • Safety: The Code outlines safety measures such as risk assessments and appropriate mitigations to ensure safe operation of systems prior to deployment. All developers and managers must perform a comprehensive assessment of reasonably foreseeable potential adverse impacts, including risks associated with inappropriate or malicious use of the system. Developers are also required to: 
    • implement proportionate measures to mitigate risks of harm (e.g., creating safeguards against malicious use); and  
    • make guidance available to downstream developers and managers on appropriate system usage and measures taken to address risks. 
  • Fairness and equity: This outcome only applies to developers. As part of assessing and addressing potential impacts on fairness and equity during the development and deployment of systems, developers must:  
    • assess and curate datasets used for training to manage data quality and potential biases; and  
    • implement diverse testing methods and measures to assess and mitigate risk of biased output prior to release. 
  • Transparency: This outcome primarily targets developers of public use systems, with the aim of allowing consumers to make informed decisions and experts to evaluate whether risks have been adequately addressed. To achieve this, developers of systems intended for public use are required to: 
    • publish information on capabilities and limitations of the system;  
    • develop and implement a reliable and freely available method to detect content generated by the system; and  
    • publish a description of the types of training data used to develop the system, as well as measures taken to identify and mitigate risks. 
    In addition, managers of all systems (including those not intended for public use) must ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems. 
  • Human oversight and monitoring: The Code requires managers to monitor systems for harmful use after they are made available, including through third-party feedback channels, and to inform the developer and implement usage controls as needed to mitigate harm. Developers are required to maintain a database of reported incidents after deployment, and to provide updates as needed to ensure effective mitigation measures. 
  • Validity and robustness: The Code emphasizes that systems must operate as intended and be secure against cyberattacks, and that developers and managers must understand how systems behave in response to the range of tasks or situations to which they are likely to be exposed. Developers of all systems must: 
    • prior to deployment, use a wide variety of testing methods across a spectrum of tasks and contexts to measure performance and ensure robustness;  
    • employ adversarial testing (i.e., red-teaming) to identify vulnerabilities;  
    • perform benchmarking to measure the model's performance against recognized standards; and  
    • perform an assessment of cybersecurity risk and implement proportionate measures to mitigate risks, including with regard to data poisoning. The requirement to perform cybersecurity risk assessments and to implement proportionate mitigation measures also extends to managers of public use systems. 

Signatories of the Code also commit to supporting the ongoing development of a robust, responsible AI ecosystem in Canada, and to developing and deploying AI systems for inclusive and sustainable growth in Canada, including by prioritizing human rights, accessibility, and environmental sustainability, and by using AI to address the 'most pressing global challenges of our time.' Signatories further commit to standards development, information and best practice sharing, responsible AI research collaboration, and AI public awareness campaigns. 

Next steps 

The Federal Government plans to publish a summary of the feedback received during the consultations leading to the development of the Code. Bill C-27 passed second reading in the House of Commons and is currently being considered by the House Standing Committee on Industry and Technology (the Committee), where the Government has detailed its intent to table wide-ranging amendments to AIDA. The Committee's study of AIDA is still in its infancy, and the proposed legislation may see further study in a Senate committee. Even after Bill C-27 receives Royal Assent, the Government is contemplating a regulation-making period of at least two years before the new law comes into force, meaning that AIDA would come into force no sooner than 2025. The Code may therefore have a relatively long life as a voluntary interim measure. 

Christopher Ferguson, Partner
Anagha Nandakumaran, Associate
Fasken, Canada
