Canada: Government requests public comments on potential elements for code of practice on generative AI

The Government of Canada announced, on August 16, 2023, plans for the creation of a code of practice on generative artificial intelligence (AI). In line with this, the Government released potential elements for its code of practice on generative AI and is requesting public comments on them.

Background

The Government explained that the code of practice would be implemented voluntarily by Canadian firms ahead of the coming into force of the Artificial Intelligence and Data Act (AIDA) which forms part of Bill C-27 for the Digital Charter Implementation Act. The Government detailed that the code of practice is intended to ensure that developers, deployers, and operators of generative AI systems can avoid harmful impacts and build trust in their systems, among other things. The Government further noted that the code of practice will also serve to reinforce Canada's contributions to active international deliberations on proposals to address the risks of generative AI.

Potential elements

The potential elements outlined by the Government of Canada include:

  • safety;
  • fairness and equity;
  • transparency;
  • human oversight and monitoring;
  • validity and robustness; and
  • accountability.

Regarding transparency, the Government highlighted that developers and deployers of generative AI systems should provide a reliable and freely available method to detect content generated by the AI system, such as watermarking, and provide a meaningful explanation of the process used to develop the system, whereas operators of generative AI systems should ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.

On the point of accountability, the Government proposed that developers, deployers, and operators of generative AI systems should ensure that multiple lines of defense are in place to secure the safety of their systems, such as by undertaking both internal and external (independent) audits of their systems before and after they are put into operation. In addition, such developers, deployers, and operators should develop policies, procedures, and training to ensure that roles and responsibilities are clearly defined and that staff are familiar with their duties and the organization's risk management practices.

You can read the press release here.