Colorado: Bill for consumer AI protection passed by legislature

On May 13, 2024, Senate Bill 24-205, a bill for an act concerning consumer protections in interactions with artificial intelligence systems, was signed by the Speaker of the House and the President of the Senate. The bill must now go to the Governor of Colorado for signature.

Scope of the bill and requirements

Beginning February 1, 2026, the act will require a developer or deployer of a 'high-risk artificial intelligence system' to use reasonable care to avoid algorithmic discrimination within the high-risk system. For developers of artificial intelligence (AI) models, there is a requirement to maintain specific documentation for the general-purpose model, including a policy to comply with federal and state copyright laws and a detailed summary concerning the content used to train the general-purpose model. Developers must also create, implement, maintain, and make available documentation to deployers who intend to integrate the general-purpose model into the deployer's AI systems. The documentation must disclose a general statement regarding the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system in addition to: 

  • high-level summaries of the type of data used to train the high-risk AI system;
  • known or reasonably foreseeable limitations of the high-risk AI system;
  • the purpose of the high-risk AI system;
  • intended benefits and uses of the high-risk AI system; and
  • any other relevant information.

Developers of high-risk AI systems must also provide documentation describing how the system was evaluated for performance and for mitigation of algorithmic discrimination before it was offered, sold, leased, given, or otherwise made available to the deployer. Additionally, the act requires that deployers also be made aware of:

  • data governance measures used to cover the training datasets;
  • intended outputs;
  • measures to mitigate known or reasonably foreseeable risks of algorithmic discrimination;
  • how the system should be used, how it should not be used, and how it should be monitored when making consequential decisions; and
  • any information reasonably necessary to understand the outputs and monitor performance.

The bill also defines key terms, including 'algorithmic discrimination,' 'AI system,' 'consequential decision,' and 'high-risk AI system.'

Enforcement

The Colorado Attorney General (AG) has exclusive authority to enforce the act and may implement additional rules as necessary, including:

  • documentation requirements for developers;
  • content requirements for notices and disclosures;
  • content requirements for risk management policies; 
  • content requirements for impact assessments; 
  • requirements to prove that algorithmic discrimination was avoided; and
  • requirements for an affirmative defense. 

You can read the act here and view its legislative history here.