
Colorado: Act concerning consumer protections in interactions with AI systems

Colorado became the first state to adopt a comprehensive AI framework when Governor Polis signed Senate Bill 205. The law, unlike the EU Artificial Intelligence Act (AI Act), does not ban certain uses of artificial intelligence (AI). Instead, Colorado focused on accountability; the law adds guardrails designed to prevent discrimination from certain high-risk AI uses and imposes transparency obligations for companies that use or create those tools. But it is not all bad news for companies navigating this fluid field: the law is delayed until February 2026, it is enforced exclusively by the Attorney General (AG), and there are strong safe harbors (both rebuttable presumptions and an affirmative defense). And, if Governor Polis' wishes are heeded, the framework will undergo significant revisions before it takes effect.

The law primarily regulates activities concerning high-risk AI systems, but there is also a transparency obligation for companies using any AI system to interact with consumers. The law applies to a company that does business in Colorado and either creates/modifies a high-risk AI system (developer) or uses such a system (deployer). Most of the obligations apply even if the AI system is not used in Colorado. So, companies cannot avoid the law merely by refusing to sell high-risk AI systems to Colorado companies or refraining from using such systems in the state.

In this Insight article, Camila Tobón and Josh Hansen, from Shook, Hardy & Bacon, provide an overview of the law (including the momentum, already, to change it), compare it to existing AI laws, and conclude with some open questions about the law's impact.


Key terms

With the exception of 'developer' and 'deployer,' the key terms are:

  • 'Artificial intelligence system' is defined in a way similar to Executive Order 14110, the EU AI Act, and the OECD AI principles - i.e., referring to a machine-based system that infers from the inputs it receives how to generate outputs, including content, decisions, predictions, or recommendations.
  • 'High-risk artificial intelligence system' refers to an AI system that, when deployed, either makes or is a substantial factor in making a consequential decision.
  • 'Consequential decision' means a decision that has a material legal or similarly significant effect on the provision, denial, cost, or terms of housing, insurance, legal services, health care services, financial or lending services, essential government services, employment or employment opportunities, or education enrollment or education opportunities.
  • 'Algorithmic discrimination' refers to unlawful differential treatment or impact that disfavors an individual or group based on certain protected characteristics (e.g., age, disability, race, or veteran status). However, it does not include activities to increase diversity or redress historical discrimination.


Requirements

The requirements in the new law differ depending on where in the AI supply chain an organization sits. Developers are subject to one set of requirements, deployers to another, and some requirements apply to both.

Reasonable care

Both deployers and developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.

Technical information

Developers must provide, or make available, the following information to deployers using the high-risk AI system or other developers who are intentionally and substantially modifying that system:

  • a description of the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system;
  • high-level summaries of the training data used as well as the data governance measures used to examine the suitability of the data;
  • information about the known or reasonably foreseeable limitations of the system as well as the risks of algorithmic discrimination and measures taken to mitigate such risks;
  • the system's purpose, intended benefits and uses, and intended outputs;
  • how the system was evaluated for performance and mitigation of algorithmic discrimination;
  • how the system should be used, not used, and monitored; and
  • any other documentation the deployer would need in order to comply with its own obligations as well as to understand the outputs and monitor the system's performance.

Risk management & impact assessments

Deployers must implement a risk-management policy and program to govern their use of high-risk AI systems and annually review the deployment of such systems to ensure they are not causing algorithmic discrimination. In addition, deployers must perform impact assessments of high-risk AI systems at least annually or within 90 days of making an intentional and substantial modification to the system. These impact assessments must cover:

  • the purpose, intended use cases, and deployment context of, and benefits afforded by the high-risk AI system;
  • an analysis of whether the system poses known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the discrimination and the steps taken to mitigate such risks;
  • the categories of data processed as inputs and the outputs of the system;
  • metrics used to evaluate the system's performance and its known limitations;
  • a description of the transparency measures taken to provide adequate notice to consumers; and
  • information about post-deployment monitoring, oversight, and user safeguards.

Consumer notice

Both developers and deployers must make public statements. Developers must post on their websites a statement summarizing the types of high-risk AI systems they have developed and how they manage known or reasonably foreseeable risks of algorithmic discrimination. Deployers must also post a notice identifying the types of high-risk AI systems they currently deploy, describing how they manage known or reasonably foreseeable risks of algorithmic discrimination, and detailing the nature, source, and extent of the information collected and used. Additionally, both developers and deployers must disclose to consumers that they are interacting with an AI system - this obligation applies to any AI system, not just high-risk ones - unless the interaction would be obvious to a reasonable person.

Deployers have additional disclosure obligations with respect to consumers. First, deployers of any high-risk AI system must provide pre-use notice before a consequential decision is made using that system. This notice must include information about the system's purpose, the nature of the consequential decision, the deployer's contact information, and instructions on how to access the public statement described above. In addition, where applicable, the deployer must inform the consumer of their Colorado Privacy Act (CPA) right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects. Second, deployers must provide post-use notice when the high-risk AI system results in a consequential decision that is adverse to the consumer. The notice must inform the consumer about the principal reasons for such a decision (including the degree to which the high-risk AI system contributed to the decision and the types/sources of data processed by the system in making the decision); the opportunity to correct the relevant personal information; and the right to appeal the decision. [Notably, the correction right applies even if the deployer is not subject to the CPA.]

Attorney General notice

The law also includes disclosure obligations to the AG. Developers must notify the AG when they discover through testing that a high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination, or when a deployer has provided a credible report that the system has caused algorithmic discrimination. [The developer must also share that information with known deployers or other developers of that high-risk AI system.] A deployer must notify the AG when it determines that a high-risk AI system has caused algorithmic discrimination. These reports must be made without unreasonable delay and no later than 90 days after the discovery.


Exceptions

There are robust exceptions - they make up nearly 25% of the law. Some notable carve-outs include legal compliance, legal claims, physical safety, and insurers who comply with certain AI restrictions codified elsewhere. There is also a limited exception for Health Insurance Portability and Accountability Act (HIPAA) covered entities: the law does not apply to their use of AI to generate treatment recommendations so long as the health care provider must take action to implement the recommendation and the usage is not 'high risk.' Additionally, businesses with fewer than 50 full-time employees are exempt from the requirements concerning impact assessments, risk-management policies, and certain public disclosures when their data is not used to train the high-risk AI system, they use the system for its intended purpose, and the developer provided an impact assessment.

Rulemaking and enforcement

The AG can, but is not required to, engage in rulemaking to flesh out the law.

In a notable deviation from the CPA, the AG has exclusive enforcement authority - it is not shared with district attorneys. To facilitate enforcement, the AG can compel developers to turn over the documentation they shared with deployers or other developers, and can compel deployers to share their risk-management policy, impact assessments, and records concerning those assessments.

But the law throws companies a bone by codifying some defense-friendly provisions. There is a rebuttable presumption that a company used reasonable care to protect consumers from risks of algorithmic discrimination if the company complied with the law and any rules promulgated by the AG. [But this is somewhat circular: a company gets the presumption only after establishing that it complied with the law.] There is also an affirmative defense to any purported violation for a company that complied with NIST's AI risk management framework (or a comparable framework) and discovered/cured the violation based on soliciting feedback, conducting adversarial testing, or performing an internal review.

Future action

While Governor Polis signed the bill, he used his signing statement to express significant concerns that are likely a harbinger of substantial changes to come. He first urged that federal law exclusively regulate the area. Barring that, he called on the legislature to 'significantly improve' the law and work with stakeholders to 'fine tune' it to avoid imperiling innovation. Governor Polis also recommended that the legislature reexamine the law's focus on results because it deviates from the traditional approach of anti-discrimination laws - which focus on preventing intentional conduct. He concluded by urging legislators to work closely with stakeholders to reform the legislation so that it 'conform[s] with evidence-based findings and recommendations…' The concerns expressed in the signing statement are important because Governor Polis will have sway over future changes - his term does not end until 2027.

Other AI frameworks

Colorado's AI Act, while the first comprehensive state AI regulation, is just one piece of a growing patchwork of domestic and international legal restrictions on AI.


US

In the US, AI regulation is growing in spurts, with action at the state and city level along with executive action at the federal level. Congress is throwing a lot at the wall - but, so far, nothing has stuck.

New York City's Local Law 144 (passed in 2021) imposes much narrower restrictions on AI. Like Colorado, NYC is focused on bias and imposes transparency obligations. But the similarities end there. Local Law 144 has a much narrower application - it only regulates AI activity for employment decisions - and does not prohibit discriminatory uses of AI. However, it does add a requirement not clearly present in Colorado's law: the obligation to publicly share the results of a bias audit.

States with comprehensive privacy laws (including Colorado) have dipped their toes into AI regulation. But they focus on a limited subset of AI activity: automated decision-making using personal information. These laws target profiling - using automated processing of personal data to assess or evaluate certain characteristics - in furtherance of decisions producing legal or similarly significant effects. [Such decisions track the 'consequential decisions' in Colorado's AI law.]

A few states have done novel things in the space. Minnesota, in a just-passed bill, upended the paradigm by adding rights to contest such decisions and receive details on how consumers can (or could have) changed the outcome, review the personal information involved, and request reevaluation when that information was incorrect. Some states are doing interesting AI work through rulemaking; California is proposing detailed pre- and post-use notices, and Colorado's rules require disclosures that are more extensive than those in the state's AI law.

But, generally, states with comprehensive privacy laws just require a company engaged in the above-mentioned profiling to offer consumers the right to opt out and conduct a data protection assessment (DPA). Companies may be inclined to try using a DPA to also cover the Colorado AI law's requirement for an impact assessment (or vice versa). That would be a mistake. The DPA is more limited. It merely requires a risk-benefit analysis of certain processing involving personal data, while the impact assessment (as explained above) asks for more details, covers more topics, and is needed even when personal data is not involved.


EU

The EU has also been active in policing AI, with the EU AI Act set to enter into force within the next month. The EU AI Act, like the General Data Protection Regulation (GDPR), is a regulation; it sets rules that apply in member states without implementing legislation by those states. There are several similarities between the Colorado AI Act and the EU AI Act, including:

  • Effective dates: Both take effect in 2026, although the Colorado law takes effect a bit earlier in the year (February as opposed to some time in the summer for the EU AI Act).
  • Focus on high-risk uses of AI: Both are concerned with a subset of AI activity - high-risk activity. But the laws identify different uses as high risk, with the EU AI Act arguably covering everything in the Colorado law and then some. The EU AI Act defines eight areas of high risk: biometric identification and categorization of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to and enjoyment of essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and administration of justice and democratic processes.
  • Role-based requirements: Like the Colorado law, the EU regulation also imposes requirements based on role. Developers (which are referred to as 'providers') must meet the bulk of the requirements, while deployers are subject to a more limited set of obligations. In addition, the EU AI Act contains rules for importers and distributors of high-risk AI systems.

But there are also significant differences. Specifically, the EU AI Act's reach goes beyond high-risk systems and includes prohibitions on certain uses as well as obligations for general-purpose AI models. The regulation also creates a centralized database and requires providers to register themselves and their systems before placing a high-risk AI system on the market or putting it into service.

Open questions

The bill, passed on the last day of the legislative session, leaves some significant questions (and maybe areas for correction when the legislature reconvenes). Among the key issues:

  • Rulemaking: Will the AG issue comprehensive, targeted, or no rules, and how will those rules interact with the CPA's regulations?
  • Technical information: How will deployers and developers navigate the data-sharing obligations? A developer must share with a deployer 'the information necessary' for the deployer to comply with its obligations, but who determines what is necessary? A deployer will inevitably want more information than a developer wants to share.
  • Retention: When does an AI system change so much that it constitutes a new system? Deployers must keep impact assessments for three years after the 'final deployment' of the system. But determining the final deployment can be difficult given that incremental changes over time can arguably result in a completely different system.

Companies will also need to grapple with drafting ambiguities, such as provisions suggesting the law applies to companies that do not use AI in Colorado and imposing disclosure obligations concerning data not connected to the high-risk AI system.

Camila Tobón Partner
[email protected]
Josh Hansen Associate
[email protected]
Shook, Hardy & Bacon L.L.P., Denver