Connecticut: Introduction of new AI regulations in State Government

Artificial intelligence (AI) has captivated the interest of both the American public and policymakers. In fact, within the US, the 118th Congress has held 17 hearings to date on this subject. While the federal government continues to review the policy implications of AI technology, state and local governments continue to move forward with their own proposals. However, a significant drawback emerges: many of the legislative proposals and efforts nationwide concerning AI include data privacy provisions that could severely harm the ability to train these models by limiting the data available to companies and organizations.

On June 7, 2023, Governor Lamont of Connecticut signed Senate Bill 1103, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy (the Law). The Law passed unanimously in Connecticut's House and Senate.

In this Insight article, Jordan Crenshaw and Michael Richards, from the Chamber of Commerce's Technology Engagement Center (C_TEC), provide a robust section-by-section overview of the Law and highlight the potential issues associated with data limitation.

AI is defined under the Law as "(A) an artificial system that (i) performs tasks under varying and unpredictable circumstances without significant human oversight or can learn from experience and improve such performance when exposed to data sets, (ii) is developed in any context, including, but not limited to, software or physical hardware, and solves tasks requiring human-like perception, cognition, planning, learning, communication or physical action, or (iii) is designed to (I) think or act like a human, including, but not limited to, a cognitive architecture or neural network, or (II) act rationally, including, but not limited to, an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communication, decision-making or action, or (B) a set of techniques, including, but not limited to, machine learning, that is designed to approximate a cognitive task."

Section 1: Government inventory of AI systems

By December 31, 2023, Connecticut's Department of Administrative Services must conduct an inventory of all State agencies' systems and provide a record of those using AI. Subsequently, this inventory process will become an annual requirement. The following information will be a part of the inventory and be provided to the public through the State open data portal:

  • the name of the system and its specific vendor;
  • a description of the general capabilities and uses of the system; and
  • whether the system is used to independently make, inform, or materially support a conclusion, decision, or judgment.

Starting from February 1, 2024, the Department of Administrative Services will be required to assess each system listed in the inventory to determine whether any of its outputs unlawfully discriminate against, or have a disparate impact on, protected groups of individuals.

Section 2: Office of Policy and Management policies and procedures

Effective by February 1, 2024, Connecticut's Office of Policy and Management (OPM) will be required to develop policies and procedures around AI systems' development, procurement, implementation, utilization, and ongoing assessments.

These policies and procedures will need to include, at a minimum, the following elements:

  • oversight of each AI system's procurement, implementation, and ongoing assessment;
  • ensuring the systems do not result in any unlawful discrimination against any individuals or groups of individuals;
  • stipulations requiring agencies to assess the impact of each system before implementation; and
  • allowance for the Department of Administrative Services to conduct ongoing assessments to ensure that system outputs are not unlawfully discriminatory.

Furthermore, the OPM can revise its policies and procedures if deemed necessary by the Secretary of the OPM. All procedures, policies, and subsequent updates must be posted on the OPM's website.

Starting from February 1, 2024, any agency that wishes to use an AI tool must first have fulfilled the abovementioned requirements. Furthermore, if an agency head determines, at their sole discretion, that a system will result in unlawful discrimination or a disparate impact, they have the authority not to implement the system.

Section 3: Judicial Department inventory policies and procedures

Starting from December 31, 2023, Connecticut's Judicial Department will be required to conduct an inventory of its AI systems, with the results to be published on the agency's website. This inventory must include the names of systems and specific vendors, a description of the general system capabilities, and whether the systems are used to independently make, inform, or materially support decisions and judgments.

The Judicial Department will also be tasked with developing its own policies and procedures concerning the department's development, procurement, implementation, utilization, and ongoing assessment of AI systems.

Furthermore, the Law allows the Judicial Department to revise these policies and procedures if deemed necessary by the Chief Administrator. These policies, procedures, and subsequent updates must be posted on the department's website.

Starting from February 1, 2024, the Law will prohibit the Judicial Department from implementing an AI system that has not met the requirements listed above.

Section 4: Data privacy requirements for State contractors

Under the new Law, businesses contracting with the State of Connecticut are now mandated to adhere to the guidelines set forth by the Connecticut Act Concerning Personal Data Privacy and Online Monitoring (CTDPA). Among its provisions, the CTDPA grants consumers rights to access, delete, and correct their data. Additionally, the CTDPA empowers consumers to opt out of targeted advertising, data sales, and automated profiling relating to decisions made "in the provision […] of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health care services or access to essential goods or services." This year, the CTDPA was amended to add new restrictions concerning health data.

Companies, particularly those that previously did not meet Connecticut's CTDPA coverage threshold of holding the data of at least 100,000 State residents, must now take heed and comply with the privacy Law's requirements in order to do business with the State. Additionally, it is unclear whether such contractors will be subject only to the CTDPA's requirements for data processors or also to its controller duties.

Section 5: Establishment of an AI working group

The Law establishes an AI working group as a constituent part of the Legislative Department. This group will be composed of experts in AI, automated systems, government policy, and related fields. The working group will be tasked with developing a report providing critical recommendations to the joint standing committee of the General Assembly. The report will focus on the following key areas:

  • development of best practices and procedures for the State Government's ethical and equitable use of AI;
  • review of the policies and procedures developed by the Office of Policy and Management;
  • utilization of the "Blueprint for an AI Bill of Rights" and other documents to develop recommendations around the regulation of private sector use of AI and the adoption of a Connecticut AI bill of rights based on the Blueprint; and
  • recommendations concerning the adoption of other legislative proposals on AI.

The working group will consist of voting members appointed by elected leaders from various sectors, including industry and academia, along with other experts from the public sphere. Non-voting members will include legislators, the State Attorney General, Treasurer, Comptroller, and Chief Data Officer, among others.

Appointments must be provided no later than 30 days after the Law's effective date, June 7, 2023. Any action the working group takes requires a quorum of at least 50% of the voting members present. The chairpersons of the joint standing committee of the General Assembly and the Executive Director of the Connecticut Academy of Science and Engineering will chair the working group. The General Assembly committee will provide administrative staff for the working group. The working group's first meeting must occur no later than 60 days after June 7, 2023.

Limitations of data have profound consequences for AI

While Connecticut's Law has now been enacted, it contains stipulations that may hinder its overall objective of creating lawful and non-discriminatory AI.

In the realm of data science, it is frequently said that "bad data in, bad data out," meaning that good, robust data is vital to producing an accurate output. However, Section 4 of the Law requires that businesses that wish to contract with the State to provide services must adhere to the CTDPA. It is important to note that the Connecticut Governor also signed into law Senate Bill 3, An Act Concerning Online Privacy, Data and Safety Protections, which expanded limitations on what data may be collected and on the use of particular types of sensitive data.

Such limitations on sensitive data restrict vendors developing products for Government-procured systems from training their models on the most robust and inclusive datasets. Frequently, data classified as sensitive is critical for ensuring systems take into account the potential impacts on protected groups. Limiting and excluding data vital to these systems could make them less inclusive. This underscores the necessity for AI systems to be equipped with the requisite data to train models, thus ensuring that the outputs remain both lawful and non-discriminatory.

Jordan Crenshaw Senior Vice President
[email protected]
Michael Richards Policy Director
[email protected]
C_TEC, Washington D.C.