USA: Employment practices in the emerging AI regulatory framework
Artificial intelligence (AI) has become a transformative force across every industry, revolutionizing the way businesses operate and impacting employment practices worldwide. In the US, the rapid advancement of AI has led to significant changes in the job market, with both positive and negative effects on employment. As AI continues to evolve, regulators and legislators have taken notice. They have responded with numerous proposals to address potential challenges and ensure a fair and inclusive future of work. Natalie Koss, Esq., Managing Partner at Potomac Legal Group PLLC, provides insight into the impact of AI on employment practices and how employers can prepare for changing legislation.
U.S. employers face the unique challenge of complying with fast-moving changes in new laws and regulations at the federal and state level. Organizations with employees in multiple jurisdictions must develop employment practices that comply not only with federal rules but also with a web of different requirements from each state.
While many proposals have far to go, others have already become law, with New York City leading the way, and California likely to soon pass the first statewide AI employment law. The federal Equal Employment Opportunity Commission (EEOC) has also moved quickly to draft a framework of regulatory priorities.
Based on these state and federal actions, employers are seeing the direction of the emerging AI regulatory framework and can begin preparing for future laws and regulations.
The impact of AI on employment practices
The introduction of AI technology has brought automation and generative content to every office, factory, and institution. Individual employees and entire organizations have increased efficiency, productivity, and cost-effectiveness. At the same time, automation has the potential to eliminate certain tasks previously performed by humans, rendering a significant number of skills obsolete.
While some jobs may be eliminated, the rise of AI has also created new employment opportunities. AI technology requires skilled professionals to develop, maintain, and manage these systems. As a result, there has been a shift in demand for workers with expertise in AI, data science, machine learning, and other related fields. Upskilling and reskilling initiatives are essential for employees to adapt to the changing job landscape and remain competitive in the AI-driven economy.
Bias and discrimination
AI does not come without risk to employees and liability to employers.
AI systems are trained using vast amounts of data, and if this data is biased, it can lead to discriminatory outcomes. States are particularly concerned about AI-powered hiring tools that may inadvertently perpetuate existing biases by favoring certain demographic groups. Addressing bias in AI algorithms is crucial to ensuring fair and equal employment practices. Employers must be vigilant in identifying and mitigating biases in their AI systems to create a more inclusive and diverse workforce.
Regulatory proposals for AI and employment
U.S. lawmakers and regulators are proposing a legal framework that requires companies to ensure transparency and accountability in AI systems. This includes providing explanations for automated decisions that affect employees, such as hiring or promotion algorithms.
Transparent AI systems can help mitigate biases and ensure fairness in employment practices. Employers should be prepared to comply with these regulations and be able to provide clear explanations and justifications for decisions made by AI systems.
As AI relies on vast amounts of data, privacy concerns become paramount. Employers must ensure compliance with regulations regarding health and personally identifying information to maintain trust with employees and avoid legal repercussions. Implementing robust data privacy and protection measures, such as data encryption, secure storage, and consent mechanisms, will be essential for employers to navigate the legal landscape.
Regulatory proposals also emphasize the ethical use of AI in the workplace, requiring employers to avoid discriminatory practices and ensure accountability for the decisions made by AI systems. This may involve conducting regular audits of algorithms and implementing mechanisms to address biases and errors.
Employers should proactively assess their AI systems and algorithms to identify and mitigate biases. Regular ethical audits can help ensure fairness, transparency, and accountability in automated decision-making processes.
EEOC and federal regulation of AI in employment
The primary purpose of the EEOC is to enforce federal anti-discrimination laws, such as Title VII of the Civil Rights Act.
Although the EEOC has not yet made any specific regulatory AI proposals, it has held hearings and published technical assistance documents addressing employers' use of AI.
One way to interpret the EEOC's position on AI is that an employer's actions must not be discriminatory regardless of whether a human or a machine makes the decision. In other words, what matters is the outcome: the decision must not be discriminatory.
While federal regulators likely will introduce new AI-specific rules, the mandate to employers will remain unchanged. Employers must keep their human and machine decision-making free from discriminatory animus.
It is likely that future AI regulation will address specific AI systems and provide additional penalties for systems that go unchecked. For the time being, employers should continue to self-monitor their systems to ensure that their machine and human decision-making comports with federal and state anti-discrimination laws.
EEOC hearing and technical assistance document
Early in 2023, the EEOC conducted a hearing to examine the impact of AI and automated systems on employment decisions. The purpose of the hearing was to identify potential concerns regarding employers' use of AI in various areas of employment decision-making.
Titled 'Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier,' the hearing brought together perspectives from computer scientists, civil rights advocates, legal experts, industrial-organizational psychologists, and employer representatives.
As a result of the hearing and its investigations, the EEOC expressed concerns about the use of AI in employment decisions, as many companies rely on AI technologies for processes such as hiring, promotion, and more. These tools include resume screening, video-interviewing software that evaluates facial expressions and speech, and software that assesses 'job fit' based on personality, aptitude, or skills.
The EEOC noted that these AI systems might have a negative impact on protected groups and individuals who are particularly vulnerable, such as immigrants, individuals with disabilities, those with criminal records, LGBTQIA+ individuals, older workers, and those with limited literacy or English proficiency.
The EEOC has also stated that it aims to address and eliminate technological barriers that result in discrimination, including the use of AI systems that are biased and intentionally exclude or harm protected groups, restrictive online application processes, and screening or performance-evaluation tools that negatively impact workers based on their protected status.
Recently, the EEOC followed up the hearing by publishing a comprehensive technical assistance document to provide guidance on ensuring compliance with civil rights laws and promoting fairness and equality in the workplace.
This document builds upon previous releases by the EEOC, including technical assistance on AI and the Americans with Disabilities Act, as well as a joint agency pledge. It addresses common inquiries from employers and tech developers regarding the application of Title VII to the use of automated systems in employment decisions.
By providing guidance, the document enables employers to evaluate whether their use of AI-powered systems may have a disparate impact on protected characteristics outlined in Title VII, such as race, color, national origin, religion, or sex, which includes pregnancy, sexual orientation, and gender identity.
New state and local AI employment laws and proposals
Several states and cities in the US have taken steps to address the use of AI in employment practices through the implementation of specific laws and proposed legislation. These laws aim to protect the privacy and rights of employees in the context of AI technologies.
New York City is the first major U.S. city to regulate AI in employment with Local Law 144 on automated employment decision tools. Under the law, employers must notify prospective job candidates if they use automated hiring systems and must conduct an annual bias audit of those systems. The law defines an automated hiring tool as any tool that employs machine learning or AI with data analytics to evaluate employment candidates.
Under the law, an independent auditor must perform an annual bias audit of the automated hiring tool, assessing whether it exhibits any discriminatory biases based on protected characteristics such as race, sex, and ethnicity, among others. Non-compliance with the law can result in penalties ranging from $500 to $1,500 per day.
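At the core of such a bias audit is a comparison of selection rates across demographic categories. The sketch below illustrates that calculation in Python; the category names and candidate counts are hypothetical, and a real audit under Local Law 144 must be performed by an independent auditor across all required sex and race/ethnicity categories.

```python
# Minimal sketch of the selection-rate / impact-ratio calculation at the heart
# of a hiring-tool bias audit. All data below is hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of candidates in a category that the tool selected."""
    return selected / total

def impact_ratios(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate divided by the highest selection rate.

    A ratio well below 1.0 (commonly below 0.8, per the EEOC's informal
    'four-fifths rule') flags potential disparate impact for review.
    """
    rates = {cat: selection_rate(sel, tot) for cat, (sel, tot) in results.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: category -> (candidates selected, candidates screened)
audit = {
    "group_a": (60, 100),  # 60% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
ratios = impact_ratios(audit)
# group_b's ratio (0.5) falls below the 0.8 threshold, flagging the tool for review
```

The four-fifths rule used here is a screening heuristic, not the legal test itself; an actual audit would also involve statistical significance analysis and intersectional categories.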
The California State Assembly recently proposed A.B. 331, a bill aimed at regulating artificial intelligence systems such as automated decision-making tools (ADTs). The bill focuses on addressing inherent biases within these systems.
A.B. 331 would require developers and users of ADTs to conduct and document an impact assessment covering the intended use of the system, details of the data used, and the rigor of the statistical analysis performed. The assessment must include an analysis of potential adverse impacts based on protected characteristics such as race, color, ethnicity, sex, religion, age, national origin, or any other classification protected by state law.
Washington, D.C., Maryland, and Virginia
The District of Columbia Council earlier this year introduced a bill to combat algorithmic discrimination in employment decisions. Known as the 'Stop Discrimination by Algorithms Act of 2023,' the bill aims to prohibit employers from engaging in discriminatory practices using algorithms and requires service providers to ensure that their AI tools comply with the law.
Under the legislation, employers would be obligated to conduct an annual discrimination audit performed by a third party, with reporting requirements. The proposal would also mandate that employers require their AI service providers to adhere to the regulations outlined in the bill.
The bill encompasses a broad definition of protected data that algorithms may utilize, including IP addresses, equipment identification, consumer purchase history, geolocation data, education records, and certain automobile records.
Two states neighboring Washington, D.C., have also taken steps to address privacy concerns related to automated systems.
Maryland has enacted a law that prohibits employers from using facial recognition technology to create a pattern of an applicant's face without explicit permission. This legislation safeguards the privacy and consent of individuals in the state.
Virginia, like Washington, D.C., does not have specific privacy laws in place for employees. Moreover, Virginia's consumer data privacy law contains an exemption for employment data, meaning the law may not extend to employees the same privacy protections it affords consumers.
Preparing for proposed laws and bills
Employers need to closely follow proposed laws and bills related to AI and employment practices. It is essential to stay informed about the evolving regulatory landscape and understand the potential implications for their organizations. Engaging with policymakers, industry associations, and legal experts can help employers actively participate in the legislative process, provide input, and contribute to shaping fair and effective regulations.
By conducting comprehensive ethical audits of their AI systems and algorithms, employers can identify areas for improvement, implement corrective measures, and ensure that automated decision-making aligns with legal and ethical standards.
Companies must develop and implement clear ethical guidelines for the use of AI systems. These guidelines should address issues such as privacy, bias, discrimination, and algorithmic accountability. Employers should update their employee handbooks and provide an overview of the AI systems used to assist employment reviews, promotions, and discipline, and inform employees in writing that the systems comply with federal, state, and local rules, including bias assessments.
Regular training and awareness programs can ensure employees understand and adhere to these guidelines. Employers should create a culture that prioritizes ethical considerations in AI use, encouraging employees to act responsibly and ethically when leveraging AI technologies.
The future of employment in an AI world
As AI continues to shape the employment landscape, employers must be prepared for the potential effects and challenges it brings. The current trend of US employment regulation shows that the emerging AI regulatory framework will seek to prevent and penalize discriminatory actions recommended by machines while further protecting employee data and privacy.
Employers that have enacted internal policies and complied with anti-discrimination laws and regulations in the past are best situated to adapt to new AI laws. Every employer should take the time to review their employment policies, handbooks, and automated systems and make the necessary changes to bring their employment practices into full compliance.
The emerging regulation of AI and employment practices aims to create a fair and inclusive future of work. By staying informed, conducting ethical audits, investing in workforce development, fostering collaboration, and establishing ethical guidelines, employers can navigate these changes successfully and contribute to a positive and equitable AI-powered work environment. It is crucial for employers to embrace AI as a tool for innovation while ensuring that it is used responsibly and ethically to protect the rights and well-being of employees.
Natalie Koss Managing Partner
Potomac Legal Group PLLC, Washington, D.C.