USA: Discrimination in a high-tech world - potential pitfalls of AI in employment
Artificial intelligence (AI) is among us, and there are a number of ways that AI can be used throughout the course of the employment relationship. Amber Rogers, Brittany Bacon, Katherine Sandberg, and Danielle Dobrusin, from Hunton Andrews Kurth, trace the regulatory landscape governing the use of AI in the US, with concluding practical tips for avoiding discrimination when using AI.
Introduction
AI is an umbrella term for a range of technologies that aim to conduct human-like cognitive processes. Tasks that humans have traditionally performed by thinking and reasoning are increasingly being carried out by, or with the assistance of, machines that exhibit 'intelligent' behaviors. According to figures cited at a meeting of the Equal Employment Opportunity Commission (EEOC), as many as 99% of Fortune 500 companies, and 83% of companies overall, use AI in some form during the hiring process. Employers should be careful to avoid inadvertently discriminating when using these technologies.
In the context of job interviews, for example, AI software purportedly assists employers in more quickly identifying the best candidates for hire by combining mobile interviews with game-based assessments. During the evaluation process, the AI platform can analyze a candidate's facial expressions, word choices, and gestures, and evaluate the game-based assessment results to try to determine which candidate is the best qualified for the position. After a candidate is hired, AI can be used to track performance and productivity. These are just a few examples of the types of AI uses that have triggered concerns about AI's potential contribution to discriminatory practices in the employment context. Companies using AI in the workplace are well advised to carefully consider the patchwork of legal requirements and ethical implications associated with these emerging technologies.
State laws and legislation
In the US, there is no single law that broadly and directly regulates the use of AI. Rather, there is a patchwork of laws that touch on AI in different contexts, including the use of biometric data. This section provides examples of key laws that can implicate businesses' use of AI in the employment context.
State anti-discrimination laws
Even though a state may not have laws that specifically address discrimination by AI, state anti-discrimination laws will likely apply. In general, companies run the risk of disparate impact discrimination claims, whereby a worker alleges that a facially neutral policy or practice has a discriminatory effect. Disparate treatment claims, which require proof of intent to discriminate, are less likely in the AI context because the algorithm applies equally to everyone, so any discriminatory effect would likely be unintentional. For example, speech pattern analysis used to test an applicant's ability to solve problems might inadvertently eliminate applicants with speech impediments. Resume review software might discriminate on the basis of gender by eliminating individuals with gaps in their resumes, thereby excluding candidates who left the workforce to care for children. Regarding race and color, AI software has often been trained primarily on images of lighter-skinned individuals and, as a result, has historically had difficulty detecting the faces of individuals with darker skin.
Biometric privacy laws
While there is currently no federal or state privacy law that directly governs the use of AI in the employment context, companies that use AI software in the recruiting or employment context could also be subject to various state biometric privacy laws. AI can use biometric data in video interviews, in speech patterns or voice recognition, and in comparing stored fingerprints for employees who clock in and out of work. It is important to note that multiple state laws could apply at once: for example, an employer located in state A might conduct an AI video interview with an employee in state B using the AI software of a company located in state C.
Currently, Illinois, Texas, and Washington have laws that govern the collection and storage of biometric identifiers in the employment context. There are also city-level ordinances in Baltimore, Portland (Oregon), and New York City.
The most robust law is Illinois' Biometric Information Privacy Act (BIPA), which provides a private right of action allowing aggrieved parties to recover $1,000 per negligent violation and up to $5,000 per intentional or reckless violation. Several cases decided by the Illinois Supreme Court have interpreted the law in favor of plaintiffs and against businesses. For example, in Cothron v. White Castle, the Court determined that a separate claim accrues under BIPA each time a company collects or discloses an individual's biometric data without consent (so, for example, each time an employee clocks in on a biometric clock). Then, in Tims v. Black Horse Carriers, the Court determined that a five-year statute of limitations applies to claims under BIPA. Given the separate accrual of claims and the lengthy statute of limitations, BIPA exposes companies to significant potential liability.
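To see how quickly per-scan accrual can compound, consider a minimal sketch of the arithmetic. The workforce size, scan frequency, and schedule below are illustrative assumptions rather than facts from any case, and courts retain discretion over the damages actually awarded:

```python
# Hypothetical BIPA exposure under per-collection accrual (Cothron)
# and a five-year limitations period (Tims). All workforce figures
# are illustrative assumptions, not facts from any case.
scans_per_day = 2          # assumed: clock in and clock out once each
workdays_per_year = 250    # assumed full-time schedule
years = 5                  # five-year statute of limitations
employees = 100            # assumed workforce size

violations = scans_per_day * workdays_per_year * years * employees
negligent = violations * 1_000   # $1,000 per negligent violation
reckless = violations * 5_000    # up to $5,000 per intentional/reckless violation

print(f"Accrued violations: {violations:,}")    # 250,000
print(f"Negligent exposure: ${negligent:,}")    # $250,000,000
print(f"Reckless exposure:  ${reckless:,}")     # $1,250,000,000
```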
Laws related to video interviews
Both Maryland and Illinois have enacted laws that restrict an employer's ability to use AI to analyze video interviews of candidates. In Illinois, the Illinois Artificial Intelligence Video Interview Act requires employers to disclose the use of AI to the applicant, explain how the AI works, and then obtain consent from the applicant prior to the interview. Similarly, in Maryland, employers must obtain written consent and a waiver in order to use facial recognition technology (FRT) during job interviews.
State consumer privacy laws
State consumer privacy laws in Colorado, Connecticut, and Virginia grant consumers the right to opt out of certain forms of automated processing of personal data in furtherance of decisions that produce legal or similarly significant effects concerning a consumer and, in some cases, require businesses to conduct and document a data protection assessment with respect to such processing. While a business's use of AI may ordinarily be subject to these requirements, it is important to note that these laws do not apply to individuals acting in a commercial or employment context.
NYC LL 144
In addition to the state laws discussed above, a New York City local law, LL 144, prohibits an employer or employment agency from using automated employment decision tools (AEDTs) to screen a candidate or employee for an employment decision unless:
- the AEDT has been subject to a 'bias audit' within one year prior to using it; and
- a summary of the bias audit results and the distribution date of the AEDT have been posted to the employer or employment agency's website prior to using the tool.
The term 'bias audit' means an impartial evaluation by an independent auditor that includes testing an AEDT to assess its disparate impact on specific categories of individuals (an illustrative sketch of this type of computation appears after the list below). LL 144 also requires any employer or employment agency in NYC using an AEDT for screening to provide notice to each employee or applicant residing in NYC. The notice must include:
- prior notice that an AEDT will be used and that a candidate may request an alternative selection process or accommodation;
- prior notice of the job qualifications and characteristics the AEDT will use to assess the candidate or employee; and
- notice of the type, source, and retention policy relating to data collected for the AEDT.
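For a sense of what the core computation of a bias audit can look like, the sketch below compares selection rates across categories and computes an impact ratio for each category against the most-favored one. The category names and numbers are invented, and the four-fifths benchmark is the EEOC's general guideline for disparate impact rather than a threshold quoted from LL 144:

```python
# Minimal disparate impact check of the kind a bias audit might run:
# selection rate per category, and each category's impact ratio
# relative to the most-selected category. Data and category names
# are illustrative assumptions.
outcomes = {
    # category: (candidates screened in, total candidates) -- made-up numbers
    "category_a": (48, 100),
    "category_b": (30, 100),
}

rates = {group: sel / total for group, (sel, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"  # EEOC four-fifths guideline
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

An independent auditor would run this kind of analysis on real outcome data; the point is that the underlying computation is straightforward once the records exist.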
Forthcoming legislation
Other laws seeking to regulate the use of AI are on the horizon, as states race to enact legislation that keeps pace with AI's rapid rise in popularity.
For example, a new California bill, AB 331, would, among other things, require deployers of automated decision tools to perform a detailed impact assessment for the tool. The bill would also require deployers of automated decision tools used to make consequential decisions to notify any person who is the subject of such a decision that an automated tool is being used to make it.
Another notable bill introduced in New Jersey, Bill A4909, would make it unlawful to sell or offer for sale in New Jersey an AEDT unless:
- the AEDT is the subject of a bias audit (i.e., an impartial evaluation, including, but not limited to, testing of an AEDT to assess its predicted compliance with applicable laws relating to discrimination in employment) conducted within the year prior to selling the AEDT or offering it for sale;
- the sale of the AEDT includes, at no additional cost, an annual bias audit service that provides the results of the audit to the purchaser; and
- the AEDT is sold or offered for sale with a notice stating that the AEDT is subject to the provisions of the bill.
In addition, any person who uses an AEDT to screen a candidate for an employment decision must notify each candidate, within 30 days of the use of the AEDT, that:
- the AEDT which is subject to an audit for bias pursuant to the bill was used in connection with the candidate's application for employment; and
- the AEDT assessed the job qualifications or characteristics of the candidate.
Litigation landscape
Generally, litigation in the area of AI in the workplace is in its infancy. However, we can expect that to change in the coming years, as lawsuits have already started to trickle in.
AI and the NLRA
Another area of concern for all employers is the intersection of AI and the National Labor Relations Act (NLRA). Section 7 of the NLRA provides employees with the right to self-organize and form, join, or assist labor organizations, and engage in concerted activities. Section 8 prohibits employers from engaging in activities that interfere with employees' Section 7 rights.
On October 31, 2022, the National Labor Relations Board (NLRB) General Counsel issued Memorandum GC 23-02, which states the NLRB's intention to prevent intrusive or abusive electronic monitoring of employees that might impair or negate their ability to engage in concerted activity under Section 7 (for example, requiring warehouse workers to wear devices that track movement or conversations, or tracking drivers using GPS and in-vehicle cameras). According to the General Counsel's memo, 'numerous practices that employers may engage in using new surveillance and management technologies are already unlawful.' Employers that implement new technology, or use existing technology, to conduct surveillance of Section 7 activity can run afoul of Section 8. The memo further notes that employers who discipline employees for protesting the use of AI in the workplace may also violate Section 8.
The memo requests that the NLRB adopt a framework for protecting employees from monitoring that interferes with Section 7 activity.
EEOC and FTC
The EEOC and the Federal Trade Commission (FTC) are part of an agency coalition with the Consumer Financial Protection Bureau and the Department of Justice's Civil Rights Division that was formed to enforce existing civil rights laws in light of the potential for discrimination that AI brings. On April 25, 2023, the agencies issued a Joint Statement on Enforcement Efforts against Discrimination and Bias in Automated Systems, which explains that the agencies are committed to enforcing laws to protect against discrimination arising from the use of automated systems and algorithmic processes. Both the EEOC and FTC have been active in issuing guidance and initiatives related to companies' use of AI tools.
The EEOC
The EEOC Algorithmic Fairness Initiative, formally announced in October 2021, is designed to study AI tools and promote compliance with existing federal civil rights laws by, among other things, issuing technical assistance, holding listening sessions with stakeholders about AI tools and their impact on employment, and gathering information about the design and impact of AI technologies in the workplace.
In May 2022, the EEOC published its first guidance, which concerned compliance with the Americans with Disabilities Act and the use of AI. In particular, the EEOC noted that employers should:
- provide reasonable accommodations where needed for job applicants and employees who are evaluated by, or use, an AI tool;
- take measures to ensure that qualified individuals are not screened out by AI software due to their disabilities; and
- avoid using AI tools that pose disability-related inquiries or that conduct medical examinations before a conditional offer of employment is extended.
Most recently, on January 31, 2023, the EEOC conducted a hearing on the benefits and drawbacks of the use of AI in employment decisions.
The FTC
In 2022, the FTC issued a report to Congress that explained its concerns about bias in AI. Following that report, the FTC issued a series of blog posts in February, March, and May 2023 focusing on businesses' use of AI.
In the February 2023 blog post, the FTC provides general guidance to businesses that develop or are considering developing AI technologies and advises them not to make false or unsubstantiated claims about their products' efficacy.
In the March 2023 blog post, the FTC describes the use of generative AI technologies for fraud and warns businesses who develop or use generative AI not to partake in, or enable, any deceptive, unfair, or fraudulent conduct.
In the May 2023 blog post, the FTC discusses how commercial businesses can use AI tools to influence consumers' beliefs, emotions, and behaviors, and urges businesses to avoid using AI tools until developers build ethical accountability measures into their deployment. In reaction to the growing popularity and rapid commercialization of AI technologies, the FTC continues, by way of these blog posts, to monitor developments and warn the public of the potential dangers of AI in preparation for potential FTC and regulatory intervention. Additionally, the FTC has required companies to destroy algorithms that were trained on improperly collected data.
Tips for avoiding discrimination when using AI
In order to mitigate the risks of discrimination when using AI in the workplace, employers should consider a few steps. The first step is to understand how the algorithm works. Although this might sound straightforward, it is complicated by the 'black box' problem of AI: shorthand for the difficulty of tracing how an AI system learns and how it reaches its decisions. In addition, vendors of AI products want to keep proprietary information about how their software works out of the hands of competitors, so getting a straightforward answer from them may be difficult.
The second step in mitigating the risk of discrimination is to actually audit the AI tool. This involves conducting trial runs of the tool and analyzing the results (the impact ratio sketch above illustrates one simple form of such analysis). It is also helpful to engage with the vendor to determine how the software has been tested.
The third step is to set clear retention protocols with the vendor. Otherwise, the tool may not retain records, making it difficult to determine whether a specific group of candidates is being disproportionately rejected; a minimal logging sketch follows below.
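As one way to make such a retention protocol concrete, the sketch below logs each screening decision with enough context to revisit outcomes later. The field names and CSV storage are illustrative assumptions rather than a statutory requirement, and any demographic data added for audit purposes should be handled consistently with applicable privacy laws:

```python
# Minimal sketch of a decision log preserving the records needed to
# check later whether any group is being disproportionately rejected.
# Field names and CSV storage are illustrative assumptions.
import csv
from datetime import datetime, timezone

LOG_PATH = "aedt_decisions.csv"  # hypothetical log file
FIELDS = ["timestamp", "candidate_id", "tool_version", "decision", "score"]

def log_decision(candidate_id: str, tool_version: str, decision: str, score: float) -> None:
    """Append one screening decision to the retention log."""
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # write a header on first use
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "candidate_id": candidate_id,
            "tool_version": tool_version,
            "decision": decision,
            "score": score,
        })

log_decision("cand-001", "v1.2", "advance", 0.87)
log_decision("cand-002", "v1.2", "reject", 0.41)
```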
Conclusion
There will undoubtedly be more opportunities for employers to rely on AI technologies as more and more products begin integrating AI. As discussed above, there is no overarching law in the US that governs the use of AI and there are concerns that these technologies have the potential to contribute to discriminatory practices.
Accordingly, to mitigate risk and engage in the responsible use of AI, employers seeking to use AI in the employment context should carefully consider the patchwork of laws that may apply, as well as any discriminatory effects that may result from the use of a product.
Amber Rogers Partner
Brittany Bacon Partner
Katherine Sandberg Associate
Danielle Dobrusin Associate
Hunton Andrews Kurth, New York