USA: How the FTC is influencing AI regulation

Over the years, as part of its role as the primary federal consumer protection regulator, the Federal Trade Commission (FTC) has filled a void in the oversight and regulation of new technologies. Most recently, the rapid adoption of artificial intelligence (AI), machine learning, and other algorithmic decision-making systems (AI tools) - supercharged by the public release of powerful generative AI models - has raised the FTC's concern about possible harm to consumers. With no federal law that specifically regulates AI, the FTC has sought to use its existing consumer protection authority to constrain harmful AI-related business practices. 

Primarily, the FTC has authority under Section 5 of the FTC Act to prohibit businesses from engaging in deceptive or unfair business practices, which it has long used to regulate companies' data practices. With increasing and novel uses of AI and other algorithmic data processing tools, the FTC has issued a number of guidance documents and engaged in enforcement activity demonstrating what it believes to be deceptive or unfair when businesses use these tools. Businesses that do not follow this guidance face investigation and potential enforcement, with the FTC crafting creative penalties designed to deter improper behavior, including the disgorgement of algorithms, data, and other inputs to and outputs of unlawful AI systems. 

In this Insight article, Bret Cohen, from Hogan Lovells, covers some of the AI business practices that the FTC considers unfair or deceptive, describes penalties available to the agency when bringing a Section 5 claim for use of AI tools, and explains the FTC's views on best practices for use of these tools. 


Deceptive trade practices 

Under Section 5, a business engages in a deceptive practice if a statement, omission, or other practice is likely to mislead a consumer acting reasonably under the circumstances, to the consumer's detriment. FTC guidance has identified the following AI-related business practices that it considers deceptive to consumers. 

Misleading consumers about the nature of AI tools and how they are used  

AI tools - particularly generative AI tools - are increasingly able to make interactions look and feel like ordinary human exchanges. The FTC advises businesses not to mislead consumers about the nature of these interactions. For example, the FTC brought an enforcement action against an online dating service that created chatbots posing as other human users of the service, with the goal of inducing potential customers to sign up. In addition, the FTC has cautioned that creating technology that is effectively designed to deceive - such as deepfake videos and voice clones - can itself be a deceptive practice.

Misleading consumers about what AI-based products can do  

The FTC is monitoring companies that make exaggerated claims about what their AI tools can do, or about the accuracy of those tools' predictions, without evidence to back those claims. The FTC has issued guidance on this point with respect to tools marketed as being able to identify AI-generated content, noting that such claims could be deceptive if not supported by scientific testing. The agency has also cautioned companies against marketing products or digital items created by AI tools as human-generated, and advised that, when offering a product that relies on outputs from generative AI models, companies should inform customers about the extent to which the training data includes copyrighted or otherwise protected material.

Misleading consumers about AI tools' collection and deletion of data  

The FTC has warned companies to be truthful when collecting information to be used in AI or machine learning algorithms. For example, the FTC brought an enforcement action against a company that told users it would not apply facial recognition technology to photos they uploaded to the service unless they affirmatively activated the feature, but then automatically activated the feature for users in all but a few jurisdictions. The FTC also alleged that the company told users it would delete their photos if they deactivated their accounts, but instead retained them indefinitely. 

Unfair trade practices 

A practice or act is unfair under Section 5 if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers themselves and is not outweighed by countervailing benefits to consumers or to competition. The FTC has taken the position that the use of AI or other automated decision-making systems is unfair if those systems make biased, discriminatory, or incorrect decisions about individuals that could have been avoided through better oversight. In its guidance and enforcement actions relating to unfair use of AI tools, the FTC has advised the following. 

Test AI and automated systems for incorrect or harmful outcomes before deploying them  

In one case, the FTC alleged that a retailer engaged in unfair practices when it used AI-based facial recognition technology, trained on poor-quality data, to identify customers who had engaged in shoplifting. Acting on false-positive alerts, employees followed innocent consumers around its stores, searched them, ordered them to leave, called the police to confront or remove them, and publicly accused them of wrongdoing. 
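To make the testing expectation concrete, the following is a minimal, hypothetical Python sketch of the kind of pre-deployment false-positive check that could flag such a system before it is used on consumers. The data, threshold values, and function names are illustrative assumptions, not drawn from the FTC's case record or any FTC standard.

```python
# Hypothetical pre-deployment check for a face-matching system, in the spirit
# of the FTC's expectation that systems be tested for harmful outcomes before
# deployment. All values below are illustrative assumptions.

# Holdout evaluation data: (similarity_score, is_true_match) pairs produced by
# the candidate model on labeled test pairs. Synthetic values for illustration.
holdout = [
    (0.97, True), (0.91, True), (0.88, False), (0.95, True),
    (0.72, False), (0.93, False), (0.99, True), (0.85, False),
]

MATCH_THRESHOLD = 0.90        # candidate alerting threshold (assumed)
MAX_FALSE_MATCH_RATE = 0.05   # illustrative go/no-go bar, not an FTC rule

def false_match_rate(pairs, threshold):
    """Fraction of non-matching pairs the system would wrongly alert on."""
    non_matches = [score for score, is_match in pairs if not is_match]
    false_alerts = sum(1 for score in non_matches if score >= threshold)
    return false_alerts / len(non_matches)

fmr = false_match_rate(holdout, MATCH_THRESHOLD)
print(f"false match rate at threshold {MATCH_THRESHOLD}: {fmr:.2%}")
if fmr > MAX_FALSE_MATCH_RATE:
    print("do not deploy: false alerts would wrongly target innocent people")
```

The point of a check like this is simply to quantify, before deployment, how often the system would wrongly flag someone - the harm at the center of the retailer case above.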

Avoid discriminatory decisions  

In the same case, the FTC alleged that the facial recognition algorithm disproportionately impacted people of color, leading to harmful outcomes on the basis of race. The FTC has also advised that AI tools and algorithms operating in areas like housing, credit, or other circumstances in which inaccuracies could have significant effects on consumers could violate other consumer protection and civil rights laws enforced by the FTC, such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act. 

Penalties 

Historically, in cases involving a violation of Section 5, the FTC has often obtained consent orders - effectively settlements - through which it prohibits the company from engaging in further unfair or deceptive conduct. It typically requires the company to adopt internal compliance measures before deploying similar technology and to report regularly on its compliance to the agency. If violated, these consent orders carry significant fines of over $50,000 per violation, which the FTC treats as applying per affected consumer - so a violation involving even 20,000 consumers could, in principle, reach $1 billion, and penalties can readily aggregate into billions of dollars. 

More recently, the FTC has also required companies that it deems to have trained AI tools on data collected under unfair or deceptive circumstances to delete the algorithms or other AI or machine learning models created using the improperly obtained data. In imposing this penalty, the FTC seeks to disincentivize the deceptive, harmful, or otherwise unlawful collection of training data by requiring the deletion of the results of that unlawful data collection and processing. 

Best practices 

The FTC is focused on protecting consumers from harm potentially caused by the improper use of AI tools and the marketing around them. To demonstrate compliance with the FTC's expectations about the fair and accurate use of AI tools, companies should be prepared to show that they have taken the actions outlined below. 

Carefully review claims about AI tools and the circumstances around their use  

Companies should review such claims to ensure that consumers are not deceived, and should be especially transparent about the use of sensitive data. The FTC advises that companies notify consumers about how and when their personal information will be used by AI tools or used to develop them - especially if the information collected is sensitive, such as facial recognition or other biometric data - and that they review marketing claims to make sure they accurately reflect the tools' abilities and limitations. 

Don't give consumers the wrong impression  

If AI tools are used to chat with consumers or to mimic human behavior, companies should make sure that consumers know they are interacting with an AI tool and are not given the impression that they are dealing with a human. If AI tools are used to generate outputs such as stories, graphics, or advertisements, companies should not imply that those outputs were created by a human. 

Make sure that AI models are validated and revalidated to work as intended, and do not illegally discriminate  

The FTC takes the position that companies are responsible for considering whether inputs to their AI models could lead to biased, discriminatory, or incorrect outcomes. For example, the agency has cautioned against decision-making algorithms that rely on data points, such as zip codes or census tracts, that may serve as proxies for characteristics protected under antidiscrimination laws and could disadvantage certain ethnic groups. In addition, the FTC has urged businesses to evaluate whether the outputs of their models do, in fact, discriminate (a simple check along these lines is sketched after the list below). Before, during, and after the development of AI tools or models, the FTC advises that companies ask questions such as:  

  • How representative is your data set?  
  • Does your data model account for biases?  
  • How accurate are your predictions?  
  • Does your reliance on big data raise ethical or fairness concerns?1 
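The output-auditing step in particular lends itself to simple, repeatable checks. Below is a minimal, hypothetical Python sketch of the kind of group-level audit these questions point toward: it computes selection and false-positive rates per demographic group and flags large gaps for review. The data, group labels, and the 80% screening heuristic are illustrative assumptions, not FTC requirements.

```python
# Hypothetical audit of a binary classifier's outcomes by demographic group,
# in the spirit of the FTC's questions above. Synthetic data for illustration.
from collections import defaultdict

# Each record: (group, true_label, predicted_label) from a deployed model.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]

def rates_by_group(records):
    """Compute selection rate and false-positive rate per group."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "neg": 0, "fp": 0})
    for group, truth, pred in records:
        s = stats[group]
        s["n"] += 1
        s["selected"] += pred
        if truth == 0:           # a true negative that may be wrongly flagged
            s["neg"] += 1
            s["fp"] += pred
    return {
        g: {
            "selection_rate": s["selected"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else None,
        }
        for g, s in stats.items()
    }

rates = rates_by_group(records)
for group, r in rates.items():
    print(group, r)

# One common screening heuristic (not an FTC standard): flag for review any
# group whose selection rate falls below 80% of the highest group's rate.
selection = {g: r["selection_rate"] for g, r in rates.items()}
best = max(selection.values())
flagged = [g for g, rate in selection.items() if rate < 0.8 * best]
print("groups flagged for disparate-impact review:", flagged)
```

Equal selection rates alone are not enough: in this synthetic example the groups are selected at the same rate, but their false-positive rates differ sharply - exactly the kind of disparity (innocent people wrongly flagged) at issue in the retailer case discussed above.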

Bret Cohen, Partner 
[email protected]
Hogan Lovells, Washington, D.C. 


1. The author thanks Rose Grover for contributions to this article.