USA Federal

Summary

Law: There is no general federal privacy law yet; however, House Resolution (HR) 8152, the American Data Privacy and Protection Act (ADPPA), has advanced out of committee and awaits consideration by the full U.S. House of Representatives. In addition, multiple sectoral laws apply at the federal level.

Regulator: The Federal Trade Commission (FTC) takes enforcement action against organisations for violations of Section 5 of the FTC Act, which prohibits unfair or deceptive acts in or affecting commerce. Moreover, under the ADPPA the FTC would have the authority to issue regulations for companies to comply with a newly introduced requirement to implement security practices to protect and secure personal data against unauthorised access. Furthermore, under the ADPPA, the FTC would be provided with the authority to enforce such requirements, together with state attorneys general (AGs) and the California Privacy Protection Agency (CPPA).

Summary: The ADPPA establishes requirements for how companies handle personal data. Specifically, it requires covered entities and service providers to limit the collection, processing, and transfer of personal data to that which is reasonably necessary to provide a requested product or service. Additionally, the ADPPA sets out legal protections for consumers' data, including the right to access, correct, and delete their personal data, and requires companies to provide individuals with a means to opt out of targeted advertising. Lastly, the ADPPA would generally pre-empt state laws that are covered by its provisions, except for certain categories of state laws and specified laws in Illinois and California.

Whilst the ADPPA is still going through the legislative process, there are several related federal laws, including the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which regulates the privacy and security of health information, the Gramm-Leach-Bliley Act of 1999 (GLBA), which requires financial institutions to explain their information-sharing practices to their customers and to safeguard sensitive data, and the Children's Online Privacy Protection Act of 1998 (COPPA), which imposes requirements on operators of websites or online services directed to children under 13 years old. The absence of a general federal privacy law or a dedicated supervisory authority has made the FTC the de facto privacy regulator, resulting in a body of case law and settlements over violations of consumers' privacy rights or failures to maintain the security of sensitive consumer information. The USA also participates in the Swiss-US Privacy Shield Framework, as well as the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules (CBPR) system, both of which facilitate the flow of data to other jurisdictions.

Furthermore, on July 10, 2023, the European Commission voted to adopt its adequacy decision for the EU-US Data Privacy Framework (DPF), concluding that the US provides a level of protection essentially equivalent to that of the EU for personal data transferred under the EU-US DPF from a controller or a processor in the EU to certified organisations in the US. The adequacy decision means that personal data transfers from controllers and processors in the EU to certified organisations in the US may take place without the need to obtain any further authorisation.

Insights

In this Insight article, Zach Lerner and Hannah Schaller, from ZwillGen PLLC, analyze the privacy challenges confronting artificial intelligence (AI) developers in US education, navigating compliance nuances with federal laws and state privacy regulations to ensure responsible AI use.

Over the years, as part of its role as the primary federal consumer protection regulator, the Federal Trade Commission (FTC) has filled a void in the oversight and regulation of new technologies. Most recently, the rapid adoption of artificial intelligence (AI), machine learning, and other algorithmic decision-making systems (AI tools) - supercharged by the public release of powerful generative AI models - has raised the FTC's concern about possible harm to consumers. With no federal law that specifically regulates AI, the FTC has sought to use its existing consumer protection authority to constrain harmful AI-related business practices. 

Primarily, the FTC has authority under Section 5 of the FTC Act to prohibit businesses from engaging in deceptive or unfair business practices, which it has long used to regulate company data practices. With increasing and novel uses of AI and other algorithmic data processing tools, the FTC has issued a number of guidance documents and engaged in enforcement activity demonstrating what it believes to be deceptive or unfair when businesses use these tools. Businesses that do not follow this guidance face investigation and potential enforcement, with the FTC devising creative penalties designed to dissuade improper behavior, including the disgorgement of algorithms, data, and other inputs to and outputs of unlawful AI systems.

In this Insight article, Bret Cohen, from Hogan Lovells, covers some of the AI business practices that the FTC considers unfair or deceptive, describes penalties available to the agency when bringing a Section 5 claim for use of AI tools, and explains the FTC's views on best practices for use of these tools. 

In this Insight article, Michael Rubin and Robert Brown, from Latham & Watkins LLP, explore the contours of the U.S. Senate's recently proposed bipartisan legislation, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA).

In this Insight article, Camila Tobón, Partner at Shook, Hardy & Bacon, explores the far-reaching impact of President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the Executive Order), delineating eight principles for the responsible development of AI.

In early 2021, the U.S. Supreme Court (the Supreme Court) issued a ruling that significantly narrowed the definition of an automatic telephone dialing system (ATDS) under the Telephone Consumer Protection Act (TCPA). Although the ruling resulted in fewer complaints alleging violations of the TCPA's auto-dialer provision, the landmark decision resulted in another, perhaps unforeseen consequence: it spurred a number of states to enact or amend their own 'mini-TCPAs.' These laws often pose additional litigation or enforcement risks for companies that call or text to communicate with consumers. Francis Nolan and Amy Albanese, from Eversheds Sutherland, explore these 'mini-TCPAs' and what impact they have on telemarketing.

On November 15, 2023, a bipartisan group of senators from the Senate Committee on Commerce, Science, and Transportation introduced the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA). This group included John Thune, Amy Klobuchar, Roger Wicker, John Hickenlooper, Shelley Moore Capito, and Ben Ray Luján.  

This legislation aims to establish a framework to enhance innovation while ensuring greater transparency, accountability, and security in the development and operation of artificial intelligence (AI) applications. Klobuchar noted the importance of AIRIA, emphasizing the "potential for great benefits, but also serious risks [of AI], and [the necessity for] our laws . . . to keep up." She explained, "[t]his bipartisan legislation is one important step of many necessary towards addressing potential harms. It will put in place common sense safeguards for the highest-risk applications of AI – like in our critical infrastructure – and improve transparency for policy makers and consumers." 

AIRIA follows the recent Executive Order on Safe, Secure, and Trustworthy AI (the Executive Order) and the Blueprint for an AI Bill of Rights recently released by the White House. The legislation assigns responsibilities to AI-related agencies, specifically requiring the National Institute of Standards and Technology (NIST) to research, facilitate, and recommend standards for verifying the provenance and authenticity of online content created or altered by AI systems. To achieve this objective, AIRIA is structurally divided into two titles: Title I – AI Research and Innovation and Title II – AI Accountability. Bennett B. Borden, Danny Tobey, Tony Samp, Coran Darling, and Ted Loud, from DLA Piper, discuss AIRIA and what the future could hold for AI regulation in the US.

Currently, no federal standard exists for regulating the collection, use, or disclosure of geofence technology data in the US. However, in 2023, five states – Utah, Washington, Nevada, New York, and Connecticut – enacted geofence technology laws. Meghan O'Connor and Ashleigh V. Giovannini, from Quarles & Brady LLP, discuss what geofence technology is and how it is used, as well as existing legislation in the US.

In today's digital economy, nearly every organization, whatever the industry, is reliant on digital infrastructure and internet connectivity. As a result, organizations are constantly vulnerable to cyberattacks such as phishing, fraud, and ransomware, and struggle to achieve adequate levels of cybersecurity preparedness and resiliency in the face of emerging threats. At the same time, many organizations are subject to existing regulatory requirements to safeguard private, health, financial, and other protected information from cyberattacks.  

In the face of these rapidly evolving cybersecurity risks, laws and regulations may seek to anticipate how best to protect the public but often lag behind technology innovations and the evolving threat landscape (e.g., the proliferation of powerful artificial intelligence [AI] applications). Indeed, state and federal regulations mandating cybersecurity safeguards and breach reporting remain, in significant ways, a patchwork of differing and disjointed requirements. Existing laws may also lack incentives for robust compliance or for voluntary, timely threat information sharing and coordination, leaving entities more vulnerable to compromise, including in their supply chains. These trends will continue to shape how regulators approach harmonizing policy moving forward, relative to protecting critical infrastructure, promoting private-public cooperation, and strengthening cybersecurity resiliency and preparedness up and down the supply chain for businesses, as well as the effectiveness of those efforts. Alaap Shah and Brian G. Cesaratto, from Epstein Becker & Green, P.C., evaluate the current regulatory landscape surrounding cybersecurity and how this may evolve.

The past year saw huge developments in technology using algorithms and artificial intelligence (AI). This rapid growth has brought excitement but also raised questions about whether existing legislation is sufficient, including to regulate potential discrimination, bias, and unintended consequences.

One proposed federal law, the American Data Privacy and Protection Act (ADPPA), has gained bipartisan support since its proposal in 2022. If passed, the ADPPA would create a national framework that would likely pre-empt many, although not all, state privacy laws and impose new privacy requirements, including obligations for companies to limit discrimination. Liana Chen, Partner at Kronenberger Rosenfeld, LLP, looks at what Section 207 of the ADPPA means for businesses in terms of civil rights protections and algorithm impact assessments.

In this Insight article, Sheri Porath Rockwell and Ernesto R. Claeyssen, from Sidley Austin LLP, discuss data subject rights and privacy policy requirements under the patchwork of 13 comprehensive US state data privacy laws that have been passed as of the date of this article, while part three of the operational Insight series on what companies need to do in order to comply with US privacy laws looks into the privacy compliance requirements triggered by the sale of personal information. Sheri and Ernesto confirm that data subject rights and privacy policy requirements under these laws are similar in many respects, but also highlight important differences between the laws.

Alex Sharpe, Principal at Sharpe Management Consulting LLC, delves into the core principles of the US National Cybersecurity Strategy, exploring the Three Rs: Responsibility, Regulation, and Resilience. This Insight article highlights the strategy's significance for decision-makers and corporate practitioners, emphasizing its impact on market forces, regulatory changes, and investments in cybersecurity and resilience.

In this Insight Article, Jennie Cunningham, from Kilpatrick Townsend & Stockton, explores the evolving landscape of health data privacy regulations in the US, with a focus on Washington's My Health My Data Act (the Act) and Nevada's Senate Bill No. 370 for An Act relating to data privacy (SB 370), both of which expand data protection beyond federal law standards.