USA Federal

Summary

Law: There is no general federal privacy law yet; however, House Resolution (HR) 8152, the American Data Privacy and Protection Act (ADPPA), has advanced out of committee and awaits consideration by the full U.S. House of Representatives. In addition, multiple sectoral laws apply at the federal level.

Regulator: The Federal Trade Commission (FTC) takes enforcement action against organisations for violations of Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. Moreover, the ADPPA would give the FTC authority to issue regulations implementing a newly introduced requirement that companies adopt security practices to protect and secure personal data against unauthorised access, and to enforce those requirements alongside state attorneys general (AGs) and the California Privacy Protection Agency (CPPA).

Summary: The ADPPA establishes requirements for how companies handle personal data. Specifically, it requires covered entities and service providers to limit the collection, processing, and transfer of personal data to what is reasonably necessary to provide a requested product or service. Additionally, the ADPPA sets out legal protections for consumers' data, including the right to access, correct, and delete their personal data, and requires companies to provide individuals with a means to opt out of targeted advertising. Lastly, the ADPPA would generally pre-empt state laws that are covered by its provisions, except for certain categories of state laws and specified laws in Illinois and California.

Whilst the ADPPA is still going through the legislative process, several sectoral federal laws already apply, including the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which regulates the privacy and security of health information; the Gramm-Leach-Bliley Act of 1999 (GLBA), which requires financial institutions to explain their information-sharing practices to their customers and to safeguard sensitive data; and the Children's Online Privacy Protection Act of 1998 (COPPA), which imposes requirements on operators of websites or online services directed to children under 13 years old. The absence of a comprehensive federal privacy law or a dedicated supervisory authority has made the FTC the de facto privacy regulator, resulting in a body of case law and settlements over violations of consumers' privacy rights or failures to maintain the security of sensitive consumer information. The USA also participates in the Swiss-U.S. Privacy Shield Framework and the Asia-Pacific Economic Cooperation (APEC) Cross-Border Privacy Rules system, both of which facilitate the flow of personal data to other jurisdictions.

Furthermore, on July 10, 2023, the European Commission voted to adopt its adequacy decision for the EU-US Data Privacy Framework (DPF), concluding that the US provides a level of protection essentially equivalent to that of the EU for personal data transferred under the EU-US DPF from a controller or a processor in the EU to certified organizations in the US. The adequacy decision has the effect that personal data transfers from controllers and processors in the EU to certified organizations in the US may take place without the need to obtain any further authorization.

Insights

On April 7, 2024, U.S. Representative Cathy McMorris Rodgers and U.S. Senator Maria Cantwell introduced the American Privacy Rights Act of 2024 (the Bill), aimed at establishing robust national data privacy standards with a focus on consumer control over personal information. In this Insight Q&A article, Billee Elliott McAuliffe and Jacquelyn H. Sicilia, from Lewis Rice LLC, delve into key provisions, limitations, and implications of the proposed legislation. They address frequently asked questions, offering valuable insights into how the Bill could reshape data privacy regulation in the US.

On April 7, 2024, U.S. Representative Cathy McMorris Rodgers and U.S. Senator Maria Cantwell unveiled the American Privacy Rights Act of 2024 (the Bill), which would establish national consumer data privacy rights and set standards for data security. The Bill has bipartisan and bicameral support and is the first comprehensive US federal privacy bill to be unveiled since the American Data Privacy and Protection Act (ADPPA). In this article, OneTrust DataGuidance Research breaks down the main provisions of the Bill, with expert comments provided by Starr Drum, Shareholder at Polsinelli PC, and Michelle Schaap, Partner at CSG Law.

Since the public debut of generative artificial intelligence (AI) about 18 months ago, proponents and detractors have saturated the media with breathless commentary about the promise and peril of this new technology in the legal profession. On the one hand, a reported 44% of all legal tasks could be replaced by generative AI; on the other hand, generative AI 'hallucinates', making up fake but convincing-sounding case citations that have led to lawyers being sanctioned. So, which is it?

And importantly, how should lawyers navigate this new landscape? Shun AI and risk falling behind the competition? Or embrace it and risk getting too far out over their skis?

This choice raises both practical and ethical questions. While the practicalities are still a work in progress - as new use cases and applications are hitting the market every day - the ethical questions are beginning to take shape. Lawyers should be aware of how to use generative AI tools responsibly and ethically, maintaining compliance with professional rules of conduct as required by their respective state bars. Several state bar associations have now issued guidance. Dr. Christian Mammen, Vincent Look, and Dr. Seiko Okada, of Womble Bond Dickinson, discuss this guidance and how the practice of law may evolve with the increasing use of generative AI.  

On February 28, 2024, the White House published Executive Order 14117 on Preventing Access to Americans' Bulk Sensitive Personal Data and Government-Related Data by Countries of Concern (the EO). The EO calls for the promulgation of regulations to prevent the transfer of bulk sensitive personal data, including genomic data, biometric data, personal health data, geolocation data, financial data, etc., and government-related data, to countries of concern. OneTrust DataGuidance Research gives an overview of the EO and its impact on companies, with expert comments from Mark Francis, Partner at Holland & Knight.

In this Insight article, Zach Lerner and Hannah Schaller, from ZwillGen PLLC, analyze the privacy challenges confronting artificial intelligence (AI) developers in US education, navigating compliance nuances with laws and state privacy regulations to ensure responsible AI use.

Over the years, as part of its role as the primary federal consumer protection regulator, the Federal Trade Commission (FTC) has filled a void in the oversight and regulation of new technologies. Most recently, the rapid adoption of artificial intelligence (AI), machine learning, and other algorithmic decision-making systems (AI tools) - supercharged by the public release of powerful generative AI models - has raised the FTC's concern about possible harm to consumers. With no federal law that specifically regulates AI, the FTC has sought to use its existing consumer protection authority to constrain harmful AI-related business practices. 

Primarily, the FTC has authority under Section 5 of the FTC Act to prohibit businesses from engaging in deceptive or unfair business practices, which it has long used to regulate companies' data practices. With increasing and novel uses of AI and other algorithmic data processing tools, the FTC has issued a number of guidance documents and engaged in enforcement activity demonstrating what it believes to be deceptive or unfair when businesses use these tools. Businesses that do not follow this guidance face investigation and potential enforcement, with the FTC devising creative penalties designed to dissuade improper behavior, including the disgorgement of algorithms, data, and other inputs to and outputs of unlawful AI systems.

In this Insight article, Bret Cohen, from Hogan Lovells, covers some of the AI business practices that the FTC considers unfair or deceptive, describes penalties available to the agency when bringing a Section 5 claim for use of AI tools, and explains the FTC's views on best practices for use of these tools. 

In this Insight article, Michael Rubin and Robert Brown, from Latham & Watkins LLP, explore the contours of the U.S. Senate's recently proposed bipartisan legislation, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA).

In this Insight article, Camila Tobón, Partner at Shook, Hardy & Bacon, explores the far-reaching impact of President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the Executive Order), delineating eight principles for the responsible development of AI.

In early 2021, the U.S. Supreme Court (the Supreme Court) issued a ruling that significantly narrowed the definition of an automatic telephone dialing system (ATDS) under the Telephone Consumer Protection Act (TCPA). Although the ruling resulted in fewer complaints alleging violations of the TCPA's auto-dialer provision, the landmark decision resulted in another, perhaps unforeseen consequence: it spurred a number of states to enact or amend their own 'mini-TCPAs.' These laws often pose additional litigation or enforcement risks for companies that call or text to communicate with consumers. Francis Nolan and Amy Albanese, from Eversheds Sutherland, explore these 'mini-TCPAs' and what impact they have on telemarketing.

On November 15, 2023, a bipartisan group of senators from the Senate Committee on Commerce, Science, and Transportation introduced the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 (AIRIA). This group included John Thune, Amy Klobuchar, Roger Wicker, John Hickenlooper, Shelley Moore Capito, and Ben Ray Luján.  

This legislation aims to establish a framework to enhance innovation while ensuring greater transparency, accountability, and security in the development and operation of artificial intelligence (AI) applications. Klobuchar noted the importance of AIRIA, emphasizing the "potential for great benefits, but also serious risks [of AI], and [the necessity for] our laws . . . to keep up." She explained, "[t]his bipartisan legislation is one important step of many necessary towards addressing potential harms. It will put in place common sense safeguards for the highest-risk applications of AI – like in our critical infrastructure – and improve transparency for policy makers and consumers." 

AIRIA follows the recent Executive Order on Safe, Secure, and Trustworthy AI (the Executive Order) and the Blueprint for an AI Bill of Rights released by the White House. The legislation assigns responsibilities to AI-related agencies, specifically requiring the National Institute of Standards and Technology (NIST) to carry out research to facilitate and recommend standards for the authenticity and provenance of online content generated or modified by AI systems. To achieve this objective, AIRIA is structurally divided into two sections: Title I – AI Research and Innovation and Title II – AI Accountability. Bennett B. Borden, Danny Tobey, Tony Samp, Coran Darling, and Ted Loud, from DLA Piper, discuss AIRIA and what the future could hold for AI regulation in the US.

Currently, no federal standard exists for regulating the collection, use, or disclosure of geofence technology data in the US. However, in 2023, five states – Utah, Washington, Nevada, New York, and Connecticut – enacted geofence technology laws. Meghan O'Connor and Ashleigh V. Giovannini, from Quarles & Brady LLP, discuss what geofence technology is and how it is used, as well as existing legislation in the US.

In today's digital economy, nearly every organization, whatever the industry, is reliant on digital infrastructure and internet connectivity. As a result, organizations are constantly vulnerable to cyberattacks such as phishing, fraud, and ransomware, and struggle to achieve adequate levels of cybersecurity preparedness and resiliency in the face of emerging threats. At the same time, many organizations are subject to existing regulatory requirements to safeguard private, health, financial, and other protected information from cyberattacks.  

In the face of these rapidly evolving cybersecurity risks, laws and regulations may seek to anticipate how best to protect the public but often lag behind technology innovations and the evolving threat landscape (e.g., the proliferation of powerful artificial intelligence [AI] applications). Indeed, state and federal regulations mandating cybersecurity safeguards and breach reporting remain, in significant ways, a patchwork of differing and disjointed requirements. Existing laws may also lack incentives for robust compliance or for voluntary, timely threat information sharing and coordination, leaving entities more vulnerable to compromise, including in their supply chains. These trends will continue to shape how regulators approach harmonizing policy on protecting critical infrastructure, promoting public-private cooperation, and strengthening cybersecurity resiliency and preparedness up and down the supply chain, as well as the effectiveness of those efforts. Alaap Shah and Brian G. Cesaratto, from Epstein Becker & Green, P.C., evaluate the current regulatory landscape surrounding cybersecurity and how it may evolve.
