USA: Chamber of Commerce issues letter to Biden Administration on EU AI Act concerns
On September 13, 2023, the U.S. Chamber of Commerce published a letter addressed to the Biden Administration outlining concerns regarding the EU AI Act. The letter clarifies that concerns are restricted to the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the AI Act), and not the surrounding legislation as a whole.
In particular, the letter details concerns regarding the broad definition of artificial intelligence (AI) systems under the AI Act, recommending the adoption of a narrower definition, such as the Organisation for Economic Co-operation and Development (OECD) definition of AI.
On prohibited AI systems, the letter supports the position adopted by the European Commission and the Council to ban the use of 'real-time' remote biometric identification for law enforcement purposes in all cases, except for a specific set of purposes such as the targeted search for crime victims or the prevention of imminent threats to life and safety, thereby opposing the European Parliament's position to ban all uses of 'real-time' remote biometric identification in publicly accessible spaces. Similarly, the letter states that the classification as high risk of AI systems that use biometric or biometric-based data to make inferences about personal characteristics or emotions is overly broad and would capture a number of low-risk AI systems.
Regarding AI systems classified as high risk, the letter argues that the AI Act would place an unnecessary burden on US businesses and could discourage the development and use of AI, in particular through the potential classification as high risk of machine learning systems that do not make decisions that could significantly impact people's lives.
In addition, the letter argues that requirements for general purpose AI (GPAI) systems should be tailored and technically feasible, grounded in existing standards and interoperable principles. Specifically, the letter notes that GPAI developers may not be able to anticipate all potential risks where the GPAI system does not have a predefined 'purpose.' Likewise, the letter recommends the removal of the provisions on targeted requirements for foundation models from the AI Act text, arguing that it is unclear to which operators the provisions on publishing a summary of copyright-protected training data apply.
Finally, the letter argues that the terminology of 'provider' and 'user' under the AI Act, as proposed by the European Council, does not sufficiently distinguish between the roles in the AI value chain or clarify which parties are responsible. The letter recommends that AI deployers be legally responsible, and that they require AI developers to make contractual commitments acknowledging that the AI system used is high risk.