USA: Dissecting the Blueprint for an AI Bill of Rights
In October 2022, the White House Office of Science and Technology Policy (OSTP) published its Blueprint for an AI Bill of Rights. In this Insight article, Karen Silverman, Chloe Autio, and Brinson Elliott, from the Cantellus Group, set out the primary areas of the Blueprint and how it has been received, built upon, and operationalized since its release, including how it fits into the U.S. Administration's broader push for responsible artificial intelligence (AI).
Structure, focus, and scope
The Blueprint is focused on automated systems that 'have the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services,' and is organized into three parts:
- The Bill of Rights outlines expectations for AI use for citizens and residents, setting out five principles:
- safe and effective systems: you should be protected from unsafe or ineffective systems;
- algorithmic discrimination protections: you should not face discrimination by algorithms, and systems should be used and designed in an equitable way;
- data privacy: you should be protected from abusive data practices via built-in protections, and you should have agency over how your data is used;
- notice and explanation: you should know that an automated system is being used and understand how and why it contributes to outcomes that impact you; and
- alternative options: you should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
- Corresponding technical guidance, informed by AI governance maturity models and frameworks, is offered to support implementation for each principle.
- Examples of, and potential domains for, AI harms or misuse, such as lending, human resources, and surveillance (which would find a counterpart in the 'high-risk' use case framework of the EU AI Act), are outlined by way of illustration.
Context
While the OSTP plays a key role in technology policy development within the Administration, the OSTP is primarily a convening organization without the authority to independently issue binding rules and regulations. Accordingly, the Blueprint is expressly non-binding and neither constitutes U.S. Government policy, nor modifies or replaces existing statutes, regulations, or policies for the public or federal agencies. That said, the OSTP also announced that the Blueprint would be followed by several related agency actions, many of which were (and are) already underway, as discussed below. To this point, beyond framing expectations for AI use, the utility of the Blueprint to the industry remains somewhat unclear, including the extent to which it offers guidance 'where there are gaps in existing law' or serves as a tool for the industry to use when developing AI capabilities.
Reactions to the Bill of Rights
The Blueprint received mixed reactions in the press, industry, and academia, including the following:
- applause that the U.S. Government advanced a rights-based framework grounded in American civil rights and values, noting that the Blueprint is a valuable step towards greater consent and equity, and towards reversing course where necessary to prevent AI harms;
- frustration among strong advocates for government controls that the Blueprint is 'toothless' when it comes to combating, or enforcing against, malicious or discriminatory uses of AI, concluding that the guidance will be largely ineffectual;
- belief that the Blueprint is unnecessary, precisely because existing laws and civil rights apply equally to digital and non-digital risks, and thus already protect residents against many AI risks, harms, and rights violations;
- a sense that the US is missing opportunities - even with the National Institute of Standards and Technology's (NIST) AI Risk Management Framework - to address underlying quality and performance standards for AI, in addition to addressing harms from discrimination and bias;
- reignited debates around the potential for AI regulation to stifle beneficial innovation and competition;
- concerns that it is too open-ended and sweeping in places, leading to criticism around its scope and application and a perceived lack of transparency regarding how it was developed:
- in this regard, the OSTP stated that the Blueprint was informed by a Request for Information (RFI) and two listening sessions on biometrics (merely one component of AI), as well as several panels coordinated by the OSTP; the OSTP also conducted consultations with several companies and groups, but cited the RFI and listening sessions as the stated basis for the Blueprint; and
- concerns around overly broad or unspecific definitions of certain terms (e.g., automated systems, surveillance technology, 'harms,' and 'personal finance'), as well as around its aspirations for transparency regarding notice, documentation, and deeper, more broadly accessible information about system inputs and design.
Impact
Many of the 'agency activities' referenced in the official Blueprint release were already underway (e.g., at the U.S. Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau (CFPB), and the U.S. Department of Health and Human Services (HHS)), with many of these activities confirming that existing laws apply to the use of AI. Other initiatives were started separately from the release of the Blueprint (e.g., at the Federal Trade Commission (FTC), the U.S. Department of Labor (DOL), and NIST). The Blueprint arguably represents both the first step the OSTP could take towards its stated goal of issuing an 'AI Bill of Rights' and the most it could achieve in 2022, in the face of strong cautions against prescriptive regulations or limitations on innovation.
Practically, the Blueprint remains a statement of analysis - and perhaps intent - in a widening Administration effort to regulate key aspects of AI development and use. The introduction of generative AI, along with advancing global regulation, has accelerated these efforts since the Blueprint's publication, with the White House and the OSTP now turning their focus to developing a National Strategy for AI, which will be informed by prior workstreams, including the Blueprint.
Recent developments
U.S. Government activity since the Blueprint
The US is charting a sector-specific approach to AI regulation to balance controls and innovation, allowing each agency to take a slightly different approach within its respective jurisdiction and remit.
- In May 2023, the Administration announced new actions to promote responsible AI innovation that protects Americans' rights and safety, including two RFIs (one on automated worker surveillance and one to inform a National AI Strategy), the Department of Education's report titled 'Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations,' and an update to the 'National Artificial Intelligence Research and Development Strategic Plan' (the National AI R&D Strategic Plan).
- U.S. Vice President Kamala Harris also met with CEOs from different companies, remarking that "the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of their products."
- The White House has delivered on its promise to continue convening diverse groups on critical AI issues, meeting with leading AI experts, researchers, and advocates in San Francisco, as well as with civil rights leaders, consumer protection groups, and civil society. The White House is also supporting events that interrogate AI systems and broaden access to improving them, such as a red-teaming event at DEF CON 2023.
In addition to initiatives focused on AI governance and oversight, the Administration is also exploring how to boost innovation, research, and development in AI technologies across the board:
- The U.S. President's Council of Advisors on Science and Technology (PCAST) recently created a Generative AI Working Group and has an open RFI on AI's impact on misinformation.
- The White House released an updated National AI R&D Strategic Plan outlining key priorities and goals for federal investments in AI.
- Likewise, the National AI Research Resource (NAIRR) Task Force finalized its recommendations for the NAIRR, which is now awaiting congressional authorization.
- In May 2023, the White House also announced the establishment of seven new national AI research institutes across the US with the help of $140 million from the National Science Foundation.
- Federal agencies, including the Department of Justice (DOJ), the FTC, the CFPB, and the EEOC, have released both independent and joint guidance on AI enforcement. Consistent with the Blueprint, officials within these agencies have affirmed their commitment to enforce existing laws and regulations and uphold principles of fairness, equality, and justice as automated systems increasingly impact civil rights, fair competition, consumer protection, and equal opportunity. However, most have yet to outline further agency-specific actions.
- Finally, the Administration is pushing forward on several global initiatives to support innovation and responsible use of AI, including through work at the G7, the Organisation for Economic Co-operation and Development (OECD), the United Nations, and bilateral negotiations, such as the Trade and Technology Council (TTC).
Looking ahead
In 2023, the Biden Administration is expected to direct federal agencies (potentially through an executive order) to carry out new regulatory oversight and enforcement measures for AI consistent with their agency principles and the Blueprint. Later this year, the White House will unveil its National AI Strategy, aimed at coordinating and aligning AI work and policymaking across the U.S. Government. This strategy, as well as legislative proposals, will be informed by the National Telecommunications and Information Administration (NTIA) report on recommendations for AI accountability, several OSTP RFIs, and frameworks such as the NIST AI Risk Management Framework, which the industry has begun to incorporate into workflows.
Conclusion
The principles outlined in the Blueprint for an AI Bill of Rights are likely to continue to inform (rather than constrain) US federal AI policymaking, and will serve as an important element of AI policy strategies to come. We can expect a busy season of AI policy developments ahead.
Karen Silverman Founder
[email protected]
Chloe Autio Director
[email protected]
Brinson Elliott Manager
[email protected]
Cantellus Group, San Francisco