
USA: Executive Order on Safe, Secure, and Trustworthy AI - key takeaways

In this Insight article, Camila Tobón, Partner at Shook, Hardy & Bacon, explores the far-reaching impact of President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the Executive Order), delineating eight principles for the responsible development of AI.


On October 30, 2023, President Biden issued the Executive Order. Its stated purpose is to establish a coordinated, Federal Government-wide approach to governing the development and use of artificial intelligence (AI) safely and responsibly. To do so, the Executive Order includes a multitude of directives to various federal agencies, instructing them to take action on the use of AI. This includes assessing and addressing potential risks in areas such as national security, healthcare, transportation, consumer protection, education, and privacy. Timetables for action range from 30 to 365 days, with most directives requiring work to begin immediately to meet the assigned targets.

Key takeaways

Although the Executive Order is directed at federal agencies, the principles outlined and the specific tasks defined could result in guidance and regulations affecting the private sector. Of greatest interest to companies and AI market participants will be forthcoming developments in the areas of standards, model testing, anti-discrimination, workers' rights, and consumer protection. For example, the specific reference to the Artificial Intelligence Risk Management Framework (AI RMF 1.0) (AI RMF) developed by the National Institute of Standards and Technology (NIST) indicates that it could become the de facto standard for AI governance. Repeated references to anti-discrimination, worker, and consumer protection make clear that the regulatory focus will be on ensuring that the risks of using AI are appropriately assessed and mitigated. The multi-stakeholder approaches outlined in the Executive Order demonstrate that AI is an issue affecting all industries, and that each federal department will be developing expertise in, and managing oversight of, the technology.

Structure

The Executive Order is divided into 13 sections, eight of which contain the directives and are divided by topic. These directives flow from the eight principles set out at the beginning of the Executive Order in the section on Policy and Principles. There is also a 'Purpose' section and a 'Definitions' section, providing additional context for the various topical directives, as well as an 'Implementation' section that creates a White House AI Council to further the objectives of the Order and a 'General Provisions' section with certain administrative provisions.

Guiding principles

The Executive Order lays out eight principles intended to be the foundation for the requirements directed at the various federal agencies in their adoption of AI. They are:

  1. AI must be safe and secure: This includes having robust testing and evaluation systems, including in the post-deployment setting, as well as effective labeling and content provenance mechanisms.
  2. Promoting responsible innovation, competition, and collaboration will allow the US to lead in AI and unlock the technology's potential to solve some of society's most difficult challenges: This includes support programs to give Americans the skills they need for the age of AI as well as attracting AI talent to the US. It also includes having a fair, open, and competitive ecosystem and marketplace for AI.
  3. The responsible development and use of AI require a commitment to supporting American workers: Job training and education must be adapted to ensure that workers benefit from opportunities that AI creates.
  4. AI policies must be consistent with the administration's dedication to advancing equity and civil rights: AI must comply with all federal laws protecting disadvantaged communities from discrimination and there must be robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation to protect against unlawful discrimination and abuse.
  5. The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected: Existing consumer protection laws should be enforced and additional appropriate safeguards should be enacted to prevent fraud, unintended bias, discrimination, privacy infringements, and other harms from AI.
  6. Americans' privacy and civil liberties must be protected as AI continues advancing: The Federal Government will ensure that the collection, use, and retention of data is lawful and secure, and mitigates privacy and confidentiality risks.
  7. It is important to manage the risks from the Federal Government's own use of AI and increase its internal capacity to regulate, govern, and support the responsible use of AI to deliver better results for Americans: The current administration will take steps to attract, retain, and develop public service-oriented AI professionals and ease professionals' path into the Federal Government to help harness and govern AI.
  8. The Federal Government should lead the way to global societal, economic, and technological progress, as the US has in previous eras of disruptive innovation and change: This is not measured solely by technological advancements but also by pioneering systems and safeguards needed to deploy technology responsibly, and building and promoting those safeguards with the rest of the world.

Federal agency obligations

Sections 4 through 11 of the Executive Order list the actions that federal agencies must undertake to identify potential risks and to take a reasoned approach to the consideration, development, and adoption of AI. Each of those sections is summarized below, in the order they appear.

Ensuring the Safety and Security of AI Technology

This section is subdivided into eight parts, each with a different but related focus. The tasks outlined are directed primarily to the Secretaries of Commerce and Homeland Security.

The Secretary of Commerce is required to, among other things:

  • acting through the Director of NIST, develop a companion resource to the AI RMF (NIST AI 100-1) for generative AI;
  • launch an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities;
  • establish guidelines for red teaming tests, especially for dual-use foundation models;
  • issue specified reporting requirements for companies developing or intending to develop dual-use foundation models, companies that acquire, develop, or possess potential large-scale computing clusters, and companies that provide infrastructure-as-a-service products;
  • develop guidance focusing on identifying and labeling synthetic content produced by AI systems and on establishing the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf; and
  • submit a report to the President on risks associated with actors fine-tuning dual-use foundation models, benefits to AI innovation and research, and potential voluntary, regulatory, and international mechanisms to manage the risks of using AI.

The Secretary of Homeland Security is tasked with:

  • assessing and addressing risks of AI on critical infrastructure, working with the Secretary of the Treasury and other agency heads;
  • aiding in the discovery and remediation of vulnerabilities in critical U.S. Government software, systems, and networks, working with the Secretary of Defense; and
  • submitting a report to the President assessing types of AI models that may present chemical, biological, radiological, or nuclear (CBRN) threats and making recommendations for regulating or overseeing the training, deployment, publication, or use of these models.

The Secretary of Energy must develop AI model evaluation tools and AI testbeds to, at a minimum, evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards.

The Chief Data Officer Council is required to develop initial guidelines for performing security reviews before Federal data is released for public access, and agencies are directed to conduct such reviews.

Lastly, the Assistant to the President for National Security Affairs and the Assistant to the President and the Deputy Chief of Staff for Policy are tasked with overseeing an interagency process to submit a National Security Memorandum to the President addressing the governance of AI used as a component of a national security system or for military and intelligence purposes.

Promoting Innovation and Competition

This section has three parts. The first subsection is geared toward attracting foreign talent by streamlining the process for obtaining or renewing visas.

The second subsection seeks to promote innovation. Among other things, it directs the National Science Foundation (NSF) to launch a pilot program implementing the National AI Research Resource, fund and launch at least one NSF Regional Innovation Engine, and establish at least four new National AI Research Institutes. It also tasks the Under Secretary of Commerce for Intellectual Property and the Director of the U.S. Patent and Trademark Office with publishing guidance for patent examiners and applicants addressing inventorship and the use of AI (including generative AI) in the inventive process. It further includes directives relating to training scientists in the fields of high-performance and data-intensive computing, advancing responsible AI innovation by healthcare technology developers, improving the quality of veterans' healthcare, strengthening the US's resilience against climate change impacts and building an equitable clean energy economy for the future, and understanding AI's implications for scientific research.

The third subsection seeks to promote competition by encouraging the Federal Trade Commission to consider rulemaking to both ensure competition and protect consumers and workers from harm; directing the Secretary of Commerce to undertake initiatives to promote competition and innovation in the semiconductor industry; and requiring the Administrator of the Small Business Administration to prioritize the allocation of funding for small businesses engaging in AI innovation.

Supporting Workers

This section has three key aims: to advance the Government's understanding of AI's implications for workers; to ensure that AI deployed in the workplace advances employees' well-being; and to foster a diverse AI-ready workforce. Most of the directives are targeted at the Secretary of Labor, who must: submit a report analyzing agencies' ability to support workers displaced by the adoption of AI and other technological advancements; develop and publish principles and best practices that employers can use to mitigate AI's potential harms to employees' well-being and maximize its potential benefits; and issue guidance making clear that employers deploying AI to monitor or augment employees' work must continue to comply with protections ensuring that workers are compensated for their hours worked.

Advancing Equity and Civil Rights

This section has three subsections focusing on civil rights in the criminal justice system, government benefits and programs, and the broader economy.

The requirements relating to the criminal justice system are directed at the Attorney General to address unlawful discrimination and other harms that may be exacerbated by AI, to promote the equitable treatment of individuals, to adhere to the Federal Government's fundamental obligation to ensure fair and impartial justice for all, and to advance the presence of relevant technical experts and expertise among law enforcement professionals.

As to government benefits and programs, agencies are tasked with using their respective civil rights and civil liberties offices and authorities to prevent and address unlawful discrimination and other harms that result from the use of AI in Federal Government programs and benefits administration. The Secretary of Health and Human Services and the Secretary of Agriculture must publish a plan addressing the use of automated or algorithmic systems in the implementation of public benefits and services administered by their respective departments.

In relation to the broader economy, there are various directives for different U.S. Government departments. The Secretary of Labor is required to publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems. The Secretary of Housing and Urban Development must issue additional guidance addressing the use of tenant screening systems in ways that violate the Fair Housing Act. Other entities like the Federal Housing Finance Agency, the Consumer Financial Protection Bureau, and the Architectural and Transportation Barriers Compliance Board are encouraged to consider measures to address unlawful discrimination in the use of AI in the areas under their purview.

Protecting Consumers, Patients, Passengers, and Students

This section covers many areas. Generally, independent regulatory agencies are encouraged to use their full range of authorities (including rulemaking) to protect consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI.

Specifically related to healthcare, the Secretary of Health and Human Services is required to, among other things, establish an AI Task Force to develop a strategic plan that includes policies and frameworks on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector. This includes healthcare delivery and financing; long-term safety and real-world performance monitoring of AI-enabled technologies; incorporation of equity principles in AI-enabled technologies; incorporation of safety, privacy, and security standards into the software development lifecycle; development, maintenance, and availability of documentation to help users determine appropriate and safe uses of AI in local settings; and identification of uses of AI to promote workplace efficiency and satisfaction (e.g., reducing administrative burdens).

In the transportation sector, the Secretary of Transportation is directed to undertake certain activities to promote the safe and responsible development and use of AI, including determining whether guidance is needed, providing appropriate advice, and exploring the opportunities and challenges of using AI in transportation.

For education leaders, the Secretary of Education must develop resources, policies, and guidance regarding AI, including an 'AI Toolkit.' Topics to be covered include appropriate human review of AI decisions, designing AI systems to enhance trust and safety, and aligning AI use with privacy-related laws and regulations in the educational context.

Lastly, the Federal Communications Commission is encouraged to consider actions related to how AI will affect communications networks and consumers, including, among other things, efforts to combat unwanted robocalls and robotexts and to deploy AI technologies that better serve consumers by blocking such communications.

Protecting Privacy

This section has two aims. First, to identify commercially available information (CAI) in agencies' data inventories and evaluate agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI that contains personally identifiable information. Second, to create guidelines and advance research related to privacy-enhancing technologies.

Advancing Federal Government Use of AI

This section seeks the development of guidance for the Federal Government to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI. The Executive Order provides that this guidance must require agencies to appoint a Chief AI Officer. The section also seeks to increase AI talent within the Government by identifying priority areas for increased talent, planning for recruitment, hiring, and retention, and providing federal workforce training.

Strengthening American Leadership Abroad

Here, the Secretary of State is tasked with expanding engagements with allies and partners, including publishing an AI in the Global Development Playbook and developing a Global AI Research Agenda. The Secretary of Commerce is tasked with developing a plan for global engagement on AI-related consensus standards, cooperation and coordination, and information sharing.

Next Steps

By the spring and summer of 2024, we should expect to see guidance from the various federal agencies listed above relating to the use of AI and how to assess and mitigate its risks. It is unclear whether Congress will have taken up any bills relating to AI by then, so it may be that the guidance provided by these agencies results in rulemaking governing companies' and private entities' use of AI. Any organization considering implementing AI should carefully consider the eight guiding principles and ensure that its adoption of the technology, whatever the use case, is consistent with those principles.

Camila Tobón Partner
[email protected]
Shook, Hardy & Bacon, Denver
