
November 2, 2023

By: Alexandra Wilson Pantos and Shelley M. Jackson

On October 30, 2023, President Biden issued his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, along with a companion Fact Sheet issued by the White House. The Executive Order establishes the federal government’s priorities with respect to Artificial Intelligence (AI) and identifies several areas that will be subject to continuing government regulation in this space. The Executive Order directs various federal agencies to initiate studies, issue opinions and guidance, enact regulations, and more relating to AI.

The stated aims of the Executive Order are to (1) ensure the safety and security of AI systems; (2) promote innovation and competition in the field; (3) support American workers who run the risk of being displaced by AI systems; (4) advance equity and civil rights; (5) protect consumers, patients, passengers, and students in the American marketplace; (6) protect individual privacy; (7) encourage federal use of AI systems; and (8) strengthen American leadership abroad. To implement each of these policies, the Executive Order establishes the White House Artificial Intelligence Council, or White House AI Council, which sits within the Executive Office of the President. The White House AI Council is responsible for coordinating all inter-agency activities related to the policy aims set forth above and discussed in greater detail below.

To ensure the safety and security of AI, the Executive Order outlines a plan to (1) develop guidelines, standards, and best practices for AI safety and security; (2) ensure safe and reliable AI systems; (3) manage the use of AI in critical infrastructure and cybersecurity; (4) reduce risks associated with AI and chemical, biological, radiological, or nuclear weapons; (5) reduce risks of proliferating deep-fake content such as generated or modified images, videos, audio, or text; (6) solicit feedback on the risks and benefits of AI models from the private sector; (7) improve public data access while managing the security risks of AI; and (8) develop a coordinated approach to managing AI’s national security risks across the executive branch. The Executive Order directs the National Institute of Standards and Technology (NIST) to work with the Secretaries of Energy and Homeland Security to establish industry guidelines and best practices for AI systems, including benchmarks for auditing AI systems. These agencies are also tasked with creating procedures for testing AI models to ensure these systems are safe, secure, and trustworthy. The Executive Order regulates all types of AI systems, with a particular focus on cybersecurity and biosecurity technology.

The Executive Order seeks to promote innovation and competition by (1) attracting individuals with skills and training in AI systems to the United States and streamlining the visa process, (2) strengthening partnerships between the public and private sectors through training programs and clarifying patent issues to foster development around solutions for key issues such as veterans’ services and climate change, and (3) fostering competition through administrative action and support for small businesses using or selling AI systems. These provisions of the Executive Order indicate that the United States aims to become a leader in this area of technological innovation by supporting individuals, organizations, and companies who work with AI.

In relation to the American workforce, the Executive Order empowers different government agencies to study the effects of AI on workers and recommend ways to address workforce disruptions. The Executive Order also directs the Secretaries of Labor, Commerce, and Education to assess how current systems can assist displaced workers and to identify training and education opportunities that can direct workers toward AI-related careers.

According to the Executive Order, AI can be used to protect individuals’ civil rights in the criminal justice system through its application in sentencing, prison-management tools, forensic analysis, police surveillance, and more. The Executive Order also provides for increased protection of civil rights related to government benefits and programs by designing AI systems to avoid unlawful discrimination and by using AI to impartially administer public benefits and programs. Last, the Executive Order recognizes the potential to strengthen civil rights in the broader economy by directing the Secretary of Labor to publish guidance within the next year on how AI models need to be updated to ensure they do not perpetuate discrimination in hiring decisions.

The Executive Order contains a loose charge to independent regulatory agencies to protect American consumers from fraud, discrimination, threats to privacy, and other risks associated with AI systems used in the economy. Conversely, the Executive Order provides more direct orders to effectuate the safe use of AI in healthcare, including directives to the Secretary of Health and Human Services to establish robust guidelines for how AI should be developed and used in that sector. The Executive Order also recognizes that the transportation and education industries will likely be affected by AI but does not provide a clear mandate for how government agencies should regulate the use of AI in those markets.

One of the most comprehensive sections of the Executive Order relates to how the federal government has been directed to implement AI. The Order establishes a framework for the government to adopt AI systems under the direction of the Office of Management and Budget. Specifically, the Order contemplates supporting this adoption in part by hiring AI experts and data scientists.

At its core, the Executive Order appears directed toward its last stated goal: strengthening American leadership abroad. The Executive Order directs the Secretary of State and various other government stakeholders to collaborate with international allies and help train them on AI and the United States’ policies surrounding it. As part of this work, the agencies are directed to lead the development of an international framework for addressing the risks of AI while realizing its benefits.

Implementing the above policies will require extensive collaboration between federal agencies and key stakeholders. The Order’s sheer magnitude, and the detail with which it establishes the roles and responsibilities of various government agencies, demonstrate that the Biden Administration is betting on the future of Artificial Intelligence systems.

Several key takeaways:

  • The federal government is promoting the expansion of AI systems by electing to implement more systems throughout the government and by supporting the AI industry’s further development.
  • The government acknowledges the risks that AI systems present to privacy, safety, and the workforce, including the displacement of workers.
  • Aside from the technology sector, the government anticipates the biggest industry disruptions stemming from AI to take place in the healthcare, education, and transportation industries.
  • Potential action items for organizations:
    • Establish (if one does not exist) a clear internal framework for assessments, inquiries, and compliance/risk management decisions relating to use/expansion of AI within the organization.
    • Monitor developments over time, particularly those that are industry specific (e.g., look for studies, guidance, and proposed rulemaking by relevant agencies).
    • Remain vigilant for opportunities to leverage anticipated government programs to support AI use/expansion.
    • Engage outside expertise as appropriate (e.g., vendors, outside counsel) for more information on AI-related opportunities and risks.

Krieg DeVault LLP attorneys will continue to monitor developments related to this Executive Order. If you would like to discuss the opportunities and risks presented to your organization by AI, please contact Alexandra E. Wilson, Shelley M. Jackson, or any member of Krieg DeVault’s AI Task Force.

Disclaimer: The contents of this article should not be construed as legal advice or a legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult with counsel concerning your situation and specific legal questions you may have.