Advancing Trustworthy AI

One of the key purposes of the National AI Initiative is to ensure that the United States leads the world in the development and use of trustworthy AI systems in the public and private sectors. The United States has long been a champion and defender of the core values of freedom; guarantees of human rights; the rule of law; stability in our institutions; rights to privacy, civil rights, and civil liberties; respect for intellectual property; and opportunities for all to pursue their dreams.

To be trustworthy, AI technologies must appropriately reflect characteristics such as accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security or resilience to attacks, and must ensure that bias is mitigated. Factors such as fairness and transparency should be considered, particularly during deployment or use. In addition, the broader impacts of AI on society must be considered, such as implications for the workforce. Developing and using AI in ways that are ethical and that reduce bias, promote fairness, and protect privacy is essential for fostering a positive effect on society consistent with core U.S. values.
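
Several of these characteristics can be quantified. As a minimal illustration (not an official metric of any agency), the sketch below computes the demographic parity difference, the gap in positive-outcome rates across groups, for a set of synthetic model predictions.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between groups.

    A value near 0 means the model selects members of each group
    at similar rates; larger gaps flag potential bias to investigate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Synthetic predictions (1 = favorable outcome) and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```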

Advancing trustworthy AI requires a multifaceted approach: R&D investments that address key technical challenges; metrics, standards, and assessment tools to measure and evaluate AI trustworthiness; engagement in the development of AI technical standards; governance approaches for the use of AI in the public and private sectors; and preparation of a diverse and inclusive workforce for the jobs of the future. It also requires international collaborations and partnerships, which are described on the International Cooperation page of this website.


Research and Development for Trustworthy AI

The Federal Government has prioritized AI R&D activities that address the ethical, legal, and societal implications of AI, as well as the safety and security of AI systems. The National AI R&D Strategic Plan: 2019 Update details many of the research challenges in these areas, while the 2016-2019 Progress Report: Advancing Artificial Intelligence R&D provides an overview of the numerous Federal R&D programs addressing those challenges.

For example, NSF’s Program on Fairness in Artificial Intelligence, in collaboration with a major technology company, supports research on fairness in AI with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. NSF also collaborated with a non-profit to support early-concept grants for exploratory research on AI and Society, to better understand the social challenges arising from AI technology and enable scientific contributions to overcome them. Some of NSF’s National AI Research Institutes fund research on trustworthy AI as well, such as the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. Beyond NSF, other initiatives include a center of excellence for trustworthy AI established jointly by the Department of Defense (DoD) and a historically black research university.

An important R&D emphasis for trustworthy AI is equipping AI systems with explainability mechanisms that help human users understand the reasons for AI outputs. DARPA’s Explainable AI (XAI) program aims to create machine learning techniques that produce more explainable models while maintaining a high level of performance, so that human users can understand and appropriately trust AI systems.
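
XAI research spans many techniques. As one concrete, widely used example of a model-agnostic explainability mechanism (a general illustration, not a DARPA XAI deliverable), the sketch below uses permutation importance to reveal which input features a trained model actually relies on.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque model, then ask which inputs actually drive its accuracy.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much test accuracy drops. A large drop explains, in human terms, that
# the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```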

The complexity of many AI systems creates important safety and security challenges that must be addressed to ensure that these systems are trustworthy. AI systems carry inherent cybersecurity risks that stem from how the technology is designed. R&D investments such as DARPA’s AI Next Campaign are creating solutions for countering adversarial attacks on AI technologies, such as attacks that attempt to contaminate training data, modify algorithms, create adversarial inputs, or exploit flaws in AI system goals. This research is expected to lead to more secure, robust, and safe AI systems that are reliable and trustworthy.
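
To make one of these attack classes concrete, the sketch below implements a fast-gradient-sign-style adversarial input against a toy logistic model; the weights and perturbation budget are invented for illustration and do not reflect any specific DARPA system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic "model" with fixed weights, standing in for a trained classifier.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

# Fast-gradient-sign-style adversarial input: for logistic loss with true
# label y, the gradient of the loss w.r.t. the input x is (p - y) * w, so
# stepping along its sign maximally increases the loss per unit of
# L-infinity perturbation.
def fgsm(x, y, eps=0.25):
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])   # clean input with true label 1
print(predict(x))               # confident "1" (about 0.75)
x_adv = fgsm(x, y=1.0)
print(predict(x_adv))           # drops to about 0.52 after a small perturbation
```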


Metrics, Assessment Tools, and Technical Standards for AI

Metrics, assessment tools, and technical standards are essential for ensuring that AI technologies meet critical objectives for functionality and interoperability. The National Institute of Standards and Technology (NIST) leads many efforts to advance foundational research for measuring and assessing AI technologies, including the development of AI data standards and best practices, as well as AI evaluation methodologies and standard testing protocols. As called for by the February 2019 Executive Order on Maintaining American Leadership in AI (EO 13859), NIST released a plan in August 2019 for Federal engagement in the development of AI technical standards and related tools in support of reliable, robust, and trustworthy AI systems. Other NIST efforts toward an AI measurement science include multistakeholder convenings and papers that address challenges such as explainable AI, AI bias, and AI security.
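
As one concrete illustration of AI measurement (a common research metric, not a NIST-prescribed protocol), the sketch below computes expected calibration error, which quantifies how far a model's stated confidence departs from the outcomes actually observed.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Average gap between stated confidence and observed frequency,
    weighted by how many predictions fall in each confidence bin.
    """
    probs, labels = np.asarray(probs), np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            confidence = probs[in_bin].mean()   # what the model claims
            frequency = labels[in_bin].mean()   # what actually happens
            ece += in_bin.mean() * abs(confidence - frequency)
    return ece

# Synthetic, overconfident predictions for a binary task.
probs = np.array([0.9, 0.9, 0.8, 0.7, 0.6])
labels = np.array([1, 0, 1, 0, 1])
print(expected_calibration_error(probs, labels))  # 0.42
```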

Recognizing the importance of tools for evaluating and managing risk in AI systems, the National AI Initiative calls for NIST, in collaboration with NSF, DOE, and other public and private stakeholders, to develop a voluntary risk management framework for trustworthy AI. This framework will include common definitions and characterizations for trustworthiness, identify and provide approaches for developing and assessing the trustworthiness of AI, and identify and provide approaches for mitigating risks from AI systems, along with other relevant content.
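
The framework is still under development, so any concrete schema is necessarily speculative. As a purely hypothetical sketch of the kind of artifact a voluntary risk management framework might standardize, the record below captures a risk, the trustworthiness characteristic it affects, and planned mitigations; all field names and the likelihood-times-impact scoring convention are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class Characteristic(Enum):
    # Trustworthiness characteristics named earlier in this section.
    ACCURACY = "accuracy"
    EXPLAINABILITY = "explainability"
    PRIVACY = "privacy"
    RELIABILITY = "reliability"
    ROBUSTNESS = "robustness"
    SAFETY = "safety"
    SECURITY = "security"
    BIAS_MITIGATION = "bias mitigation"

@dataclass
class RiskEntry:
    """One row of a hypothetical AI risk register (illustrative only)."""
    system: str
    characteristic: Characteristic
    description: str
    likelihood: int          # 1 (rare) .. 5 (near certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def severity(self) -> int:
        # A common convention: severity scored as likelihood x impact.
        return self.likelihood * self.impact

entry = RiskEntry(
    system="resume-screening model",
    characteristic=Characteristic.BIAS_MITIGATION,
    description="Selection rates differ across demographic groups.",
    likelihood=3,
    impact=4,
    mitigations=["audit selection rates quarterly", "retrain on balanced data"],
)
print(entry.severity)  # 12
```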


Use of AI in the Private Sector

Developing an appropriate governance approach for the use of AI in the private sector is important to fostering public trust and confidence in the technology. In response to a directive in EO 13859, the Office of Management and Budget (OMB) issued Memorandum M-21-06, “Guidance for Regulation of AI Applications”, which informs Federal agencies’ development of regulatory and non-regulatory approaches to the use of AI in the private sector. This guidance reflects the goals of ensuring public engagement in the regulatory process, adopting a risk assessment and cost-benefit approach to AI that is based on evidence and scientific integrity, and promoting AI that is trustworthy: fair, nondiscriminatory, transparent, safe, and secure. It further requires agencies to create plans consistent with this guidance, including identifying AI use case priorities, regulatory barriers, and planned actions, as informed by multi-stakeholder engagement. Some agencies have published their responses publicly, including the Department of Veterans Affairs, the U.S. Agency for International Development, and the Department of Energy.

The Food and Drug Administration (FDA) has identified 10 guiding principles to promote safe, effective, and high-quality medical devices that use AI and machine learning. These principles were developed jointly with counterparts in Canada and the United Kingdom.

Independent agencies have also issued their own guidance about the use of AI in a trustworthy manner. In April 2020, the Federal Trade Commission (FTC) issued business guidance on AI and algorithms, which stated that AI should be used in ways that are transparent, explainable, fair, empirically sound, and accountable; since then, the FTC has elaborated on its approach and taken enforcement action aligned with its guidance. In November 2020, the Consumer Product Safety Commission (CPSC) made AI and machine learning a priority in its operating plan; in May 2021, CPSC issued a report on its work to ensure the use of AI and machine learning does not result in unsafe products. In October 2021, the Equal Employment Opportunity Commission launched an initiative on AI and algorithmic fairness focused on ensuring that AI used in employment decisions complies with federal civil rights laws enforced by the independent agency.


Use of AI by the Federal Government

At the White House Summit on Artificial Intelligence in Government in September 2019, Dr. Lynne Parker (White House Office of Science and Technology Policy) led a panel discussion on how Federal agencies have adopted AI, with Lt Gen Jack Shanahan (DoD’s Joint AI Center), Dr. Patricia Flatley Brennan (NIH), and Mr. Charles Keckler (HHS). (Photo credit: Erik Jacobs, White House Office of Science and Technology Policy)

Through Executive Order 13960, the Federal Government has established guidelines promoting the use of trustworthy AI by Federal agencies to more effectively deliver services to the American people and foster public trust in this technology. These guidelines define principles for the use of AI in Government, establish a common policy for implementing the principles, direct agencies to catalogue their AI use cases, and call on the General Services Administration (GSA) and the Office of Personnel Management to enhance AI implementation expertise at the agencies. These guidelines apply primarily to civilian agencies outside the national security space, and are synergistic with existing Federal AI ethics principles, including the Department of Defense’s Ethical Principles for AI and the Intelligence Community’s Principles of AI Ethics and AI Ethics Framework. Agencies across the Federal Government are now working to implement these principles as they design, develop, acquire, and use AI to advance their missions; for example, the Defense Innovation Unit (DIU) developed Responsible AI Guidelines to help implement the DoD’s Ethical Principles in DIU’s commercial prototyping and acquisition programs. As part of the cross-government effort to implement EO 13960, agencies are also conducting an inventory of their non-classified and non-sensitive AI use cases. Where an AI application is found to be inconsistent with EO 13960, the responsible agency will be required to develop a plan to achieve consistency or to retire the application.

In September 2019, the White House hosted the Summit on Artificial Intelligence in Government to spark ideas for how the Federal Government can adopt AI to better achieve its mission and improve services to the American people. The Summit highlighted innovative efforts at Federal agencies that have already adopted AI, and looked ahead to future transformative AI applications that will make government more effective, efficient, and responsive.

At the agency and department level, several new AI centers and offices have been established to advance the adoption of trustworthy AI within those departments, including the Department of Defense (DoD) Joint AI Center, the Department of Energy (DOE) AI and Technology Office, and the Department of Veterans Affairs (VA) National AI Institute.

Furthering these efforts, the General Services Administration launched the GSA AI Center of Excellence in September 2019; the Center was later codified into law by the AI in Government Act of 2020. These efforts support and coordinate the use of AI across Federal agencies, helping to deploy scalable solutions and facilitating the sharing of best practices and tools for AI adoption. Illustrating the benefits of interagency collaboration through the AI Center of Excellence, an AI pilot project developed by the Department of Health and Human Services in 2019 led to a new effort, announced by GSA in September 2020, to assist Federal agencies in using AI for Regulatory Workflow Modernization. The Technology Modernization Fund, a collaboration between GSA and OMB, also supports several projects that advance the use of trustworthy AI in the Federal Government, such as the use of AI to improve cybersecurity.

GSA has also launched a number of efforts to mitigate AI risks, creating a Guide to AI Ethics, publishing the Digital Worker Identity Playbook, and supporting a project on Combating Bias in AI in partnership with the Census Bureau and a university. The AI Community of Practice, launched in November 2019, brings together AI practitioners across Federal agencies, some of whom participate in working groups on Responsible AI and on AI and Privacy. Additionally, as part of EO 13960, the GSA-based Presidential Innovation Fellowship established a track focused on advancing the use of trustworthy AI in the Federal Government.

The Government Accountability Office (GAO), a part of the legislative branch, has created an AI accountability framework for Federal agencies and other entities. GAO has also published a number of other reports related to AI, some of which are available on the Publications Library page of this website.


Engaging Stakeholders, Experts, and the Public

To better understand the challenges and opportunities in advancing trustworthy AI, the Federal Government has led a number of public engagements on AI. The White House Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) on AI-enabled biometric technologies, alongside plans to host listening sessions to hear from the public. As part of developing an AI risk management framework, NIST issued an RFI that received 106 responses from the public, which were used to inform a workshop with over 800 participants and a draft taxonomy of AI risk. A longer list of AI-related RFIs can be found in the Publications Library, under the publication type of “Requests for Information.”

OSTP is also working on a “bill of rights” for an AI-powered society, seeking to clarify the rights and freedoms that data-driven technologies are expected to respect. As described in an opinion piece, this initiative will be developed in collaboration with partners and experts from across the Federal Government and from academia, civil society, the private sector, and communities all over the country.

The Federal Government also receives external input on trustworthy AI via independent advisory committees. These include the former National Security Commission on AI and the forthcoming National AI Advisory Committee, which will include a Subcommittee on AI and Law Enforcement.
