Advancing Trustworthy AI
One of the key purposes of the National AI Initiative is to ensure that the United States leads the world in the development and use of trustworthy AI systems in the public and private sectors. The United States has long been a champion and defender of the core values of freedom; guarantees of human rights; the rule of law; stability in our institutions; rights to privacy, civil rights, and civil liberties; respect for intellectual property; and opportunities for all to pursue their dreams.
To be trustworthy, AI technologies must appropriately reflect characteristics such as accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security (including resilience to attacks), and must ensure that bias is mitigated. Factors such as fairness and transparency should be considered, particularly during deployment or use. In addition, the broader impacts of AI on society must be considered, such as implications for the workforce. Developing and using AI in ways that are ethical, reduce bias, promote fairness, and protect privacy is essential for fostering a positive effect on society consistent with core U.S. values.
Advancing trustworthy AI requires a multifaceted approach, including R&D investments addressing key technical challenges; development of metrics, standards, and assessment tools to measure and evaluate AI trustworthiness; engagement in the development of AI technical standards; governance approaches for the use of AI in the public and private sectors; and preparation of a diverse and inclusive workforce for the jobs of the future. It also requires international collaborations and partnerships, which are described on the International Cooperation page of this website.
Research and Development for Trustworthy AI
The Federal Government has prioritized AI R&D activities that address the ethical, legal, and societal implications of AI, as well as the safety and security of AI systems. The National AI R&D Strategic Plan: 2019 Update details many of the research challenges in these areas, while the 2016-2019 Progress Report: Advancing Artificial Intelligence R&D provides an overview of the numerous Federal R&D programs that address these research challenges.
For example, NSF’s Program on Fairness in Artificial Intelligence, in collaboration with a major technology company, supports research on fairness in AI with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. NSF also collaborated with a non-profit to support EArly-concept Grants for Exploratory Research (EAGER) on AI and Society, to better understand the social challenges arising from AI technology and enable scientific contributions to overcome them. Some of NSF’s National AI Research Institutes also fund research on trustworthy AI, such as the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. Beyond NSF, other initiatives include a center of excellence for trustworthy AI established jointly by the Department of Defense (DoD) and a historically black research university.
An important R&D emphasis for trustworthy AI is providing AI systems with explainability mechanisms that help human users understand reasons for AI outputs. DARPA’s Explainable AI (XAI) program aims to create machine learning techniques that produce more explainable solutions while maintaining high performance and appropriate levels of trust in the system.
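As a minimal illustration of what an explainability mechanism can look like in practice, the Python sketch below computes permutation feature importance, one widely used post-hoc technique for surfacing which inputs most influence a model’s outputs. It is a generic example using scikit-learn on synthetic data, not a method from the XAI program itself; the model, data, and settings are all illustrative assumptions.

```python
# Minimal sketch: post-hoc explainability via permutation feature importance.
# Illustrative only; not a DARPA XAI method. Model and data are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for a real decision-support task.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# held-out accuracy drops, a rough signal of which inputs drive the output.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```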
The complexity of many AI systems creates important safety and security challenges that must be addressed to ensure that these systems are trustworthy. AI systems also carry inherent cybersecurity risks that stem from how the technology is designed. R&D investments such as DARPA’s AI Next Campaign are creating solutions for countering adversarial attacks on AI technologies, such as attacks that attempt to contaminate training data, modify algorithms, create adversarial inputs, or exploit flaws in an AI system’s goals. This research is expected to lead to more secure, robust, and safe AI systems that are reliable and trustworthy.
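To make one of these attack classes concrete, the sketch below implements the fast gradient sign method (FGSM), a well-known way to craft adversarial inputs by nudging each input value in the direction that increases the model’s loss. This is a generic, hedged illustration in PyTorch with a toy untrained model and random data, not a technique or defense from the AI Next Campaign.

```python
# Minimal sketch of the fast gradient sign method (FGSM), one well-known way
# to craft adversarial inputs; illustrative only, with a toy model and data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier and a random "image" batch standing in for real inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)          # input values in [0, 1]
y = torch.tensor([3])                 # true label

loss_fn = nn.CrossEntropyLoss()
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

# FGSM step: perturb each pixel by epsilon in the direction that increases
# the loss, then clamp back to the valid input range.
epsilon = 0.1
perturbed = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(perturbed).argmax(dim=1).item())
```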
Metrics, Assessment Tools, and Technical Standards for AI
Metrics, assessment tools, and technical standards are essential for ensuring that AI technologies meet critical objectives for functionality and interoperability. The National Institute of Standards and Technology (NIST) leads many efforts to advance foundational research for measuring and assessing AI technologies, including the development of AI data standards and best practices, as well as AI evaluation methodologies and standard testing protocols. As called for by Executive Order 13859, NIST released a plan in August 2019 for Federal engagement in the development of AI technical standards and related tools in support of reliable, robust, and trustworthy AI systems. Other NIST efforts toward developing standards and tools for trustworthy AI include multistakeholder convenings, papers, and roadmaps that address challenges such as explainable AI, AI bias, AI security, and U.S.-EU alignment.
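To give a concrete sense of what a trustworthiness metric can look like, the sketch below computes demographic parity difference, one simple and commonly discussed fairness measure: the gap in favorable-outcome rates between two groups. This is an illustrative example with hypothetical predictions and group labels, not a NIST-issued metric or tool.

```python
# Minimal sketch of one simple fairness metric, demographic parity difference:
# the gap in positive-prediction rates between two groups. Illustrative only;
# this is not a NIST tool, and the data here are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray,
                                  group: np.ndarray) -> float:
    """Absolute gap in favorable-outcome rates between group 0 and group 1."""
    rate_g0 = y_pred[group == 0].mean()
    rate_g1 = y_pred[group == 1].mean()
    return abs(rate_g0 - rate_g1)

# Hypothetical model predictions (1 = favorable outcome) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"demographic parity difference: "
      f"{demographic_parity_difference(y_pred, group):.2f}")
```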
Recognizing the importance of tools for evaluating and managing risk in AI systems, the National AI Initiative calls for NIST, in collaboration with NSF, DOE, and other public and private stakeholders, to develop a voluntary AI Risk Management Framework (AI RMF) for trustworthy AI. This framework will include common definitions and characterizations of trustworthiness; identify and provide approaches for developing and assessing the trustworthiness of AI; and identify and provide approaches for mitigating risks from AI systems, along with other relevant content. NIST published a second draft of the AI RMF in August 2022 and plans to publish AI RMF 1.0 in January 2023.
Use of AI in the Private Sector
Developing an appropriate governance approach for the use of AI in the private sector is important to fostering public trust and confidence in the technology. In response to a directive in EO 13859, the Office of Management and Budget (OMB) issued Memorandum M-21-06, “Guidance for Regulation of AI Applications”, which provides guidance to Federal agencies informing the development of regulatory and non-regulatory approaches to the use of AI in the private sector. This guidance reflects the goals of ensuring public engagement in the regulatory process, adopting a risk assessment and cost-benefit approach to AI that is based on evidence and scientific integrity, and emphasizing the importance of promoting AI that is trustworthy: fair, nondiscriminatory, transparent, safe, and secure. It further requires agencies to create plans consistent with this guidance, including identifying AI use case priorities, regulatory barriers, and planned actions, as informed by multi-stakeholder engagement. Some agencies have published their responses publicly, including the Department of Health and Human Services, the Department of Veterans Affairs, the U.S. Agency for International Development, and the Department of Energy.
The Food and Drug Administration (FDA) has identified 10 guiding principles to promote safe, effective, and high-quality medical devices that use AI and machine learning. These principles were developed jointly with counterparts in Canada and the United Kingdom.
Independent agencies have also issued their own guidance about the use of AI in a trustworthy manner that protects consumers. In April 2020, the Federal Trade Commission (FTC) issued business guidance on AI and algorithms, which stated that AI should be used in ways that are transparent, explainable, fair, empirically sound, and accountable; since then, the FTC has elaborated on its approach, taken enforcement action aligned with its guidance, issued a report on the risks of using AI to combat online problems, and announced that it is considering initiating a rulemaking to ensure algorithmic decision-making does not result in unlawful discrimination. In November 2020, the Consumer Product Safety Commission (CPSC) made AI and machine learning a priority in its operating plan; in May 2021, CPSC issued a report on its work to ensure the use of AI and machine learning does not result in unsafe products. In March 2021, five Federal agencies and departments issued a Request for Information on financial institutions’ use of AI; in June 2022, the Consumer Financial Protection Bureau (CFPB) issued a circular stating that creditors who use complex algorithms, including AI, in their credit decisions must provide a notice that discloses the specific principal reasons for taking an adverse action, such as denying credit.
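To illustrate what disclosing “specific principal reasons” can involve technically, one common approach in credit scoring is to rank an applicant’s per-feature contributions to the model score and report the most adverse ones as reason codes. The sketch below does this for a simple logistic-regression scorecard; the feature names, weights, and threshold are hypothetical, and this is a sketch of the general technique, not a compliance-ready implementation or anything prescribed by the CFPB circular.

```python
# Minimal sketch: deriving "principal reasons" (adverse action reason codes)
# from a linear credit-scoring model by ranking per-applicant feature
# contributions. Feature names and weights are hypothetical; this illustrates
# the general technique, not a compliant implementation.
import numpy as np

features = ["utilization", "delinquencies", "credit_age", "inquiries"]
weights = np.array([-1.8, -2.1, 0.9, -0.7])  # hypothetical model coefficients
bias = 0.5

# One applicant's standardized inputs (higher = more of that attribute).
x = np.array([1.2, 0.8, -0.5, 1.5])

score = 1 / (1 + np.exp(-(weights @ x + bias)))
if score < 0.5:  # adverse action: identify what pushed the score down
    contributions = weights * x
    top = np.argsort(contributions)[:2]      # most negative contributions
    reasons = [features[i] for i in top]
    print(f"score {score:.2f}; principal reasons for denial: {reasons}")
```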
Several Federal agencies have taken steps to promote trustworthy AI in workplaces and schools. In October 2021, the Equal Employment Opportunity Commission (EEOC) launched an initiative on AI and algorithmic fairness focused on ensuring that AI used in employment decisions complies with federal civil rights laws; in January 2022, EEOC and the Department of Labor launched a multi-year collaborative effort to reimagine hiring and recruitment practices, including in the use of automated systems; and in May 2022, EEOC and the Department of Justice released technical assistance and guidance about disability discrimination when employers use AI and other software tools to make employment decisions. In November 2021, the Partnership on Employment & Accessible Technology, funded by the Department of Labor, released a toolkit on implementing equitable AI in the workplace; in September 2022, the Department of Labor announced efforts to ramp up enforcement of required surveillance reporting to protect worker organizing. In October 2022, the National Labor Relations Board’s General Counsel issued a memo outlining a framework for protecting employees from unlawful electronic monitoring and automated management practices. By early 2023, the Department of Education will release recommendations on the use of AI for teaching and learning.
Additionally, Federal agencies are pursuing guidance to advance trustworthy AI in health care and housing. In July 2022, the Department of Health and Human Services (HHS) issued a proposed rule that includes a provision prohibiting algorithmic discrimination in clinical decision-making by covered health programs and activities. Through multiple rulemaking processes, HHS requested information on how Medicare policy can encourage software developers to prevent and mitigate bias in algorithms and predictive modeling; by the end of 2022, HHS will release a vision for advancing Health Equity by Design that includes methods to reduce algorithmic discrimination in healthcare algorithms, and will also release an examination of health care algorithms and racial and ethnic disparities for public comment. The Department of Housing and Urban Development will release guidance addressing the use of tenant screening algorithms in ways that may violate the Fair Housing Act; and, as part of a forthcoming proposed rule establishing quality control standards on automated home valuation models, the Federal agencies that regulate mortgage financing will establish a nondiscrimination standard so that these models do not rely upon biased data that could replicate past discrimination in housing.
Use of AI by the Federal Government
At the White House Summit on Artificial Intelligence in Government in September 2019, Dr. Lynne Parker (White House Office of Science and Technology Policy) led a panel discussion on how Federal agencies have adopted AI, with Lt Gen Jack Shanahan (DoD’s Joint AI Center), Dr. Patricia Flatley Brennan (NIH), and Mr. Charles Keckler (HHS). (Photo credit: Erik Jacobs, White House Office of Science and Technology Policy)
Through Executive Order 13960, the Federal Government has established guidelines promoting the use of trustworthy AI by Federal agencies to more effectively deliver services to the American people and foster public trust in this technology. These guidelines define principles for the use of AI in Government, establish a common policy for implementing the principles, direct agencies to catalogue their AI use cases, and call on the General Services Administration (GSA) and the Office of Personnel Management to enhance AI implementation expertise at the agencies. These guidelines apply primarily to civilian agencies outside the national security space, and are synergistic with existing Federal AI ethics principles, including the Department of Defense’s Ethical Principles for AI and Responsible AI Strategy and Implementation Pathway and the Intelligence Community’s Principles of AI Ethics and AI Ethics Framework. Agencies across the Federal Government are now working to implement these principles as they design, develop, acquire, and use AI to advance their missions. For example, the Defense Innovation Unit (DIU) developed Responsible AI Guidelines to help implement the DoD’s Ethical Principles into DIU’s commercial prototyping and acquisition programs. As another example, the U.S. Agency for International Development (USAID) developed an AI Action Plan to use AI in equitable, inclusive, and rights-respecting ways. And, in July 2022, the Department of Veterans Affairs instituted a Principle-Based Ethics Framework for Access to and Use of Veteran Data.
As part of the cross-government effort to implement EO 13960, many agencies completed inventories of their non-classified and non-sensitive AI use cases. Where an AI application was found to be inconsistent with EO 13960’s principles for the use of AI in government, the agency was required to develop a plan to bring the application into consistency with EO 13960 or to retire it. Additionally, OMB incorporated a number of AI-related milestones into the Federal Data Strategy 2021 Action Plan. As part of this action plan, in April 2022, the Chief Information Officers Council provided a preliminary, alpha version of an algorithmic impact assessment tool for Federal agencies.
In early work to accelerate the adoption of AI in government, the White House hosted the Summit on Artificial Intelligence in Government in September 2019 to spark ideas for how the Federal government can adopt AI to better achieve its mission and improve services to the American people. The Summit highlighted innovative efforts at Federal agencies that have already adopted AI, and looked ahead to future transformative AI applications that will make government more effective, efficient, and responsive.
At the agency and department level, several new AI centers and offices have been established to advance the adoption of trustworthy AI within those departments, including the Department of Defense (DoD) Chief Digital and AI Office (succeeding the Joint AI Center), the Department of Energy (DOE) AI and Technology Office, the Department of Health and Human Services (HHS) Office of the Chief AI Officer, and the Department of Veterans Affairs (VA) National AI Institute. Some of these entities have developed their own resources and procedures to advance trustworthy AI. For example, HHS’s Office of the Chief AI Officer developed a Trustworthy AI Playbook, DOE’s AI and Technology Office developed an AI Risk Management Playbook, and the Census Bureau’s xD team developed a Bias Toolkit.
Furthering these efforts, GSA launched the GSA AI Center of Excellence in September 2019; the Center was later codified into law by the AI in Government Act of 2020. These efforts support and coordinate the use of AI across Federal agencies, helping to deploy scalable solutions and facilitating the sharing of best practices and tools for AI adoption. Illustrating the benefits of interagency collaboration on AI through the AI Center of Excellence, an AI pilot project developed by the Department of Health and Human Services in 2019 led to a new effort announced by GSA in September 2020 to assist Federal agencies in using AI for Regulatory Workflow Modernization. The Technology Modernization Fund, a collaboration between GSA and OMB, also supports several projects that advance the use of trustworthy AI in the Federal Government, such as a 2022 project on the use of AI to improve cybersecurity. In April 2022, GSA’s Technology Transformation Service launched the Applied AI Challenge competition to engage U.S.-based companies and organizations in accelerating the use of AI technologies that expand opportunities for new business processes and service delivery. And, in late 2022, GSA released an AI Guide for Government, providing tips for government decision-makers on developing AI capabilities.
GSA also launched a number of efforts to mitigate AI risks in 2021: creating a Guide to AI Ethics, publishing the Digital Worker Identity Playbook, and supporting a project on Combating Bias in AI in partnership with the Census Bureau and a university. The AI Community of Practice, launched in November 2019, brings together AI practitioners across Federal agencies, some of whom participate in working groups on Responsible AI and on AI and Privacy. Additionally, as part of EO 13960, the GSA-based Presidential Innovation Fellowship established a track focused on advancing the use of trustworthy AI in the Federal Government.
In May 2022, Executive Order 14074 called for the National Academy of Sciences to conduct a study of facial recognition technology, other technologies using biometric information, and predictive algorithms. EO 14074 also directed the Attorney General, the Secretary of Homeland Security, and the Director of the White House Office of Science and Technology Policy (OSTP) to lead an interagency process on law enforcement agencies’ use of these technologies, which will address safeguards for privacy, civil rights, and civil liberties, as well as considerations about accuracy and disparate impact.
The Government Accountability Office (GAO), a part of the legislative branch, created an AI accountability framework in 2021 for Federal agencies and other entities. GAO has also published a number of other reports related to AI, some of which are available on the Publications Library page of this website.
The Biden-Harris Administration is currently working across agencies to develop new policies and guidance for Federal procurement of AI products and services, grounded in effective and promising practices for preventing and addressing bias and algorithmic discrimination resulting from the use of AI and other advanced technologies.
Engaging Stakeholders, Experts, and the Public
To better understand the challenges and opportunities in advancing trustworthy AI, the Federal Government has led a number of public engagements about AI. OSTP issued a Request for Information (RFI) on AI-enabled biometric technologies, alongside two listening sessions to hear from the public; the RFI received 130 responses. OSTP also issued an RFI on Advancing Privacy-Enhancing Technologies to help inform a national strategy on privacy-preserving data sharing and analytics. As part of developing the AI Risk Management Framework (AI RMF), NIST issued an RFI that received 106 responses from the public, which were used to inform a workshop with over 800 participants, a draft taxonomy of AI risk, and an AI RMF concept paper. A longer list of AI-related RFIs can be found in the Publications Library, under the publication type of “Requests for Information.” In June 2022, USAID announced the launch of a collaborative process, which will include listening sessions, to develop a Code of Ethics for technologists.
In October 2022, OSTP released a Blueprint for an AI Bill of Rights to help guide the design, development, and deployment of artificial intelligence (AI) and other automated systems so that they protect the rights of the American public. The Blueprint for an AI Bill of Rights identifies five principles to help protect the American public in the age of artificial intelligence: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. Developed through extensive consultation with the American public, stakeholders, and U.S. government agencies, the Blueprint also includes concrete steps that governments, companies, communities, and others can take to build these key protections into policy, practice, or technological design so that automated systems work for the American people.
The Federal Government also receives external input on trustworthy AI via independent advisory committees. These include the former National Security Commission on AI, which developed its Key Considerations for the Responsible Development and Fielding of AI, and the National AI Advisory Committee, which includes 27 experts who will advise the President and Congress on a variety of AI issues, including the advancement of trustworthy AI.