Advancing Trustworthy AI

One of the key purposes of the National AI Initiative is to ensure that the United States leads the world in the development and use of trustworthy AI systems in the public and private sectors. The United States has long been a champion and defender of the core values of freedom; guarantees of human rights; the rule of law; stability in our institutions; rights to privacy, civil rights, and civil liberties; respect for intellectual property; and opportunities for all to pursue their dreams.

To be trustworthy, AI technologies must appropriately reflect characteristics such as accuracy, explainability and interpretability, privacy, reliability, robustness, safety, and security or resilience to attacks, and they must ensure that bias is mitigated. Factors such as fairness and transparency should be considered, particularly during deployment and use. In addition, the broader impacts of AI on society, such as implications for the workforce, must be considered. Developing and using AI in ways that are ethical, reduce bias, promote fairness, and protect privacy is essential for fostering a positive effect on society consistent with core U.S. values.

Advancing trustworthy AI requires a multifaceted approach: R&D investments that address key technical challenges; metrics, standards, and assessment tools to measure and evaluate AI trustworthiness; engagement in the development of AI technical standards; governance approaches for the use of AI in the public and private sectors; and preparation of a diverse and inclusive workforce for the jobs of the future.

Research and Development for Trustworthy AI

The Federal Government has prioritized AI R&D activities that address the ethical, legal, and societal implications of AI, as well as the safety and security of AI systems.  The National AI R&D Strategic Plan: 2019 Update details many of the research challenges in these areas, while the 2016-2019 Progress Report: Advancing Artificial Intelligence R&D provides an overview of the numerous Federal R&D programs that address these research challenges.

For example, NSF’s Program on Fairness in Artificial Intelligence, in collaboration with a major technology company, supports research on fairness in AI with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society. NSF also collaborated with a non-profit to support Early-Concept Grants for Exploratory Research (EAGER) on AI and Society, to better understand the social challenges arising from AI technology and to enable scientific contributions to overcome them.

An important R&D emphasis for trustworthy AI is providing AI systems with explainability mechanisms that help human users understand the reasons for AI outputs. DARPA’s Explainable AI (XAI) program aims to create machine learning techniques that produce more explainable models while maintaining high prediction performance, enabling human users to understand, appropriately trust, and effectively manage AI systems.
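
To make the idea of an explainability mechanism concrete, the sketch below computes permutation feature importance, one simple and widely used way to surface which inputs drive a model’s predictions. The dataset, model, and all names here are illustrative assumptions; this is a generic example of the technique, not the DARPA XAI approach itself.

```python
# A minimal sketch of one common explainability mechanism: permutation
# feature importance. The dataset and model are illustrative choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])  # break the link between feature j and the labels
    importances.append(baseline - model.score(X_perm, y_test))

# Features whose permutation hurts accuracy the most are the model's most
# influential inputs -- one human-readable "reason" for its outputs.
top = np.argsort(importances)[::-1][:5]
print("Most influential feature indices:", top)
```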

The complexity of many AI systems creates important safety and security challenges that must be addressed to ensure that these systems are trustworthy. AI systems carry inherent cybersecurity risks that stem from how the underlying models are designed and trained. R&D investments such as DARPA’s AI Next Campaign are creating solutions for countering adversarial attacks on AI technologies, such as those that attempt to contaminate training data, modify algorithms, create adversarial inputs, or exploit flaws in AI system goals. This research is expected to lead to more secure, robust, and safe AI systems that are reliable and trustworthy.
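
To illustrate the class of attack described above, the following is a minimal sketch of the fast gradient sign method (FGSM), a standard technique for crafting adversarial inputs. The toy logistic-regression model and every value in it are illustrative assumptions, not drawn from DARPA’s work.

```python
# A minimal sketch of an adversarial input crafted with the fast gradient
# sign method (FGSM) against a hand-rolled logistic regression.
# All weights and inputs here are toy, illustrative values.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: logistic regression with fixed (pretend "trained") weights.
w = rng.normal(size=20)
b = 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = 2.0 * w / (w @ w)  # a benign input the model confidently labels positive
y = 1.0                # its true label

# Gradient of the cross-entropy loss with respect to the *input*:
# dL/dx = (sigmoid(w @ x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: nudge every input dimension by epsilon in the direction that
# increases the loss, producing a nearly identical but misleading input.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(f"clean prediction:       {sigmoid(w @ x + b):.3f}")
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")
```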

Metrics, Assessment Tools, and Technical Standards for AI

Metrics, assessment tools, and technical standards are essential for ensuring that AI technologies meet critical objectives for functionality and interoperability. The National Institute of Standards and Technology (NIST) leads many efforts to advance foundational research for measuring and assessing AI technologies, including the development of AI data standards and best practices, as well as AI evaluation methodologies and standard testing protocols. As called for by the February 2019 Executive Order on Maintaining American Leadership in AI (EO 13859), NIST released a plan in August 2019 for Federal engagement in the development of AI technical standards and related tools in support of reliable, robust, and trustworthy AI systems. Other NIST efforts toward an AI measurement science include multi-stakeholder convenings and papers that address challenges such as explainable AI, AI bias, and AI security.
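
As one concrete example of the kind of metric such assessment tools might compute, the sketch below checks a model’s decisions for a statistical-parity gap across demographic groups. The data and the notion of "approval" are illustrative assumptions, not a NIST-defined measure.

```python
# A minimal sketch of a simple bias metric: the statistical parity gap
# between two demographic groups. The decisions and groups are simulated.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model decisions (True = approve) and a protected attribute.
group = rng.integers(0, 2, size=1000)                     # group 0 or 1
approve = rng.random(1000) < np.where(group == 1, 0.55, 0.45)

rate_0 = approve[group == 0].mean()
rate_1 = approve[group == 1].mean()
parity_gap = abs(rate_1 - rate_0)

print(f"approval rate, group 0: {rate_0:.2f}")
print(f"approval rate, group 1: {rate_1:.2f}")
print(f"statistical parity gap: {parity_gap:.2f}")

# A large gap flags possible disparate impact; a real assessment tool would
# combine checks like this with context about the task and the data.
```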

Recognizing the importance of tools for evaluating and managing risk in AI systems, the National AI Initiative calls for NIST, in collaboration with NSF, DOE, and other public and private stakeholders, to develop a voluntary risk management framework for trustworthy AI. This framework will include common definitions and characterizations for trustworthiness, identify and provide approaches for developing and assessing the trustworthiness of AI, and identify and provide approaches for mitigating risks from AI systems, along with other relevant content.

Guidance for the Regulation of AI in the Private Sector

Developing an appropriate governance approach for the use of AI in the private sector is important to fostering public trust and confidence in the technology. In response to a directive in EO 13859, OMB issued Memorandum M-21-06, “Guidance for Regulation of AI Applications,” which guides Federal agencies in developing regulatory and non-regulatory approaches to the use of AI in the private sector. This guidance reflects the goals of ensuring public engagement in the regulatory process; adopting a risk-assessment and cost-benefit approach to AI that is based on evidence and scientific integrity; and promoting AI that is trustworthy: fair, nondiscriminatory, transparent, safe, and secure. It further requires agencies to create plans consistent with this guidance, including identifying AI use case priorities, regulatory barriers, and planned actions, as informed by multi-stakeholder engagement.

Use of AI by the Federal Government

At the White House Summit on Artificial Intelligence in Government in September 2019, Dr. Lynne Parker (White House Office of Science and Technology Policy) led a panel discussion on how Federal agencies have adopted AI, with Lt Gen Jack Shanahan (DoD’s Joint AI Center), Dr. Patricia Flatley Brennan (NIH), and Mr. Charles Keckler (HHS).

Credit: Erik Jacobs, White House Office of Science and Technology Policy

Through Executive Order 13960, the Federal Government has established guidelines promoting the use of trustworthy AI by Federal agencies to more effectively deliver services to the American people and foster public trust in this technology. These guidelines define principles for the use of AI in Government, establish a common policy for implementing those principles, direct agencies to catalogue their AI use cases, and call on the General Services Administration (GSA) and the Office of Personnel Management to enhance AI implementation expertise at the agencies. The guidelines apply primarily to civilian agencies outside the national security space and are synergistic with existing Federal AI ethics principles, including the Department of Defense’s Ethical Principles for AI and the Intelligence Community’s Principles of AI Ethics and AI Ethics Framework. Agencies across the Federal Government are now working to implement these principles as they design, develop, acquire, and use AI to advance their missions.

To accelerate the adoption of AI in government, the White House hosted the Summit on Artificial Intelligence in Government in September 2019 to spark ideas for how the Federal Government can use AI to better achieve its mission and improve services to the American people. The Summit highlighted innovative efforts at Federal agencies that had already adopted AI and looked ahead to transformative AI applications that will make government more effective, efficient, and responsive.

At the agency and department level, several new AI centers and offices have been established to advance the adoption of AI within those departments, including the Department of Defense (DoD) Joint AI Center, the Department of Energy (DOE) AI and Technology Office, and the Department of Veterans Affairs (VA) National AI Institute.

Furthering these efforts, GSA launched its AI Center of Excellence in September 2019, along with the AI Community of Practice soon thereafter. The AI Center of Excellence was later codified into law by the AI in Government Act of 2020. These efforts support and coordinate the use of AI across Federal agencies, helping to deploy scalable solutions and facilitating the sharing of best practices and tools for AI adoption.

Illustrating the benefits of interagency collaboration through the AI Center of Excellence, a 2019 AI pilot project developed by the Department of Health and Human Services led to a new effort, announced by GSA in September 2020, to assist Federal agencies in using AI for Regulatory Workflow Modernization.
