Seeking to promote the development and use of artificial intelligence (AI) technologies and systems that are trustworthy and responsible, NIST today released for public comment an initial draft of the AI Risk Management Framework (AI RMF). The draft addresses risks in the design, development, use and evaluation of AI systems.
The voluntary framework is intended to improve understanding of AI-related risks and to help organizations manage enterprise and societal risks posed by AI systems. It aims to provide a flexible, structured and measurable process for addressing AI risks throughout the AI lifecycle, and it offers guidance on developing and using trustworthy and responsible AI. NIST is also developing a companion practice guide to the AI RMF with additional practical guidance; comments on the framework will also inform that guide.
“We have developed this draft with extensive input from the private and public sectors, knowing full well how quickly AI technologies are being developed and put to use and how much there is to be learned about related benefits and risks,” said Elham Tabassi, chief of staff of the NIST Information Technology Laboratory (ITL), who is coordinating the agency’s AI work, including the AI RMF.