It is widely acknowledged that “trustworthiness” in artificial intelligence (AI) systems is critical to their development and appropriate use in all parts of our society. That is easier said than done, of course: there is little agreement on what constitutes trustworthy AI, or on the research, standards, and policy steps needed to define and achieve it.
This workshop kicked off a NIST initiative involving private and public sector organizations and individuals in discussions about building blocks for trustworthy AI systems and the associated measurements, methods, standards, and tools to implement those building blocks when developing, using, and testing AI systems. NIST’s effort is being informed by a series of workshops following this initial session.
The second workshop, held August 18, 2020, aimed to develop a shared understanding of one characteristic of trustworthiness – bias in AI, what it is, and how to measure it. Future workshops on other technical requirements of trustworthy AI will be announced. All workshops for the immediate future will be virtual and open to the public at no cost.
This launch event brought together experts from the private and public sectors to engage in collaborative discussions.