On August 18, NIST hosted the Bias in AI Workshop, a virtual event aimed at developing a shared understanding of bias in AI: what it is and how to measure it.
Workshop goal: To develop a shared understanding of bias in AI that can guide and speed the innovation and adoption of trustworthy AI systems – including the measurements, standards, and related tools which are critical parts of the AI foundation.
The intensity of dialogue around trustworthy AI has been increasing over the past year. A key but still insufficiently defined building block of trustworthiness is bias in AI-based products and systems. The insights shared by speakers and other participants during this interactive workshop are intended to move the AI community closer to agreement on defining bias in AI. That agreement will guide and strengthen efforts by NIST and others to make progress on multiple aspects of trustworthy AI. The workshop will also help build a community of interest for NIST’s multi-faceted work in this arena, which includes foundational and use-inspired research, evaluation, standards, and policy engagement.
Workshop participants: