NIST Asks AI to Explain Itself

It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions?

This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices.

The report is part of a broader NIST effort to help develop trustworthy AI systems. NIST's foundational research aims to build trust in these systems by understanding their theoretical capabilities and limitations and by improving their accuracy, reliability, security, robustness, and explainability, the last of which is the focus of this latest publication.
