When should someone trust an AI assistant’s predictions?

In a busy hospital, a radiologist uses an artificial intelligence system to help her diagnose medical conditions based on patients’ X-ray images. Using the AI system can help her make faster diagnoses, but how does she know when to trust the AI’s predictions?

Traditionally, she has no reliable way to know. Instead, she must fall back on her own expertise, on a confidence level reported by the system itself, or on an explanation of how the algorithm reached its prediction (one that may look convincing but still be wrong) to estimate whether the result is trustworthy.

To help people better understand when to trust an AI “teammate,” Massachusetts Institute of Technology researchers created a technique that guides humans to a more accurate understanding of when a machine makes correct predictions and when it makes incorrect ones. The research is supported by the U.S. National Science Foundation.
