A new research effort at the National Institute of Standards and Technology (NIST) aims to address a pervasive issue in our data-driven society: a lack of fairness that sometimes turns up in the answers we get from information retrieval software.
Software of this type is everywhere, from popular search engines to lesser-known algorithms that help specialists comb through databases. This software usually incorporates forms of artificial intelligence that help it learn to make better decisions over time. But it bases these decisions on the data it receives, and if that data is biased in some way, the software will learn to make decisions that reflect that bias too. These decisions can have real-world consequences, influencing which music artists a streaming service suggests and whether you are recommended for a job interview.
“It’s now recognized that systems aren’t unbiased. They can actually amplify existing bias because of the historical data the systems train on,” said Ellen Voorhees, a NIST computer scientist. “The systems are going to learn that bias and recommend you take an action that reflects it.”