The inability of artificial intelligence (AI) to represent and model human partners is the single biggest challenge preventing effective human-machine teaming today. Current AI agents can respond to commands and carry out instructions within their training, but they cannot understand the intentions, expectations, emotions, and other aspects of social intelligence that come naturally to their human counterparts. This lack of understanding stymies efforts to create safe, efficient, and productive human-machine collaboration.
“As humans, we are able to infer unobservable states, such as situational beliefs and goals, and use those to predict the subsequent actions, reactions, or needs of another individual,” said Dr. Joshua Elliott, a program manager in DARPA’s Information Innovation Office (I2O). “Machines need to be able to do the same if we expect them to collaborate with us in a useful and effective way or serve as trusted members of a team.”
Teaching machines social intelligence, however, is no small feat. Humans intuitively build mental models of the world around them that include approximations of other people's mental models, a skill called Theory of Mind (ToM). Humans use ToM to infer the mental states of their teammates from observed actions and context, and to predict future actions based on those inferences. These models are built on each individual's existing experiences, observations, and beliefs. Within a team setting, humans build shared mental models by aligning around key aspects of their environment, their team, and their strategies. ToM and shared mental models are key elements of human social intelligence that work together to enable effective human collaboration.
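One common computational framing of this kind of inference is Bayesian inverse planning: watch an actor's behavior, assume the actor is approximately rational, and update a belief over that actor's possible goals, then use the belief to predict the next action. The sketch below is a minimal, hypothetical illustration of that idea; the toy world, the candidate goals, and the rationality parameter are invented for the example and do not describe any particular DARPA program or system.

```python
import math
from collections import Counter

# Hypothetical 1-D world: an agent starts near position 0 and is heading
# toward one of two candidate goals. The observer infers the hidden goal
# from the actions it sees, then predicts the agent's next move.
GOALS = {"coffee machine": +5, "printer": -5}
ACTIONS = {"left": -1, "right": +1}
RATIONALITY = 2.0  # higher = observer assumes the agent acts more optimally


def action_likelihood(position, action, goal_pos):
    """P(action | position, goal) for a noisy-rational (softmax) agent."""
    def utility(a):
        # Moving closer to the goal is more useful.
        return -abs((position + ACTIONS[a]) - goal_pos)
    scores = {a: math.exp(RATIONALITY * utility(a)) for a in ACTIONS}
    return scores[action] / sum(scores.values())


def infer_goal(observed, prior=None):
    """Bayesian update of the belief over goals from (position, action) pairs."""
    belief = dict(prior) if prior else {g: 1 / len(GOALS) for g in GOALS}
    for position, action in observed:
        for goal, goal_pos in GOALS.items():
            belief[goal] *= action_likelihood(position, action, goal_pos)
        norm = sum(belief.values())
        belief = {g: p / norm for g, p in belief.items()}
    return belief


def predict_next_action(position, belief):
    """Marginalize over the inferred goals to predict the next action."""
    probs = Counter()
    for goal, p_goal in belief.items():
        for action in ACTIONS:
            probs[action] += p_goal * action_likelihood(position, action, GOALS[goal])
    return probs.most_common(1)[0]


if __name__ == "__main__":
    # The observer watches the agent step right twice, from positions 0 and 1.
    trajectory = [(0, "right"), (1, "right")]
    belief = infer_goal(trajectory)
    print("belief over goals:", belief)
    print("predicted next action:", predict_next_action(2, belief))
```

In this toy setup, two observed steps toward the "coffee machine" shift the observer's belief strongly toward that goal, and the predicted next action follows from the belief rather than from the raw actions alone, which is the essence of using an inferred mental state to anticipate a teammate.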