The United States Department of Defense (DOD) and intelligence community require computational systems that can robustly and automatically analyze large amounts of multimedia. These systems must also communicate and cooperate with people to resolve ambiguities and improve performance over time.
However, today’s machine learning approaches produce artificial intelligence (AI) agents that cannot interact with humans through conversation except in limited, purpose-built applications. Current computational paradigms rely on statistical methods, yet they lack the sufficiently diverse, representative, annotated training data needed to achieve the accuracy required for successful implementation. Moreover, these agents cannot understand concepts, such as the properties and capabilities of objects, which prevents them from handling previously unseen objects, activities, scenes, or entities.
DARPA’s Environment-driven Conceptual Learning (ECOLE) program aims to radically improve these technologies by creating AI agents capable of continually learning from linguistic and visual input. The goal is to enable human-machine collaborative analysis of image, video, and multimedia documents during time-sensitive, mission-critical DOD analytic tasks where reliability and robustness are essential.