Designing Chips for Real Time Machine Learning

The current generation of machine learning (ML) systems would not have been possible without significant computing advances made over the past few decades. The development of the graphics processing unit (GPU) was critical to the advancement of ML, as it provided new levels of compute power needed for ML systems to process and train on large data sets. As the field of artificial intelligence looks to advance beyond today’s ML capabilities, pushing into the realm of learning in real time, new levels of computing are required. Highly specialized application-specific integrated circuits (ASICs) show promise in meeting the physical size, weight, and power (SWaP) requirements of advanced ML applications, such as autonomous systems and 5G. However, the high cost of design and implementation has made the development of ML-specific ASICs impractical for all but the highest-volume applications.

“A critical challenge in computing is the creation of processors that can proactively interpret and learn from data in real-time, apply previous knowledge to solve unfamiliar problems, and operate with the energy efficiency of the human brain,” said Andreas Olofsson, a program manager in DARPA’s Microsystems Technology Office (MTO). “Competing challenges of low-SWaP, low-latency, and adaptability require the development of novel algorithms and circuits specifically for real-time machine learning. What’s needed is the rapid development of energy efficient hardware and ML architectures that can learn from a continuous stream of new data in real time.”
