Over the past decade, the world has seen a tremendous increase in the deployment of artificial
intelligence (AI) technology. The main horsepower behind the success of AI systems comes from
deep learning models and machine learning (ML) algorithms. Recently, a new AI paradigm has
emerged: Automated Machine Learning (AutoML), along with its subfield Neural Architecture Search
(NAS).
State-of-the-art ML models consist of complex workflows with numerous design choices and
variables that must be tuned for optimal performance. Optimizing all of these variables by hand can
be complex or even intractable. NAS finds a high-performing deep learning architecture
automatically: it uses ML models to design or train other ML models, running trial-and-error
experiments orders of magnitude faster than a human designer could. The figure
below illustrates this paradigm shift and the place of a new tool known as TE-NAS within it.
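To make the trial-and-error loop concrete, here is a minimal random-search sketch of NAS. The search space, its option values, and the scoring function are all hypothetical placeholders invented for illustration; in a real search, `evaluate` would train the candidate model, which is exactly the expensive step discussed below.

```python
import random

# Toy search space: each architecture is a choice of depth, width, and
# kernel size. These names and ranges are illustrative only, not the
# search space of TE-NAS or any particular NAS system.
SEARCH_SPACE = {
    "depth": [8, 14, 20],
    "width": [16, 32, 64],
    "kernel_size": [3, 5, 7],
}

def sample_architecture():
    """Draw one candidate architecture uniformly at random."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in for the expensive step: in real NAS this would train the
    candidate and return its validation accuracy. Here we fake a score
    so the sketch runs instantly."""
    return random.random()

def random_search(num_trials=100):
    """Trial-and-error loop: sample, score, keep the best seen so far."""
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        score = evaluate(arch)  # hours of training per call in practice
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = random_search()
    print(f"Best architecture found: {arch} (score {score:.3f})")
```

Even this simplest possible strategy has the shape of every NAS method: the cost is dominated by how many candidates must be scored and how long each score takes to obtain.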
NAS automates the process of finding a good model, but the search itself can be quite expensive.
NAS must examine the performance of many models, both good and bad, before it discovers what
makes an architecture work, and it can take a long time even to determine whether a single model
will perform well. Running NAS at scale therefore requires supercomputers, and even with
advanced hardware a search can take days or even weeks.
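A rough back-of-envelope calculation shows why. The numbers below are assumptions chosen purely for illustration, not measurements from any particular NAS system:

```python
# Hypothetical search cost (all figures are assumptions for illustration).
candidates_evaluated = 20_000   # architectures the search tries
gpu_hours_per_candidate = 2.0   # training time needed to score one candidate

total_gpu_hours = candidates_evaluated * gpu_hours_per_candidate
gpus = 100                      # size of an assumed GPU cluster
wall_clock_days = total_gpu_hours / gpus / 24

print(f"{total_gpu_hours:,.0f} GPU-hours "
      f"= about {wall_clock_days:.1f} days on {gpus} GPUs")
# -> 40,000 GPU-hours = about 16.7 days on 100 GPUs
```

Under these assumptions, even a hundred-GPU cluster is occupied for more than two weeks, which is what motivates methods that score candidate architectures without fully training them.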