SCANN: Synthesis of Compact and Accurate Neural Networks


Abstract:

Deep neural networks (DNNs) have become the driving force behind recent artificial intelligence (AI) research. With the help of a vast amount of training data, neural networks can perform better than traditional machine learning algorithms in many applications. An important problem with implementing a neural network is the design of its architecture. Typically, such an architecture is obtained manually by exploring its hyperparameter space and kept fixed during training. This approach is both time-consuming and inefficient. Another issue is that modern neural networks often contain millions of parameters, whereas many applications require small inference models due to imposed resource constraints, such as energy constraints on battery-operated devices. However, efforts to migrate DNNs to such devices typically entail a significant loss of classification accuracy. To address these challenges, we propose a two-step neural network synthesis methodology, called DR+SCANN, that combines two complementary approaches to design compact and accurate DNNs. At the core of our framework is the SCANN methodology, which uses three basic architecture-changing operations, namely, connection growth, neuron growth, and connection pruning, to synthesize feedforward architectures with arbitrary structure. These neural networks are not limited to the multilayer perceptron structure. SCANN encapsulates three synthesis methodologies that apply a repeated grow-and-prune paradigm to three architectural starting points. DR+SCANN combines the SCANN methodology with dataset dimensionality reduction to alleviate the curse of dimensionality. We demonstrate the efficacy of SCANN and DR+SCANN on various image and nonimage datasets. We evaluate SCANN on the MNIST, CIFAR-10, and ImageNet benchmarks. Without any loss in accuracy, SCANN generates a 46.3× smaller network than the LeNet-5 Caffe model. We also compare SCANN-synthesized networks with a state-of-the-art fully connected (FC) feedforward ...
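
The abstract names the three architecture-changing operations but gives no implementation details. As a rough illustration only, the following Python/NumPy sketch applies connection growth, neuron growth, and magnitude-based connection pruning to a masked weight matrix; the gradient-guided growth heuristic, thresholds, and initialization values are assumptions for illustration, not the authors' published procedure.

    # Illustrative sketch (not the authors' code): grow-and-prune operations
    # on a masked weight matrix. Heuristics and constants are assumed.
    import numpy as np

    rng = np.random.default_rng(0)

    def prune_connections(w, mask, threshold=0.05):
        # Connection pruning: deactivate connections with small weight magnitude.
        mask = mask & (np.abs(w) >= threshold)
        return w * mask, mask

    def grow_connections(w, mask, grad, n_new=4):
        # Connection growth: activate inactive connections where the loss
        # gradient is largest (a common grow-and-prune heuristic; assumed here).
        scores = np.abs(grad) * (~mask)
        top = np.argsort(scores, axis=None)[::-1][:n_new]
        idx = np.unravel_index(top, w.shape)
        mask[idx] = True
        w[idx] = 0.01 * np.sign(grad[idx])  # small init along the gradient
        return w, mask

    def grow_neuron(w, mask):
        # Neuron growth: append one neuron with a sparse, random fan-in.
        fan_in = rng.random((1, w.shape[1])) < 0.25
        weights = 0.01 * rng.standard_normal((1, w.shape[1])) * fan_in
        return np.vstack([w, weights]), np.vstack([mask, fan_in])

    # Toy usage: one grow-and-prune step on a 4x6 sparse layer.
    w = 0.1 * rng.standard_normal((4, 6))
    mask = rng.random((4, 6)) < 0.5
    grad = rng.standard_normal((4, 6))  # stand-in for a real training gradient
    w, mask = grow_connections(w * mask, mask, grad)
    w, mask = grow_neuron(w, mask)
    w, mask = prune_connections(w, mask)
    print("active connections:", int(mask.sum()), "layer shape:", w.shape)

In the paper's framework such architecture-changing steps would alternate with weight training; here a random matrix stands in for the gradient signal.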
Page(s): 3012–3025
Date of Publication: 29 September 2021
