Abstract:
This paper proposes a method that combines product quantization and pruning to compress deep neural networks with large model sizes and heavy computational demands. First, pruning is used to remove redundant parameters from the deep neural network, and the pruned network is then fine-tuned to recover accuracy. Next, product quantization is applied to quantize the network parameters to 8 bits, reducing storage overhead so that the deep neural network can be deployed on embedded devices. On classification tasks with the MNIST and CIFAR-10 datasets, network models such as LeNet-5, AlexNet, and ResNet are compressed by factors of 23 to 38 with minimal loss of accuracy.
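
The following is a minimal NumPy sketch of the two-stage pipeline the abstract describes: magnitude pruning of a weight matrix, followed by product quantization into 8-bit codes (256 centroids per sub-vector). The function names, the sparsity level, the number of sub-vectors, and the plain Lloyd-iteration k-means are illustrative assumptions, not the authors' implementation.

import numpy as np

def magnitude_prune(weights, sparsity=0.6):
    """Zero out the smallest-magnitude entries (illustrative pruning step)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

def product_quantize(weights, num_subvectors=8, num_centroids=256):
    """Split each row into sub-vectors and quantize each sub-vector against
    a learned codebook; 256 centroids yield one 8-bit code per sub-vector."""
    rows, cols = weights.shape
    assert cols % num_subvectors == 0
    sub_dim = cols // num_subvectors
    codes = np.empty((rows, num_subvectors), dtype=np.uint8)
    codebooks = []
    rng = np.random.default_rng(0)
    for s in range(num_subvectors):
        block = weights[:, s * sub_dim:(s + 1) * sub_dim]
        # Initialize centroids from random sub-vectors, then run a few
        # Lloyd (k-means) iterations to fit the codebook.
        centroids = block[rng.choice(rows, size=min(num_centroids, rows), replace=False)]
        for _ in range(10):
            dists = np.linalg.norm(block[:, None, :] - centroids[None, :, :], axis=2)
            assign = dists.argmin(axis=1)
            for k in range(len(centroids)):
                members = block[assign == k]
                if len(members):
                    centroids[k] = members.mean(axis=0)
        codes[:, s] = assign.astype(np.uint8)
        codebooks.append(centroids)
    return codes, codebooks

def reconstruct(codes, codebooks):
    """Rebuild an approximate weight matrix from codes and codebooks."""
    return np.hstack([cb[codes[:, s]] for s, cb in enumerate(codebooks)])

# Example usage: prune, then quantize a 512x512 layer; storage drops from
# 32 bits per weight to one 8-bit code per sub-vector plus small codebooks.
W = np.random.randn(512, 512).astype(np.float32)
W_pruned, mask = magnitude_prune(W)
codes, books = product_quantize(W_pruned)
W_hat = reconstruct(codes, books)

In practice the fine-tuning step the abstract mentions would retrain the network (with the pruning mask fixed) between these two stages to recover the accuracy lost to pruning.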
Published in: 2020 39th Chinese Control Conference (CCC)
Date of Conference: 27-29 July 2020
Date Added to IEEE Xplore: 09 September 2020