Deep Neural Network Compression Method Based on Product Quantization


Abstract:

In this paper, a method combining product quantization and pruning is proposed to compress deep neural networks that have large model sizes and heavy computational costs. First, pruning is used to remove redundant parameters from the deep neural network, and the pruned network is then retrained for fine-tuning. Next, product quantization is applied to quantize the network parameters to 8 bits, which reduces the storage overhead so that the deep neural network can be deployed on embedded devices. For classification tasks on the MNIST and CIFAR-10 datasets, network models such as LeNet-5, AlexNet, and ResNet are compressed by factors of 23 to 38 with minimal loss of accuracy.
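The two-stage pipeline the abstract outlines (magnitude pruning, then product quantization of the weights to 8-bit codes) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the 50% sparsity level, and the subvector size are assumptions. Using 256 centroids per subspace means each subvector is stored as a single 8-bit index, matching the 8-bit quantization described above.

# Minimal sketch (assumed, not from the paper) of pruning followed by
# product quantization of a weight matrix.
import numpy as np
from sklearn.cluster import KMeans

def prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights; sparsity is an assumed setting."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def product_quantize(weights, subvector_dim=4, n_centroids=256):
    """Split each row into subvectors and run k-means per subspace.

    n_centroids=256 lets each subvector be replaced by one 8-bit index.
    """
    rows, cols = weights.shape
    assert cols % subvector_dim == 0
    codebooks, codes = [], []
    for s in range(cols // subvector_dim):
        block = weights[:, s * subvector_dim:(s + 1) * subvector_dim]
        km = KMeans(n_clusters=n_centroids, n_init=4, random_state=0).fit(block)
        codebooks.append(km.cluster_centers_)      # float centroids per subspace
        codes.append(km.labels_.astype(np.uint8))  # 8-bit index per subvector
    return codebooks, np.stack(codes, axis=1)

def reconstruct(codebooks, codes):
    """Decode the 8-bit indices back to an approximate weight matrix."""
    return np.hstack([cb[codes[:, s]] for s, cb in enumerate(codebooks)])

if __name__ == "__main__":
    W = np.random.randn(1024, 64).astype(np.float32)
    W_pruned = prune(W, sparsity=0.5)
    books, idx = product_quantize(W_pruned)
    W_hat = reconstruct(books, idx)
    print("reconstruction MSE:", float(np.mean((W_pruned - W_hat) ** 2)))

In practice the pruned network would be fine-tuned before quantization, as the abstract states; that training step is omitted here for brevity.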
Date of Conference: 27-29 July 2020
Date Added to IEEE Xplore: 09 September 2020
Conference Location: Shenyang, China

