Abstract:
Convolutional neural networks use parameter sharing to greatly reduce the number of weights. However, multi-channel feature maps greatly increase the amount of computation and, at the same time, make it difficult to reduce the number of weights further. The Inception module addresses this problem by using global average pooling and the network-in-network (NIN) architecture. We propose a deep neural network that combines the Inception module with an LSTM module: the Inception module reduces the computational complexity of the convolutional network, while the LSTM preserves the internal temporal characteristics of time-series data. In addition, a sliding-window method provides simple augmentation of the training data. The method was tested on the UCR time series classification archive and achieved a lower error rate than the baseline model.
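The abstract includes no code; as a rough illustration only, the following PyTorch sketch shows the kind of pipeline it describes: an Inception-style block of parallel 1-D convolutions feeding an LSTM classifier, plus a sliding-window helper for simple augmentation. All names and hyperparameters here (InceptionBlock1d, InceptionLSTM, branch_ch, window, stride) are hypothetical choices for illustration, not taken from the paper.

    # Hypothetical sketch, not the authors' implementation.
    import torch
    import torch.nn as nn

    class InceptionBlock1d(nn.Module):
        """Parallel 1-D convolutions with different kernel sizes,
        concatenated along the channel axis (Inception-style)."""
        def __init__(self, in_ch, branch_ch=16):
            super().__init__()
            self.branches = nn.ModuleList([
                nn.Conv1d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
                for k in (1, 3, 5)  # odd kernels with same-length padding
            ])

        def forward(self, x):  # x: (batch, channels, length)
            return torch.cat([b(x) for b in self.branches], dim=1)

    class InceptionLSTM(nn.Module):
        """Inception-style convolutional features followed by an LSTM,
        as a plausible reading of the architecture in the abstract."""
        def __init__(self, num_classes, branch_ch=16, hidden=64):
            super().__init__()
            self.inception = InceptionBlock1d(1, branch_ch)
            self.lstm = nn.LSTM(3 * branch_ch, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_classes)

        def forward(self, x):  # x: (batch, 1, length), univariate series
            feats = self.inception(x)                  # (batch, 3*branch_ch, length)
            out, _ = self.lstm(feats.transpose(1, 2))  # (batch, length, hidden)
            return self.fc(out[:, -1])                 # classify from last step

    def sliding_windows(series, window, stride):
        """Cut one labelled series into overlapping windows; each window
        inherits the original label (simple data augmentation)."""
        return [series[i:i + window]
                for i in range(0, len(series) - window + 1, stride)]

    if __name__ == "__main__":
        model = InceptionLSTM(num_classes=5)
        x = torch.randn(8, 1, 128)  # batch of 8 univariate series
        print(model(x).shape)       # torch.Size([8, 5])

Classifying from the LSTM's last time step and using kernel sizes (1, 3, 5) are design choices assumed here; the paper may differ in both the branch structure and how LSTM outputs are pooled.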
Published in: 2019 IEEE International Conference on Integrated Circuits, Technologies and Applications (ICTA)
Date of Conference: 13-15 November 2019
Date Added to IEEE Xplore: 28 February 2020