Abstract:
Gradient descent optimization algorithms are central to deep learning. To obtain a more stable convergence process and to reduce overfitting over multiple epochs, we propose an improved Adagrad gradient descent optimization algorithm in this paper. Our approach is evaluated on both the Reuters dataset and the IMDB dataset against a number of standard gradient descent optimization algorithms. The results show that our approach converges more stably and reduces overfitting over multiple epochs.
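For reference, below is a minimal sketch of the standard Adagrad update that the paper builds on; the abstract does not describe the authors' specific modification, so this shows only the baseline method. The function name adagrad_update and the toy quadratic objective are illustrative assumptions, not the authors' code.

import numpy as np

def adagrad_update(params, grads, accum, lr=0.01, eps=1e-8):
    """One step of baseline Adagrad (the paper's improved variant
    is not specified in this abstract).

    params, grads, accum are same-shape NumPy arrays; accum holds the
    running sum of squared gradients and is updated in place.
    """
    accum += grads ** 2                             # accumulate squared gradients
    params -= lr * grads / (np.sqrt(accum) + eps)   # per-parameter adaptive step
    return params, accum

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
acc = np.zeros_like(w)
for _ in range(100):
    w, acc = adagrad_update(w, 2 * w, acc, lr=0.5)
print(w)  # approaches [0, 0]

Because accum grows monotonically, the effective step size shrinks over training, which is the property the paper's improvement presumably targets when it aims at more stable convergence.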
Published in: 2018 Chinese Automation Congress (CAC)
Date of Conference: 30 November 2018 - 02 December 2018
Date Added to IEEE Xplore: 24 January 2019