
On the Generalization Ability of Online Gradient Descent Algorithm Under the Quadratic Growth Condition


Abstract:

Online learning has been successfully applied to various machine learning problems. Conventional analyses of online learning achieve a sharp generalization bound under a strong convexity assumption. In this paper, we study the generalization ability of the classic online gradient descent algorithm under the quadratic growth condition (QGC), a strictly weaker condition than strong convexity. Under some mild assumptions, we prove that the excess risk converges no worse than O(log T / T) when the data are independently and identically distributed (i.i.d.). When the data are generated by a φ-mixing process, we obtain the excess risk bound O(log T / T + φ(τ)), where φ(τ) is the mixing coefficient capturing the non-i.i.d. nature of the data. Our key technique combines the QGC with martingale concentration inequalities. Our results indicate that strong convexity is not necessary to achieve the sharp O(log T / T) convergence rate in online learning. We verify our theory on both synthetic and real-world data.
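For context, the quadratic growth condition is commonly stated as F(w) − F(w*) ≥ (μ/2)·dist(w, W*)², where W* is the set of minimizers; unlike strong convexity, it constrains F only through its growth away from W*. The sketch below is a minimal, hypothetical illustration of online gradient descent with the classic step size η_t = 1/(μt) that underlies O(log T / T) rates in such settings; the squared loss, the synthetic data model, and the parameter mu are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: online gradient descent (OGD) on a stream of squared
# losses, with step size eta_t = 1/(mu * t). The loss, data model, and mu are
# illustrative assumptions, not the paper's exact setup.
import numpy as np

def ogd(stream, dim, mu=1.0):
    """Run OGD over an iterable of (x, y) pairs; return the averaged iterate."""
    w = np.zeros(dim)
    w_sum = np.zeros(dim)
    t = 0
    for t, (x, y) in enumerate(stream, start=1):
        grad = (w @ x - y) * x    # gradient of the squared loss 0.5 * (w.x - y)^2
        w -= grad / (mu * t)      # classic step size eta_t = 1 / (mu * t)
        w_sum += w
    return w_sum / max(t, 1)      # iterate averaging, one standard online-to-batch conversion

# Toy i.i.d. stream: y = <w_star, x> + Gaussian noise (all hypothetical).
rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0, 0.5])
stream = ((x, x @ w_star + 0.1 * rng.standard_normal())
          for x in rng.standard_normal((10_000, 3)))
print(ogd(stream, dim=3))  # the averaged iterate should land close to w_star
```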
Published in: IEEE Transactions on Neural Networks and Learning Systems (Volume 29, Issue 10, October 2018)
Pages: 5008–5019
Date of Publication: 17 January 2018

PubMed ID: 29994750

