Abstract:
Online learning has been successfully applied to a variety of machine learning problems. Conventional analyses of online learning achieve a sharp generalization bound under a strong convexity assumption. In this paper, we study the generalization ability of the classic online gradient descent algorithm under the quadratic growth condition (QGC), a strictly weaker condition than strong convexity. Under some mild assumptions, we prove that the excess risk converges no worse than O(log T/T) when the data are independently and identically distributed (i.i.d.). When the data are generated by a φ-mixing process, we obtain the excess risk bound O(log T/T + φ(τ)), where φ(τ) is the mixing coefficient capturing the non-i.i.d. nature of the data. Our key technique combines the QGC with martingale concentration inequalities. Our results indicate that strong convexity is not necessary to achieve the sharp O(log T/T) convergence rate in online learning. We verify our theory on both synthetic and real-world data.
Published in: IEEE Transactions on Neural Networks and Learning Systems (Volume: 29, Issue: 10, October 2018)
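For intuition, here is a minimal Python sketch of the classic online gradient descent algorithm the abstract refers to, using the step size η_t = 1/(μt) and iterate averaging that are standard for obtaining O(log T/T) rates. The squared loss, the i.i.d. data stream, and the growth parameter μ are illustrative assumptions for this sketch, not the paper's analysis or experimental setup.

import numpy as np

# Illustrative online gradient descent (OGD) with eta_t = 1/(mu * t).
# mu plays the role of the quadratic growth parameter (assumed here);
# the loss and data stream are hypothetical stand-ins.

rng = np.random.default_rng(0)
d, T, mu = 5, 1000, 1.0           # dimension, horizon, growth parameter (assumed)
w_star = rng.normal(size=d)       # hypothetical ground-truth minimizer

w = np.zeros(d)
iterates = []
for t in range(1, T + 1):
    x = rng.normal(size=d)                  # i.i.d. feature vector (illustrative)
    y = x @ w_star + 0.1 * rng.normal()     # noisy linear label
    grad = (w @ x - y) * x                  # gradient of squared loss at (x, y)
    w = w - grad / (mu * t)                 # OGD step with eta_t = 1/(mu * t)
    iterates.append(w.copy())

w_bar = np.mean(iterates, axis=0)   # online-to-batch averaging of the iterates
print(f"parameter error of averaged iterate: {np.linalg.norm(w_bar - w_star):.4f}")

The averaged iterate w_bar is the usual online-to-batch conversion whose excess risk the paper bounds; under the QGC this sketch's 1/(μt) schedule is the kind of setting where the O(log T/T) rate applies.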