
Computational Convergence Analysis of Distributed Gradient Tracking for Smooth Convex Optimization Using Dissipativity Theory


Abstract:

We present a computational analysis that establishes the O(1/K) convergence of the distributed gradient tracking method when the objective function is smooth and convex but not strongly convex. The analysis is inspired by recent work on applying dissipativity theory to the analysis of centralized optimization algorithms, in which convergence is proved by searching for a numerical certificate consisting of a storage function and a supply rate. We derive a base supply rate that can be used to analyze distributed optimization with non-strongly convex objective functions. The base supply rate is then used to create a class of supply rates by combining with integral quadratic constraints. Provided that the class of supply rates is rich enough, a numerical certificate of convergence can be automatically generated following a standard procedure that involves solving a linear matrix inequality. Our computational analysis is found capable of certifying convergence under a broader range of step sizes than what is given by the original analytic result.
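The certificate-search idea described in the abstract — numerically finding a storage function whose decrease along the algorithm's trajectories certifies convergence — can be illustrated by a greatly simplified, hypothetical example that is not taken from the paper. Below, a Schur-stable matrix A stands in for an algorithm's closed-loop dynamics, and a quadratic storage function V(x) = xᵀPx is computed by solving the discrete Lyapunov equation AᵀPA − P = −Q (a special case of the linear matrix inequalities used in dissipativity analysis):

```python
import numpy as np

# Hypothetical stand-in for an algorithm's closed-loop dynamics:
# a Schur-stable matrix A (spectral radius < 1).
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
n = A.shape[0]
Q = np.eye(n)

# Solve the discrete Lyapunov equation A' P A - P = -Q by
# vectorization: (kron(A', A') - I) vec(P) = -vec(Q).
M = np.kron(A.T, A.T) - np.eye(n * n)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# P is the numerical certificate: V(x) = x' P x is positive
# definite and strictly decreases along trajectories x_{k+1} = A x_k.
print("P positive definite:", bool(np.linalg.eigvalsh((P + P.T) / 2).min() > 0))
print("V decreases:", bool(np.linalg.eigvalsh(A.T @ P @ A - P).max() < 0))
```

In the paper's setting the certificate is richer — the supply rate built from integral quadratic constraints enters the inequality alongside the storage function, and feasibility is checked with a semidefinite-programming solver rather than a direct linear solve — but the structure is the same: a feasible solution of the matrix inequality is itself the proof of convergence.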
Date of Conference: 10-12 July 2019
Date Added to IEEE Xplore: 29 August 2019
Conference Location: Philadelphia, PA, USA

I. Introduction

Distributed optimization algorithms have a wide range of applications in engineering [9], [10], [14] and statistics [2] when the scale of the optimization problem becomes too large to be solved centrally. A fundamental issue in the analysis of optimization algorithms is convergence, in particular convergence rate, which is a measure of how quickly an algorithm is able to locate an optimal solution. Traditional analysis of convergence rates relies on nonconstructive analytic proof techniques, which are often devised on an algorithm-by-algorithm basis and therefore do not readily generalize to new algorithms. As a result, one often needs to start the analysis from scratch when new requirements such as robustness, security, and communication constraints are introduced to existing algorithms.

