
Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks

Author:
Necoara, I.; Department of Automatic Control and Systems Engineering, University Politehnica of Bucharest, Bucharest, Romania

In this paper, we develop randomized block-coordinate descent methods for minimizing multi-agent convex optimization problems with linearly coupled constraints over networks and prove that they obtain in expectation an ε-accurate solution in at most O(1/(λ₂(Q)ε)) iterations, where λ₂(Q) is the second smallest eigenvalue of a matrix Q defined in terms of the selection probabilities and the number of blocks. However, the computational cost per iteration of our methods is much lower than that of methods based on full gradient information, and each iteration can be computed in a completely distributed way. We focus on how to choose the probabilities so that these randomized algorithms converge as fast as possible, which leads to solving a sparse SDP. An analysis of the convergence rate in probability is also provided. For strongly convex functions, our distributed algorithms converge linearly. We also extend the main algorithm to a more general random coordinate descent method and to problems with more general linearly coupled constraints. Preliminary numerical tests confirm that on very large optimization problems our method is much more numerically efficient than methods based on full gradient information.
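To illustrate the kind of distributed update the abstract describes, the following is a minimal Python sketch (not the authors' implementation) of a randomized two-block coordinate descent step for the separable problem min Σ_i f_i(x_i) subject to a single coupling constraint Σ_i x_i = const. It assumes scalar agent variables, known per-agent Lipschitz constants, and uniform pair-selection probabilities; the paper instead optimizes these probabilities via a sparse SDP. All function and variable names are hypothetical.

import numpy as np

def random_pair_coordinate_descent(grads, lips, x0, n_iters=10000, rng=None):
    # Sketch: at each iteration pick a random pair of agents (i, j) and move
    # their variables in opposite directions, so the coupling constraint
    # sum_i x_i = const is preserved exactly.
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(n_iters):
        # uniform pair selection (assumption); the paper tunes these probabilities
        i, j = rng.choice(n, size=2, replace=False)
        # gradient step on the pair along the constraint-preserving direction,
        # with step size 1/(L_i + L_j)
        d = (grads[j](x[j]) - grads[i](x[i])) / (lips[i] + lips[j])
        x[i] += d
        x[j] -= d
    return x

# Usage (toy example): minimize sum_i 0.5*a_i*(x_i - c_i)^2 subject to sum_i x_i = 0
a = np.array([1.0, 2.0, 4.0, 8.0])
c = np.array([1.0, -2.0, 0.5, 0.5])
grads = [lambda xi, ai=ai, ci=ci: ai * (xi - ci) for ai, ci in zip(a, c)]
x_star = random_pair_coordinate_descent(grads, list(a), np.zeros(4))
print(x_star, x_star.sum())  # the sum stays (numerically) at 0

Each iteration touches only two agents' variables and gradients, which is what makes the per-iteration cost low and the scheme fully distributed over the network.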

Published in:

IEEE Transactions on Automatic Control (Volume: 58, Issue: 8)