Abstract:
Mirror descent (MD) is a powerful first-order optimization technique that subsumes several optimization algorithms, including gradient descent (GD). In this work, we leverage quadratic constraints and Lyapunov functions to analyze the stability and characterize the convergence rate of the MD algorithm, as well as its distributed variant, using semidefinite programming (SDP). For both algorithms, we consider both the strongly convex and the non-strongly convex settings. For centralized MD and strongly convex problems, we construct an SDP that certifies exponential convergence rates and derive a closed-form feasible solution to the SDP that recovers the optimal rate of GD as a special case. We complement our analysis by providing an explicit O(1/k) convergence rate for convex problems. Next, we analyze the convergence of distributed MD and characterize the rate numerically using an SDP whose dimensions are independent of the network size. To the best of our knowledge, the numerical rate of distributed MD has not been previously reported in the literature. We further prove an O(1/k) convergence rate for distributed MD in the convex setting. Our numerical experiments on strongly convex problems indicate that our framework certifies convergence rates superior to the existing rates for distributed GD.
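For reference, a standard form of the (centralized) mirror descent update is sketched below in generic notation (mirror map $\psi$, step size $\eta_k$, Bregman divergence $D_\psi$); this is the textbook form of the iteration, not necessarily the exact formulation or notation used in the paper:
\[
  x_{k+1} = \arg\min_{x \in \mathcal{X}} \Big\{ \eta_k \,\langle \nabla f(x_k), x \rangle + D_\psi(x, x_k) \Big\},
  \qquad
  D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle.
\]
With the Euclidean mirror map $\psi(x) = \tfrac{1}{2}\|x\|_2^2$, the Bregman divergence reduces to $\tfrac{1}{2}\|x - x_k\|_2^2$ and the update becomes projected gradient descent, $x_{k+1} = \Pi_{\mathcal{X}}\big(x_k - \eta_k \nabla f(x_k)\big)$, which is the sense in which MD subsumes GD as a special case.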
Published in: IEEE Transactions on Automatic Control (Volume 68, Issue 5, May 2023)