Abstract:
Online optimization has recently opened avenues to study optimal control for time-varying cost functions that are unknown in advance. Inspired by this line of research, we study the distributed online linear quadratic regulator (LQR) problem for linear time-invariant (LTI) systems with unknown dynamics. We consider a multiagent network in which each agent is modeled as an LTI system. The network incurs a global time-varying quadratic cost, which may evolve adversarially and is only partially observed by each agent sequentially. The goal of the network is to collectively 1) estimate the unknown dynamics and 2) compute local control sequences competitive with the best centralized policy in hindsight, which minimizes the sum of network costs over time. This problem is formulated as regret minimization. We propose a distributed variant of the online LQR algorithm, in which agents compute their system estimates during an exploration stage. Each agent then applies distributed online gradient descent on a semidefinite program whose feasible set is based on that agent's system estimate. We prove that, with high probability, the regret of our proposed algorithm scales as $O(T^{2/3}\log T)$, which also implies that all agents reach consensus over time. We also provide simulation results verifying our theoretical guarantees.
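For concreteness, a minimal sketch of the regret formulation described above, under assumed notation (the symbols here are illustrative, not taken verbatim from the paper): with $m$ agents, horizon $T$, local states $x_t^i$ and inputs $u_t^i$, local quadratic costs $c_t^i$, and $\mathcal{K}$ a comparator class of stabilizing centralized policies,

$$\mathrm{Regret}(T) \;=\; \sum_{t=1}^{T}\sum_{i=1}^{m} c_t^i\big(x_t^i, u_t^i\big) \;-\; \min_{K \in \mathcal{K}} \sum_{t=1}^{T}\sum_{i=1}^{m} c_t^i\big(x_t^i(K), u_t^i(K)\big).$$

Likewise, the distributed online gradient descent step can be sketched as a standard consensus-plus-projected-gradient update,

$$y_{t+1}^i \;=\; \Pi_{\mathcal{S}_i}\!\Big(\textstyle\sum_{j=1}^{m} W_{ij}\, y_t^j \;-\; \eta\, \nabla f_t^i\big(y_t^i\big)\Big),$$

where $W$ is a doubly stochastic mixing matrix, $\eta$ a step size, $f_t^i$ agent $i$'s partially observed cost at round $t$, and $\Pi_{\mathcal{S}_i}$ the projection onto the semidefinite feasible set $\mathcal{S}_i$ constructed from agent $i$'s system estimate.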
Published in: IEEE Transactions on Automatic Control (Volume: 69, Issue: 1, January 2024)