We consider a convex optimization problem for non-hierarchical agent networks in which each agent has access to a local or private time-varying function, and the network-wide objective is to find a time-invariant minimizer of the sum of these functions, provided that such a minimizer exists. Problems of this type are common in dynamic systems where the objective function is time-varying because of, for instance, its dependence on measurements that continuously arrive at each agent. A typical outer-loop iteration for problems of this type consists of a local optimization step based on information provided by neighboring agents, followed by a consensus step in which the agents exchange and fuse their local estimates. A great deal of research effort has been directed towards developing and better understanding such algorithms, which find applications in areas as diverse as cognitive radio networks, distributed acoustic source localization, coordination of unmanned vehicles, and environmental modeling. In contrast to existing work, which considers either dynamic systems or noisy links (but not both jointly), in this study we devise and analyze a novel distributed online algorithm for dynamic optimization problems in noisy communication environments. The main result of the study establishes sufficient conditions for almost sure convergence of the algorithm as the number of iterations tends to infinity. The algorithm is applicable to a wide range of distributed optimization problems with time-varying cost functions and consensus updates corrupted by additive noise. Our results therefore extend previous work to include recently proposed schemes that merge the processes of computation and data transmission over noisy wireless networks for fast and efficient consensus protocols. As a concrete application, we show how to apply our general technique to the problem of distributed detection with adaptive filters.
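To make the iteration pattern concrete, the following is a minimal simulation sketch of the "local optimization step + noisy consensus step" structure described above. Everything in it is an illustrative assumption rather than the algorithm analyzed here: a toy instance with four agents on a ring, time-varying quadratic local costs f_{i,t}(x) = (x − a_i − e_{i,t})² whose sum has the time-invariant minimizer mean(a), additive Gaussian noise on every received neighbor value, and diminishing gains (1/t for the gradient step, 1/t^0.6 for the consensus step) in the spirit of consensus-plus-innovations schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance (illustrative, not the paper's setup):
# 4 agents on a ring; agent i observes a_i through noisy measurements,
# so its local cost at time t is f_{i,t}(x) = (x - a_i - e_{i,t})^2.
# The time-invariant minimizer of the sum is mean(a) = 4.0.
a = np.array([1.0, 3.0, 5.0, 7.0])
n = len(a)

# Ring-graph neighbor weights (each agent talks to its two neighbors).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 0.5

x = np.zeros(n)                      # local estimates, one per agent
for t in range(1, 20001):
    alpha = 1.0 / t                  # gradient (innovation) step size
    beta = 1.0 / t ** 0.6            # consensus gain, decays more slowly
    # Local optimization step: noisy, time-varying gradient of f_{i,t}.
    grad = 2.0 * (x - (a + 0.1 * rng.standard_normal(n)))
    # Consensus step over noisy links: each value received from a
    # neighbor is corrupted by additive noise.
    recv = x[None, :] + 0.1 * rng.standard_normal((n, n))
    consensus = np.sum(A * (recv - x[:, None]), axis=1)
    x = x + beta * consensus - alpha * grad

print(np.round(x, 2))  # estimates cluster near mean(a) = 4.0
```

The diminishing consensus gain is what tames the link noise: because the squared gains are summable, the accumulated noise injected by the consensus updates stays bounded, while the slower decay of beta relative to alpha lets agreement among agents dominate the pull of each agent's private cost.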