Abstract:
We study the problem of distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents aim to collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them. The passive adversary collects information from the corrupted agents and attempts to estimate the private information of the uncorrupted ones. In this scenario, we adopt a correlated perturbation mechanism with a globally balanced property to mask the local information of agents and thereby preserve privacy. This work is the first attempt to integrate such a mechanism into the distributed online (sub)gradient descent algorithm, yielding a new algorithm called privacy-preserving distributed online convex optimization (PP-DOCO). We prove that the designed algorithm provides a statistical privacy guarantee for uncorrupted agents and achieves an expected regret of $\mathcal{O}(\sqrt{K})$ for convex cost functions, where $K$ denotes the time horizon. Furthermore, an improved expected regret of $\mathcal{O}(\log(K))$ is derived for strongly convex cost functions. These bounds match the best regret scalings achieved by state-of-the-art algorithms. A privacy bound is established that quantifies the level of statistical privacy via the Kullback–Leibler divergence (KLD). In addition, we observe that a tradeoff exists between our algorithm's expected regret and its statistical privacy. Finally, the effectiveness of our algorithm is validated by simulation results.
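The abstract's core mechanism can be illustrated in a few lines: each agent adds a perturbation drawn so that the perturbations sum to zero across all agents (the "globally balanced" property), exchanges the perturbed state with neighbors through a doubly stochastic mixing matrix, and then takes a local (sub)gradient step. The sketch below is a hypothetical toy instance of this idea, not the paper's actual PP-DOCO algorithm; the function names (`balanced_noise`, `pp_doco_step`), the quadratic local costs, the complete-graph mixing matrix `W`, and the step-size schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_noise(n_agents, dim, scale, rng):
    """Draw perturbations that sum exactly to zero across agents,
    a toy version of the 'globally balanced' property."""
    z = rng.normal(scale=scale, size=(n_agents, dim))
    return z - z.mean(axis=0)  # each column now sums to 0

def pp_doco_step(x, grads, W, eta, scale, rng):
    """One illustrative round: perturb local states, average with
    neighbors via doubly stochastic W, take a (sub)gradient step."""
    noise = balanced_noise(*x.shape, scale, rng)
    mixed = W @ (x + noise)        # consensus on perturbed states
    return mixed - eta * grads     # local (sub)gradient descent

# Toy usage: 4 agents, agent i holds cost ||x - target_i||^2.
n, d = 4, 3
targets = rng.normal(size=(n, d))
W = np.full((n, n), 1.0 / n)       # complete-graph averaging matrix
x = np.zeros((n, d))
for k in range(1, 200):
    grads = 2 * (x - targets)      # local gradients
    x = pp_doco_step(x, grads, W, eta=1.0 / (2 * k), scale=0.1, rng=rng)
```

Because the noise is balanced, it cancels in the network-wide average, so the perturbation masks individual agents' states without biasing the collective iterate; this is one intuition for why such mechanisms can preserve privacy while retaining the unperturbed algorithm's regret order.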
Published in: IEEE Transactions on Neural Networks and Learning Systems (Early Access)