Distributed Consensus Algorithms in Sensor Networks With Imperfect Communication: Link Failures and Channel Noise

2 Author(s): Kar, S.; Moura, J.M.F. (Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA)

Abstract:

The paper studies average consensus with random topologies (intermittent links) and noisy channels. Consensus with noise in the network links leads to a bias-variance dilemma: running consensus longer reduces the bias of the final average estimate but increases its variance. We present two compromises to this tradeoff: the A-ND algorithm, which modifies conventional consensus by forcing the weights to satisfy a persistence condition (slowly decaying to zero); and the A-NC algorithm, in which the weights are constant but consensus is run for a fixed number of iterations $\hat{\iota}$, then restarted and rerun for a total of $\hat{p}$ runs, with the final states of the $\hat{p}$ runs averaged at the end (Monte Carlo averaging). We use controlled Markov processes and stochastic approximation arguments to prove almost sure convergence of A-ND to a finite consensus limit and to compute explicitly the mean square error (mse), i.e., the variance, of the consensus limit. We show that A-ND represents the best of both worlds, zero bias and low variance, at the cost of a slow convergence rate; rescaling the weights balances the variance versus the rate of bias reduction (convergence rate). In contrast, A-NC, because of its constant weights, converges fast but presents a different bias-variance tradeoff. For the same total number of iterations $\hat{\iota}\hat{p}$, shorter runs (smaller $\hat{\iota}$) lead to higher bias but smaller variance (a larger number $\hat{p}$ of runs to average over). For a static nonrandom network with Gaussian noise, we compute the optimal gain for A-NC to reach, in the shortest number of iterations $\hat{\iota}\hat{p}$ and with high probability $(1-\delta)$, $(\epsilon,\delta)$-consensus ($\epsilon$ residual bias). Our results hold under fairly general assumptions on the random link failures and communication noise.
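Since the abstract describes the two schemes only in prose, the following minimal Python/NumPy sketch illustrates their mechanics. The network model (a 20-node Erdos-Renyi graph), the A-ND weight sequence $\alpha_t = a/(t+1)$, and all numeric parameters are illustrative assumptions for this sketch, not the paper's settings; the paper's analysis covers far more general link-failure and noise models.

# Illustrative simulation of A-ND and A-NC under noisy, failing links.
# All parameters (network size, noise level, failure rate, weights) are
# assumptions for this sketch, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 20                                  # number of sensors (assumed)
x0 = rng.normal(5.0, 2.0, size=N)      # initial sensor readings
target = x0.mean()                      # average the network should agree on

# Fixed underlying connectivity (Erdos-Renyi); links fail per iteration.
adj = rng.random((N, N)) < 0.4
adj = np.triu(adj, 1)
adj = adj | adj.T

def noisy_step(x, noise_std, p_fail):
    """One consensus update: each node sums (neighbor value + channel
    noise - own value) over the links that survived this iteration."""
    live = adj & (rng.random((N, N)) > p_fail)
    live = np.triu(live, 1)
    live = live | live.T
    recv = np.where(live, x[None, :] + noise_std * rng.standard_normal((N, N)), 0.0)
    deg = live.sum(axis=1)
    return recv.sum(axis=1) - deg * x

# A-ND: weights decay slowly to zero (persistence condition:
# sum alpha_t = inf, sum alpha_t^2 < inf), here alpha_t = a / (t + 1).
def a_nd(T=5000, a=0.05, noise_std=0.5, p_fail=0.3):
    x = x0.copy()
    for t in range(T):
        x = x + (a / (t + 1)) * noisy_step(x, noise_std, p_fail)
    return x

# A-NC: constant weight, run for iota_hat iterations, restart for a total
# of p_hat runs, then average the p_hat final states (Monte Carlo averaging).
def a_nc(iota_hat=50, p_hat=100, alpha=0.05, noise_std=0.5, p_fail=0.3):
    finals = [x0.copy() for _ in range(p_hat)]
    for r in range(p_hat):
        for _ in range(iota_hat):
            finals[r] = finals[r] + alpha * noisy_step(finals[r], noise_std, p_fail)
    return np.mean(finals, axis=0)

print("target average:", target)
print("A-ND spread across nodes:", a_nd().std())
print("A-NC mean absolute error:", np.abs(a_nc() - target).mean())

Consistent with the abstract's claims, the decaying A-ND weights drive all node states toward a common limit, while each constant-weight A-NC run retains residual noise that the average over the $\hat{p}$ restarts suppresses.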

Published in:

IEEE Transactions on Signal Processing (Volume: 57, Issue: 1)