
Incremental Robbins-Monro Gradient Algorithm for Regression in Sensor Networks

Authors:

Ram, S. S.; Veeravalli, V. V.; Nedic, A. (Univ. of Illinois at Urbana-Champaign, Urbana, IL)

Abstract:

We consider a network of sensors deployed to sense a spatial field for the purpose of parameter estimation. Each sensor makes a sequence of measurements corrupted by noise. The estimation problem is to determine the value of a parameter that minimizes a cost that is a function of the measurements and the unknown parameter. The cost function can be written as a sum of functions, one associated with each sensor's measurements; such a cost function is of interest in regression. We are interested in solving the resulting optimization problem in a distributed and recursive manner. Towards this end, we combine the incremental gradient approach with the Robbins-Monro stochastic approximation algorithm to develop the incremental Robbins-Monro gradient (IRMG) algorithm. We investigate the convergence of the algorithm under a convexity assumption on the cost function and a stochastic model for the sensor measurements. In particular, we show that if the observations at each sensor are independent and identically distributed, then the IRMG algorithm converges to the optimal solution almost surely as the number of observations goes to infinity. We emphasize that the IRMG algorithm itself requires no information about the stochastic model.
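To make the recursion concrete, the following is a minimal sketch of an incremental Robbins-Monro-style gradient loop for a least-squares regression cost. The field model, feature map, step-size schedule, and all names are illustrative assumptions for this sketch, not the paper's specification; the IRMG algorithm and its convergence conditions are stated in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): a linear-in-parameters
# field model.  Sensor i at known location s_i observes
#     y = phi(s_i)^T theta_true + noise,
# and its local cost is the expected squared error E[(y - phi(s_i)^T theta)^2].
num_sensors = 10
theta_true = np.array([1.0, -2.0, 0.5])
locations = rng.uniform(-1.0, 1.0, size=num_sensors)

def features(s):
    """Regression features at sensor location s (assumed quadratic model)."""
    return np.array([1.0, s, s ** 2])

def noisy_gradient(theta, s):
    """Stochastic gradient of one sensor's squared-error cost, computed
    from a single fresh noisy measurement (the Robbins-Monro ingredient)."""
    phi = features(s)
    y = phi @ theta_true + 0.1 * rng.standard_normal()  # noisy observation
    return 2.0 * (phi @ theta - y) * phi

# Incremental loop in the spirit of IRMG: the estimate is cycled through the
# sensors; each sensor takes one gradient step using only its own latest
# measurement and then passes the estimate on.  Diminishing step sizes
# satisfy sum_k a_k = inf and sum_k a_k^2 < inf, as in Robbins-Monro.
theta = np.zeros(3)
for k in range(1, 2001):
    alpha_k = 1.0 / (k + 10)          # diminishing step size (assumed choice)
    for s in locations:               # one incremental pass over the network
        theta = theta - alpha_k * noisy_gradient(theta, s)

print("final estimate:", theta)       # should approach theta_true
```

The point mirrored here is that each update uses only a fresh local measurement and the current estimate, with no knowledge of the noise distribution.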

Published in:

2nd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2007)

Date of Conference:

12-14 Dec. 2007