Convergence probability bounds for stochastic approximation

1 Author(s)

In certain stochastic-approximation applications, sufficient conditions for mean-square and probability-one convergence are satisfied within some unknown bounded convex set, referred to as a convergence region; globally, the conditions are not satisfied. Important examples are found in decision-directed procedures. If a convergence region were known, a reflecting barrier at its boundary would solve the problem: the estimate would then converge in mean square and with probability one. Since a convergence region may not be known in practice, the possibility of nonconvergence must be accepted. Let A be the event that the estimation sequence never crosses a particular convergence-region boundary. The sequence of estimates conditioned on A converges in mean square and with probability one, because the conditioned sequence is the same as if there were a reflecting barrier at the boundary. Therefore, the unconditional probability of convergence exceeds the probability of the event A. Starting from this principle, a lower bound on the convergence probability is derived in this paper. The results can also be used, when the convergence conditions are satisfied globally, to bound the maximum-error probability distribution. Specific examples are presented.
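The reflecting-barrier idea described above can be sketched as a projected Robbins-Monro iteration: each raw update is projected back onto a convex set, so the estimate can never leave the region where the convergence conditions are assumed to hold. The sketch below is illustrative only and not from the paper; the regression function, noise model, and constraint interval are all assumptions chosen for a runnable example.

```python
import random

def projected_robbins_monro(noisy_f, project, x0, steps=2000):
    """Robbins-Monro root-finding with a projection step acting as a
    "reflecting barrier": after each update, the estimate is mapped back
    into a convex set where convergence conditions are assumed to hold."""
    x = x0
    for n in range(1, steps + 1):
        a_n = 1.0 / n  # classic gains: sum(a_n) diverges, sum(a_n^2) converges
        x = project(x - a_n * noisy_f(x))
    return x

# Hypothetical setup: seek the root of f(x) = x - 2 from noisy
# observations, with estimates confined to the interval [0, 5].
random.seed(0)
noisy_f = lambda x: (x - 2.0) + random.gauss(0.0, 1.0)
clip_to_set = lambda x: min(max(x, 0.0), 5.0)  # projection onto [0, 5]

estimate = projected_robbins_monro(noisy_f, clip_to_set, x0=4.0)
```

With the barrier in place the estimate converges toward the root at 2; without it, an unlucky early step could carry the iterate outside any region where convergence is guaranteed, which is exactly the failure mode the paper's bound on P(A) quantifies.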

Published in:

IEEE Transactions on Information Theory (Volume: 16, Issue: 6)