Learning neural networks with respect to tolerances to weight errors

1 Author(s): Ruzicka, P., Inst. of Comput. & Inf. Sci., Acad. of Sci., Prague, Czech Republic

The problem of learning a neural network's most convenient configuration, i.e., the vector of synaptic weights and thresholds of the formal neurons that make up the network, is treated. Possible errors in realizing the designed configuration precisely, as well as fluctuations of the configuration during the network's operation, are taken into account using the theory of tolerances. A cumulative loss function expressing the loss caused by imprecise learning is introduced, allowing the mathematical formalism of tolerance and sensitivity theory to be applied. Learning is posed as the problem of maximizing the volume of the region of configuration space in which the network exhibits small values of the cumulative loss function. The general task of synthesizing the parameters and their tolerances is shown to be a nonconvex stochastic optimization problem with stochastic constraints, and a stochastic approximation algorithm for solving it is given. Results of teaching a three-layer feedforward network are presented.
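The idea of learning weights that tolerate realization errors can be illustrated with a much-simplified sketch. The paper's actual formulation (maximizing the volume of the low-loss region under stochastic constraints) is not reproduced here; the toy below merely injects uniform weight errors within a tolerance interval at each gradient step, a Monte-Carlo stand-in for minimizing the cumulative loss over the tolerance region. The XOR task, the 2-4-1 architecture, the uniform error model, and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a standard task for a small feedforward network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, X):
    W1, b1, W2, b2 = params
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return sigmoid(h @ W2 + b2), h    # output, hidden activations

def mse(params, X, y):
    p, _ = forward(params, X)
    return float(np.mean((p - y) ** 2))

def perturb(params, tol):
    # Weight errors drawn uniformly from the tolerance interval [-tol, +tol].
    return [w + rng.uniform(-tol, tol, size=w.shape) for w in params]

def expected_loss(params, X, y, tol, samples=32):
    # Monte-Carlo estimate of the cumulative loss over the tolerance region.
    return float(np.mean([mse(perturb(params, tol), X, y)
                          for _ in range(samples)]))

def grads(params, X, y):
    # Backpropagation for the 2-4-1 sigmoid network under squared error.
    W1, b1, W2, b2 = params
    p, h = forward(params, X)
    dp = 2.0 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    return [dW1, db1, dW2, db2]

# Nominal configuration of a three-layer (input-hidden-output) network.
params = [rng.normal(0, 1, (2, 4)), np.zeros(4),
          rng.normal(0, 1, (4, 1)), np.zeros(1)]
tol, lr = 0.05, 2.0

loss_init = expected_loss(params, X, y, tol)
for _ in range(5000):
    # Evaluate the gradient at a randomly perturbed configuration so the
    # nominal weights are pushed toward a region of configuration space
    # that stays good throughout the whole tolerance interval.
    g = grads(perturb(params, tol), X, y)
    params = [w - lr * dw for w, dw in zip(params, g)]
loss_final = expected_loss(params, X, y, tol)

print(f"expected loss before: {loss_init:.3f}, after: {loss_final:.3f}")
```

Averaging over perturbed configurations in this way is only a crude stochastic-approximation stand-in; the paper instead synthesizes the parameters and their tolerances jointly.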

Published in: IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications (Volume 40, Issue 5)