Deterministic design for neural network learning: an approach based on discrepancy

Authors:
Cervellera, C. (Istituto di Studi sui Sistemi Intelligenti per l'Automazione, Genova, Italy); Muselli, M.

The general problem of reconstructing an unknown function from a finite collection of samples is considered, in the case where the position of each input vector in the training set is not fixed beforehand but is part of the learning process. In particular, the consistency of the empirical risk minimization (ERM) principle is analyzed when the points in the input space are generated by a purely deterministic algorithm (deterministic learning). When the output generation is not subject to noise, classical number-theoretic results involving discrepancy and variation make it possible to establish a sufficient condition for the consistency of the ERM principle. In addition, the adoption of low-discrepancy sequences allows a learning rate of O(1/L) to be achieved, L being the size of the training set. An extension to the noisy case is provided, showing that the good properties of deterministic learning are preserved if the level of noise at the output is not high. Simulation results confirm the validity of the proposed approach.
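The sketch below is not taken from the paper; it only illustrates the general setting the abstract describes, under illustrative assumptions: a hand-rolled Halton sequence stands in for the low-discrepancy generator of training inputs, the target function and sample sizes are made up, and ERM under squared loss is solved by least squares over a simple random-feature model (the randomness is confined to the approximating architecture, not to the placement of the input points).

```python
import numpy as np

def halton(n_points, dim):
    """First n_points of a Halton low-discrepancy sequence in [0, 1]^dim."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    assert dim <= len(primes)
    seq = np.zeros((n_points, dim))
    for d in range(dim):
        base = primes[d]
        for i in range(n_points):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:          # radical-inverse expansion of i+1 in the given base
                f /= base
                x += f * (k % base)
                k //= base
            seq[i, d] = x
    return seq

# Deterministic design: training inputs placed by a low-discrepancy sequence,
# not drawn at random (hypothetical sizes, for illustration only).
L, dim = 512, 2
X = halton(L, dim)

# Noiseless outputs from an illustrative target function.
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1])

# ERM with squared loss over a linear-in-parameters model (random Fourier features).
rng = np.random.default_rng(0)
W = rng.normal(scale=3.0, size=(dim, 200))
b = rng.uniform(0, 2 * np.pi, size=200)
Phi = np.cos(X @ W + b)                          # feature map on the training design
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # empirical risk minimization

# Check generalization on a denser deterministic point set.
Xt = halton(4096, dim)
yt = np.sin(2 * np.pi * Xt[:, 0]) * np.cos(2 * np.pi * Xt[:, 1])
pred = np.cos(Xt @ W + b) @ coef
print("mean squared error:", np.mean((pred - yt) ** 2))
```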

Published in:

IEEE Transactions on Neural Networks, vol. 15, no. 3