Learning efficiency of redundant neural networks in Bayesian estimation

Author: S. Watanabe, Precision & Intelligence Lab., Tokyo Inst. of Technol., Yokohama, Japan

This paper proves that the Bayesian stochastic complexity of a layered neural network is asymptotically smaller than that of a regular statistical model if the network contains the true distribution. We consider the case in which a three-layer perceptron with M input units, H hidden units, and N output units is trained to estimate a true distribution represented by a model with H0 hidden units, and we prove that the stochastic complexity is asymptotically smaller than (1/2){H0(M+N) + R} log n, where n is the number of training samples and R is a function of H - H0, M, and N that is far smaller than the number of redundant parameters. Since the generalization error of Bayesian estimation equals the increase of the stochastic complexity, it is smaller than (1/(2n)){H0(M+N) + R} if it has an asymptotic expansion. Based on these results, the difference between layered neural networks and regular statistical models is discussed from the statistical point of view.
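As a rough numerical illustration of the bounds in the abstract, the following Python sketch compares the leading log n coefficient of the stochastic complexity for a regular model (taking the parameter count of the perceptron as H(M+N) weights, ignoring biases) against the layered-network bound (1/2){H0(M+N) + R} log n, and evaluates the corresponding generalization-error bound (1/(2n)){H0(M+N) + R}. The abstract does not give a closed form for R, so the value used here is purely a placeholder assumption.

```python
import math

def stochastic_complexity_bounds(M, H, N, H0, R, n):
    """Compare the abstract's bounds for a three-layer perceptron.

    Returns (regular, layered, gen_error):
      regular   - (d/2) log n with d = H*(M+N), the regular-model rate
      layered   - (1/2){H0(M+N) + R} log n, the layered-network bound
      gen_error - (1/(2n)){H0(M+N) + R}, the generalization-error bound
    """
    # Regular statistical model: coefficient is half the parameter count.
    regular = 0.5 * H * (M + N) * math.log(n)
    # Layered-network bound from the abstract.
    layered = 0.5 * (H0 * (M + N) + R) * math.log(n)
    # Generalization-error bound (increase of stochastic complexity per sample).
    gen_error = (H0 * (M + N) + R) / (2.0 * n)
    return regular, layered, gen_error

# Illustrative numbers only. R = 3 is a placeholder: the abstract states
# only that R is far smaller than the (H - H0)(M + N) redundant parameters.
reg, lay, ge = stochastic_complexity_bounds(M=10, H=8, N=2, H0=3, R=3, n=1000)
```

With these placeholder values the layered-network coefficient is well below the regular-model one, which is the qualitative point of the result: the redundant hidden units do not contribute a full (1/2) log n per parameter to the Bayesian stochastic complexity.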

Published in:

IEEE Transactions on Neural Networks (Volume: 12, Issue: 6)