
Bayesian adaptation of hidden layers in Boolean feedforward neural networks


2 Author(s)
Utschick, W. ; Lehrstuhl für Netzwerktheorie und Schaltungstechnik, Tech. Univ. München, Germany ; Nossek, J.A.

In this paper a statistical point of view on feedforward neural networks is presented. The hidden layer of a multilayer perceptron is identified as representing a mapping of random vectors. With hard-limiter activation functions, the second and all further layers of the multilayer perceptron, including the output layer, represent the mapping of a Boolean function. Boolean-type neural networks are naturally suited to the categorization of input data. Training is carried out exclusively on the first layer of the network, whereas the definition of the Boolean function generally remains a matter of experience or of symmetry considerations. In this work a method is introduced for adapting the Boolean function of the network using statistical knowledge of the internal representation of the input data. Applied to the classification of grey-level bitmaps of handwritten characters, the misclassification rate of the neural network is reduced by approximately 20%.
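The architecture described above can be sketched in code: a hard-limiter first layer maps each input to a Boolean internal code, and the remaining layers are collapsed into a Boolean function stored as a lookup table. The majority-vote adaptation rule, the random first-layer weights, and all names below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_code(X, W, b):
    """Hard-limiter hidden layer: map each input to a Boolean vector."""
    return (X @ W + b > 0).astype(int)

def adapt_boolean_function(codes, labels):
    """Adapt the Boolean function from statistics of the internal
    representation: assign each observed code its majority class
    (an assumed stand-in for the paper's statistical adaptation)."""
    buckets = {}
    for code, y in zip(map(tuple, codes), labels):
        buckets.setdefault(code, []).append(y)
    return {code: max(set(ys), key=ys.count) for code, ys in buckets.items()}

def classify(X, W, b, table, default=0):
    """Classify by looking up each input's Boolean internal code."""
    return np.array([table.get(tuple(c), default)
                     for c in hidden_code(X, W, b)])

# Toy demo: two Gaussian classes, a random (untrained) first layer.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, b = rng.normal(size=(2, 4)), rng.normal(size=4)

table = adapt_boolean_function(hidden_code(X, W, b), y)
acc = (classify(X, W, b, table) == y).mean()
```

Because the Boolean function is a table over observed codes, adapting it only requires counting class frequencies per code, with no gradient training beyond the first layer.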

Published in:

Proceedings of the 13th International Conference on Pattern Recognition, 1996 (Volume 4)

Date of Conference:

25-29 Aug 1996