Minimum description length pruning and maximum mutual information training of adaptive probabilistic neural networks


2 Author(s)
W. Fakhr and M. I. Elmasry, Dept. of Electr. & Comput. Eng., University of Waterloo, Ont., Canada

An approximated version of the minimum description length (MDL) criterion is applied to find optimal-size adaptive probabilistic neural networks (APNNs) by adaptively pruning Gaussian windows from the probabilistic neural network (PNN). The authors discuss and compare both stochastic maximum likelihood (ML) and stochastic maximum mutual information (MMI) training applied to the APNN, for probability density function (PDF) estimation and pattern recognition applications. Results on four benchmark problems show that the APNN performs better than or comparably to the PNN, and that its size is optimal and much smaller than that of the PNN.
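For context, the PNN underlying this work classifies a sample by comparing Parzen-window (Gaussian-kernel) density estimates, one per class, each built from that class's training points. The sketch below illustrates that baseline mechanism only, not the authors' APNN, MDL pruning, or MMI training; the bandwidth `sigma`, the equal-priors assumption, and all function names are illustrative choices.

```python
import numpy as np

def pnn_class_density(x, samples, sigma=0.5):
    """Parzen-window estimate of the class-conditional density at x:
    the average of isotropic Gaussian windows centered on the samples."""
    d = samples.shape[1]
    sq_dists = np.sum((samples - x) ** 2, axis=1)
    norm = (2.0 * np.pi * sigma ** 2) ** (d / 2.0)
    return np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2))) / norm

def pnn_classify(x, class_samples, sigma=0.5):
    """Assign x to the class with the largest estimated density
    (equal class priors assumed for simplicity)."""
    scores = [pnn_class_density(x, s, sigma) for s in class_samples]
    return int(np.argmax(scores))

# Toy two-class example: two well-separated clusters in the plane.
class0 = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])
class1 = np.array([[3.0, 3.0], [3.1, 3.0], [3.0, 3.1]])
label = pnn_classify(np.array([0.05, 0.05]), [class0, class1])
```

Pruning in the paper's sense would remove individual Gaussian windows (rows of `samples`) when, roughly, the MDL cost of encoding them exceeds their contribution to the data likelihood, shrinking the network below the one-window-per-training-point size of the standard PNN.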

Published in:

1993 IEEE International Conference on Neural Networks

Date of Conference: