Information measure of knowledge extracted from neurons as a tool for analyzing Boolean learning in artificial neural networks

Authors:

L. Peh and C. P. Tsang, Department of Computer Science, University of Western Australia, Nedlands, WA, Australia

Neural network research depends on convergence and learning characteristics traditionally derived from error measures. Recent studies have attempted more direct extraction of knowledge from a network, but they require control of the training process. We show how Boolean information may be extracted and measured efficiently from a neuron's internal representation. The information measure is compared with training error by observing twelve-input three-layer networks during multiple training runs. The experiment indicates a natural termination point for training by backpropagation.
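To make the idea of the abstract concrete, the sketch below shows one plausible way to read off the Boolean function a hidden neuron realizes (by enumerating Boolean input patterns and thresholding the neuron's pre-activation) and to score the extracted function against a target with a mutual-information measure in bits. This is an illustrative assumption, not the authors' actual measure or procedure; the neuron model, the threshold, the use of mutual information, and all function names here are hypothetical.

```python
# Illustrative sketch only: the paper's information measure may be defined
# differently. The threshold, the mutual-information scoring, and the toy
# 4-input example below are assumptions made for illustration.
import itertools
import math
import numpy as np


def boolean_function_of_neuron(weights, bias, n_inputs, threshold=0.0):
    """Enumerate all 2^n Boolean inputs and record the neuron's binarized output."""
    patterns = np.array(list(itertools.product([0, 1], repeat=n_inputs)), dtype=float)
    activations = patterns @ weights + bias          # pre-activation for each pattern
    return patterns, (activations > threshold).astype(int)


def mutual_information_bits(extracted, target):
    """Mutual information (in bits) between extracted Boolean outputs and the target."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((extracted == a) & (target == b))
            p_a, p_b = np.mean(extracted == a), np.mean(target == b)
            if p_ab > 0:
                mi += p_ab * math.log2(p_ab / (p_a * p_b))
    return mi


# Toy usage: a 4-input neuron scored against a parity target. (The paper uses
# twelve-input networks; 4 inputs keeps the enumeration small here.)
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), rng.normal()
patterns, extracted = boolean_function_of_neuron(w, b, n_inputs=4)
target = patterns.sum(axis=1).astype(int) % 2
print(f"extracted-vs-target information: {mutual_information_bits(extracted, target):.3f} bits")
```

Tracking such a measure across training runs, alongside the training error, is the kind of comparison the abstract describes.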

Published in:

Proceedings of the 1995 IEEE International Conference on Neural Networks (Volume 1)

Date of Conference:

Nov/Dec 1995