An empirical measure of element contribution in neural networks

2 Author(s)
Mak, B. (Dept. of Comput. Sci. & Inf. Syst., Hong Kong Univ., Hong Kong); Blanning, R.W.

A frequent complaint about neural net models is that they fail to explain their results in any useful way. The problem is not a lack of information, but an abundance of information that is difficult to interpret. Once trained, a neural net will provide a predicted output for a posited input, and it can provide additional information in the form of interelement connection strengths. This latter information is of little use to analysts and managers who wish to interpret the results they have been given. We develop a measure of the relative importance of the various input elements and hidden-layer elements, and we use it to interpret the contribution of these components to the outputs of the neural net.
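The abstract does not reproduce the paper's specific measure, but the general idea of deriving element importance from connection strengths can be illustrated. The sketch below is a hypothetical, Garson-style weight-based score for a single-hidden-layer network, not the authors' measure: each input's importance is its share of the absolute input-to-hidden weights, weighted by each hidden unit's absolute hidden-to-output connection strength.

```python
# Hypothetical sketch of a weight-based element-contribution measure
# (Garson-style); this is an illustration, not the paper's exact method.
import numpy as np

def input_importance(w_ih: np.ndarray, w_ho: np.ndarray) -> np.ndarray:
    """Relative importance of each input element.

    w_ih: (n_inputs, n_hidden) input-to-hidden weight matrix.
    w_ho: (n_hidden,) hidden-to-output weights (single output unit).
    Returns nonnegative importances that sum to 1.
    """
    abs_ih = np.abs(w_ih)
    # For each hidden unit, each input's share of that unit's total weight.
    shares = abs_ih / abs_ih.sum(axis=0, keepdims=True)
    # Weight each hidden unit's shares by its output connection strength.
    contrib = shares * np.abs(w_ho)
    importance = contrib.sum(axis=1)
    return importance / importance.sum()

rng = np.random.default_rng(0)
w_ih = rng.normal(size=(3, 4))   # 3 inputs, 4 hidden units
w_ho = rng.normal(size=4)
print(input_importance(w_ih, w_ho))  # three nonnegative shares summing to 1
```

The same normalize-and-aggregate pattern applies to hidden-layer elements: summing each hidden unit's weighted column gives its contribution to the output.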

Published in:

IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews (Volume 28, Issue 4)