Robustness in neural computation: random graphs and sparsity

Author:
Venkatesh, S. S.; Dept. of Electrical Engineering, University of Pennsylvania, Philadelphia, PA

An attempt is made to mathematically codify the belief that fully interconnected neural networks continue to function efficiently in the presence of component damage. Component damage is introduced in a fully interconnected neural network model of n neurons by randomly deleting the links between neurons. An analysis of the outer-product algorithm for this random graph model of sparse interconnectivity yields the following result: if the probability of losing any given link between two neurons is 1 − p, then the outer-product algorithm can store on the order of pn/log(pn²) stable memories while correcting a linear number of random errors. In particular, the average degree of the interconnectivity graph dictates the memory storage capability, and functional storage of memories as stable states becomes feasible abruptly when the average number of neural interconnections retained by a neuron exceeds the order of log n links (out of a total of n possible links) with other neurons.
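The model described in the abstract can be illustrated with a short sketch: store bipolar memories with the outer-product (Hebbian) rule, sparsify the weight matrix by keeping each symmetric link independently with probability p, and recall by iterating sign-threshold dynamics. The parameter values, the synchronous update schedule, and all variable names below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions): n neurons, m stored memories,
# p = probability that any given link is retained (so 1 - p is lost).
n, m, p = 200, 5, 0.5

# m random bipolar (+/-1) memories.
memories = rng.choice([-1, 1], size=(m, n))

# Outer-product (Hebbian) weight matrix with zero self-connections.
W = memories.T @ memories
np.fill_diagonal(W, 0)

# Random-graph sparsification: retain each symmetric link with probability p.
upper = np.triu(rng.random((n, n)) < p, 1)
mask = upper | upper.T
W = W * mask

def recall(state, steps=50):
    """Iterate sign-threshold dynamics until a fixed point (or step limit)."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Probe a stored memory corrupted in a small fraction of its bits.
probe = memories[0].copy()
flips = rng.choice(n, size=n // 20, replace=False)
probe[flips] *= -1
recovered = recall(probe)
overlap = np.mean(recovered == memories[0])
print(overlap)
```

With m well below the pn/log(pn²) scale and only a few percent of bits flipped, the dynamics typically settle back onto the stored memory; pushing m higher or p lower makes recall fail, which is the capacity trade-off the analysis quantifies.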

Published in:

IEEE Transactions on Information Theory (Volume: 38, Issue: 3)