Push-pull separability objective for supervised layer-wise training of neural networks


L. Szymanski and B. McCane
Department of Computer Science, University of Otago, Dunedin, New Zealand

Deep architecture neural networks have been shown to generalise well for many classification problems; however, beyond the empirical evidence, it is not entirely clear how deep representation benefits these problems. This paper proposes a supervised cost function for an individual layer in a deep architecture classifier that improves data separability. From this measure, a training algorithm for a multi-layer neural network is developed and evaluated against backpropagation and deep belief net learning. The results confirm that the proposed supervised training objective leads to appropriate internal representations with respect to the classification task, especially for datasets where unsupervised pre-conditioning is not effective. Separability of the hidden layers offers new directions and insights in the quest to illuminate the black box model of deep architectures.
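The abstract describes the approach only at a high level, and the paper's exact push-pull objective is not reproduced here. As a rough illustration only, the sketch below pairs a generic contrastive-style separability loss (pull same-class representations together, push different-class representations apart by a margin) with greedy supervised layer-wise training in PyTorch. All names and parameters (push_pull_loss, train_layerwise, margin) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def push_pull_loss(h, y, margin=1.0):
    # Assumed stand-in for the paper's separability objective:
    # pull same-class pairs together, push different-class pairs
    # at least `margin` apart (squared hinge).
    d = torch.cdist(h, h)                              # pairwise distances
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()  # same-class mask
    diff = 1.0 - same
    pull = (same * d.pow(2)).sum() / same.sum().clamp(min=1.0)
    push = (diff * torch.relu(margin - d).pow(2)).sum() / diff.sum().clamp(min=1.0)
    return pull + push

def train_layerwise(layers, x, y, epochs=100, lr=1e-2):
    # Greedy supervised layer-wise training: each layer is optimised
    # against the separability loss on the previous (frozen) layer's
    # outputs, rather than by end-to-end backpropagation.
    for layer in layers:
        opt = torch.optim.SGD(layer.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            push_pull_loss(layer(x), y).backward()
            opt.step()
        x = layer(x).detach()  # next layer trains on frozen outputs
    return x

# Toy usage on random data with two hidden layers.
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,))
layers = [nn.Sequential(nn.Linear(10, 16), nn.Tanh()),
          nn.Sequential(nn.Linear(16, 16), nn.Tanh())]
h = train_layerwise(layers, x, y)  # separability-trained representation

Once the hidden layers are trained this way, a simple classifier fitted on h would complete the pipeline, which is how greedy layer-wise schemes are typically compared against backpropagation and deep belief net learning.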

Published in:

The 2012 International Joint Conference on Neural Networks (IJCNN)

Date of Conference:

10–15 June 2012