A comparison of second-order neural networks to transform-based method for translation- and orientation-invariant object recognition

2 Author(s):
R. Duren (General Dynamics Corp., Fort Worth, TX, USA); B. Peikari

Neural networks can use second-order neurons to obtain invariance to translations in the input pattern. Alternatively, transform methods can be used to obtain translation invariance before classification by a neural network. The authors compare the use of second-order neurons to various translation-invariant transforms. The mapping properties of second-order neurons are compared to those of the general class of fast translation-invariant transforms introduced by Wagh and Kanetkar (1977) and to the power spectra of the Walsh-Hadamard and discrete Fourier transforms. A fast transformation based on the use of higher-order correlations is introduced. Three theorems are proven concerning the ability of various methods to discriminate between similar patterns. Second-order neurons are shown to have several advantages over the transform methods. Experimental results are presented that corroborate the theory.
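The translation-invariance idea behind second-order neurons can be sketched as follows. A second-order neuron computes a response of the form y = f(Σ_ij w_ij x_i x_j); if the weights are constrained so that w_ij depends only on the displacement (j − i) mod n, the response is unchanged under cyclic shifts of the input. With one unit weight per displacement, the features reduce to the circular autocorrelation of the pattern. This is a minimal illustrative sketch, not the authors' exact formulation; the function names are hypothetical.

```python
def second_order_features(x):
    """Second-order features with weights tied by displacement:
    feature[d] = sum_i x[i] * x[(i + d) % n], i.e. the circular
    autocorrelation. Invariant to cyclic translations of x."""
    n = len(x)
    return [sum(x[i] * x[(i + d) % n] for i in range(n)) for d in range(n)]

def shift(x, k):
    """Cyclically translate the pattern x by k positions."""
    return x[k:] + x[:k]

# A pattern and any cyclic translation of it map to the same feature vector:
pattern = [1, 0, 2, 0, 0, 3, 0, 0]
assert second_order_features(pattern) == second_order_features(shift(pattern, 3))
```

Transform methods achieve the same invariance by a fixed preprocessing step (e.g. the power spectrum of the discrete Fourier transform, which is likewise unchanged by cyclic shifts) before the pattern reaches the classifier.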

Published in:

Neural Networks for Signal Processing: Proceedings of the 1991 IEEE Workshop

Date of Conference:

30 Sep-1 Oct 1991