Hierarchical Artificial Neural Networks for Recognizing High Similar Large Data Sets

2 Author(s)
Yen-Ling Lu; Chin-Shyurng Fahn — National Taiwan University of Science and Technology, Taipei

This paper proposes a hierarchical artificial neural network for recognizing highly similar large data sets. Many applications require classifying large data sets with highly similar characteristics, and analyzing and identifying such data is laborious when the methods used rely primarily on visual inspection. In many field applications, data sets are measured and recorded continuously by automatic monitoring equipment, so a large amount of data accumulates and manual inspection has become an unsuitable way to recognize it. The proposed hierarchical neural network integrates self-organizing feature map (SOM) networks and learning vector quantization (LVQ) networks. The SOM networks provide an approximate method for clustering the input vectors in an unsupervised manner; this computation can be viewed as the first stage of the hierarchical network. The second stage consists of LVQ networks, which use a supervised learning technique that exploits class information to improve the quality of the classifier produced in the first stage. The multistage hierarchical network factorizes the overall input space into a number of small groups, each of which requires very little computation. Consequently, the loss in accuracy incurred by the proposed network can be kept small.
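The two-stage idea in the abstract — an unsupervised SOM that quantizes the input space, followed by LVQ refinement using class labels — can be sketched in a few dozen lines. The code below is a minimal illustration of the general SOM + LVQ1 combination, not the authors' specific architecture; all function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr=0.5, seed=0):
    """Stage 1 (unsupervised): fit a 1-D SOM of prototype vectors."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        alpha = lr * (1 - t / epochs)                 # decaying learning rate
        sigma = max(n_units / 2 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += alpha * h[:, None] * (x - w)         # pull neighborhood toward x
    return w

def train_lvq(w, data, labels, epochs=50, lr=0.1):
    """Stage 2 (supervised): refine the SOM prototypes with LVQ1."""
    # Label each prototype by the majority class of the samples it wins.
    assign = np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in data])
    proto_labels = np.array([
        np.bincount(labels[assign == k]).argmax() if np.any(assign == k) else -1
        for k in range(len(w))
    ])
    w = w.copy()
    for t in range(epochs):
        alpha = lr * (1 - t / epochs)
        for x, y in zip(data, labels):
            k = np.argmin(np.linalg.norm(w - x, axis=1))
            step = alpha * (x - w[k])
            w[k] += step if proto_labels[k] == y else -step  # attract / repel
    return w, proto_labels

def classify(w, proto_labels, x):
    """Nearest-prototype classification."""
    return proto_labels[np.argmin(np.linalg.norm(w - x, axis=1))]

# Toy example: two overlapping ("highly similar") 2-D clusters.
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.1, (50, 2))
b = rng.normal([0.4, 0.0], 0.1, (50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)

w = train_som(X)
w, pl = train_lvq(w, X, y)
preds = np.array([classify(w, pl, x) for x in X])
```

Because the LVQ stage only nudges a handful of prototypes rather than processing the full data set at once, each classification reduces to a nearest-prototype lookup, which is the source of the computational saving the abstract describes.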

Published in:

2007 International Conference on Machine Learning and Cybernetics (Volume 4)

Date of Conference:

19-22 Aug. 2007