A multi-sieving neural network architecture that decomposes learning tasks automatically

3 Author(s)
B.-L. Lu, H. Kita, and Y. Nishikawa (Dept. of Electr. Eng., Kyoto Univ., Japan)

This paper presents a multi-sieving network (MSN) architecture and the multi-sieving learning (MSL) algorithm for training it. The basic idea behind the MSN architecture is that patterns are first classified by a rough sieve and then, gradually, by finer ones. An MSN is constructed by adaptively adding sieving modules (SMs) as training progresses. Each SM consists of two different neural networks and a simple logical circuit. The MSL algorithm starts with a single SM and then repeats the following three phases until all training samples are successfully learned: 1) the learning phase, in which the training samples are learned by the current SM; 2) the sieving phase, in which the training samples that have been successfully learned are sifted out of the training set; and 3) the growing phase, in which the current SM is frozen and a new SM is added to learn the remaining training samples. The performance of the MSN architecture is illustrated on two benchmark problems.
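The learn/sieve/grow loop described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: `SievingModule` here is a hypothetical stand-in (a toy learner that memorizes a bounded number of samples), whereas in the paper each SM is built from two neural networks and a logical circuit. Only the control flow of MSL is shown.

```python
class SievingModule:
    """Toy stand-in for a sieving module: memorizes up to `capacity` samples.
    In the paper an SM is two neural networks plus a simple logical circuit;
    any learner that can report which samples it got right fits this loop."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.learned = {}

    def fit(self, samples):
        # Learning phase: this toy module can only absorb `capacity` samples.
        for x, y in samples[: self.capacity]:
            self.learned[x] = y

    def classifies_correctly(self, sample):
        x, y = sample
        return self.learned.get(x) == y


def multi_sieving_learning(samples, module_capacity=2):
    """Repeat learning -> sieving -> growing until every sample is learned."""
    modules = []
    remaining = list(samples)
    while remaining:
        sm = SievingModule(module_capacity)  # growing phase: add a fresh SM
        sm.fit(remaining)                    # learning phase
        # Sieving phase: sift out the samples the current SM learned.
        remaining = [s for s in remaining if not sm.classifies_correctly(s)]
        modules.append(sm)                   # freeze the current SM
    return modules


# Five toy (input, label) pairs; with capacity 2 the cascade grows to 3 SMs.
samples = [(i, i % 2) for i in range(5)]
cascade = multi_sieving_learning(samples, module_capacity=2)
print(len(cascade))  # -> 3
```

Note how the sieve, not a fixed schedule, determines the network's depth: a new module is grown only while unlearned samples remain, mirroring the adaptive construction described above.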

Published in:

1994 IEEE International Conference on Neural Networks, IEEE World Congress on Computational Intelligence (Volume 3)

Date of Conference:

27 June - 2 July 1994