
Parallel distributed processing with multiple one-output back-propagation neural networks

5 Author(s)
I. Chang Jou (Telecommun. Lab., MOC, Chung-Li, Taiwan); Yuh-Jiuan Tsay; Shuh-Chuan Tsay; Quen-Zong Wu

A novel architecture of neural networks with a distributed structure is presented, in which each class in the application is assigned its own one-output backpropagation subnetwork. This one-net-one-class architecture overcomes a drawback of conventional backpropagation architectures, which must be completely retrained whenever a class is added. The architecture features complete parallel distributed processing: the network is composed of subnetworks, each a single-output two-layer backpropagation network that can be trained and retrieved independently and in parallel. The proposed architecture also enjoys rapid convergence in both the training phase and the retrieving phase.
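The abstract describes the mechanism concretely enough to sketch. Below is a minimal illustrative sketch in Python/NumPy of the one-net-one-class idea: one single-output, two-layer backpropagation subnetwork per class, each trained independently, with retrieval by comparing subnet outputs. The hyperparameters, the squared-error training rule, and the max-output decision rule are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch (assumptions, not the paper's exact method):
# one single-output, two-layer backprop subnetwork per class, each
# trained independently, so subnets can run in parallel and adding
# a class only requires training one new subnet.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OneOutputSubnet:
    """Two-layer (one hidden layer) network with a single sigmoid output."""

    def __init__(self, n_in, n_hidden=8, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        # Squared-error gradient through the sigmoid output.
        delta_out = (y - target) * y * (1 - y)
        delta_hid = delta_out * self.W2 * self.h * (1 - self.h)
        self.W2 -= self.lr * delta_out * self.h
        self.b2 -= self.lr * delta_out
        self.W1 -= self.lr * np.outer(x, delta_hid)
        self.b1 -= self.lr * delta_hid

def train_class_subnet(subnet, X, labels, target_class, epochs=200):
    """Train one subnet to output 1 for its class, 0 otherwise.
    Each call touches only its own subnet's weights, so calls for
    different classes are independent and can run in parallel."""
    for _ in range(epochs):
        for x, label in zip(X, labels):
            subnet.train_step(x, 1.0 if label == target_class else 0.0)

def classify(subnets, x):
    """Retrieve by picking the class whose subnet responds most strongly."""
    return max(subnets, key=lambda c: subnets[c].forward(x))
```

Because each subnet's training updates only its own weights, the per-class training calls could be dispatched to separate processes or machines, which reflects the parallel distributed processing the abstract highlights; likewise, a new class is handled by training one additional subnet rather than retraining the whole network.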

Published in:

1991 IEEE International Symposium on Circuits and Systems

Date of Conference:

11-14 June 1991