Error-bound for the non-exact SVD-based complexity reduction of the generalized type hybrid neural networks with non-singleton consequents

2 Author(s)
Takacs, O. (Dept. of Meas. & Inf. Syst., Budapest Univ. of Technol. & Econ., Hungary); Varkonyi-Koczy, A.R.

The main advantage of neural networks (NNs) is that they can solve complicated problems even when an exact mathematical model is not known. However, there is no universal method for estimating the proper size of a neural network, which usually leads to overestimating the needed size. Therefore, formal methods are needed for the complexity reduction of neural networks. Singular Value Decomposition (SVD) based complexity reduction was first proposed for various fuzzy inference systems. Recently, the method has been extended to generalized neural networks, which makes the use of neural networks possible in time-critical systems. Beyond the elimination of redundancy, the SVD-based approach can achieve further reduction if a certain amount of error can be tolerated. This paper gives an error bound for this further complexity reduction of generalized type hybrid neural networks with non-singleton consequents.
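The core idea behind SVD-based reduction can be illustrated with a generic sketch: decompose a weight matrix, discard small singular values, and use the largest discarded singular value as a bound on the spectral-norm approximation error. This is only a minimal illustration of the underlying principle, not the paper's construction for hybrid neural networks with non-singleton consequents; the function name `svd_reduce` and the tolerance parameter are assumptions for the example.

```python
import numpy as np

def svd_reduce(W, tol):
    """Truncate W to the smallest rank whose spectral-norm error is <= tol.

    The largest discarded singular value bounds ||W - W_r||_2, so keeping
    only singular values above tol guarantees the error stays within tol.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(int(np.sum(s > tol)), 1)  # number of singular values kept
    W_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    return W_r, r

# Build a matrix of known low rank (3) and reduce it.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))
W_r, r = svd_reduce(W, tol=1e-10)
err = np.linalg.norm(W - W_r, 2)
```

In a reduced network, the two thin factors replace the original weight matrix, so the number of multiplications per evaluation drops from proportional to the full matrix size to proportional to its retained rank.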

Published in:

Instrumentation and Measurement Technology Conference, 2001. IMTC 2001. Proceedings of the 18th IEEE (Volume 3)

Date of Conference: