
Computing models based on GRNNs

4 Author(s)
Zhanwei Li (Tianjin Univ., China); Jizhou Sun; Jiawan Zhang; Zzunce Wei

This paper presents a method for dynamically adjusting the kernel width of general regression neural networks (GRNNs). The method chooses the kernel width automatically and flexibly according to the distance between the input vector and the training samples. A second method, increment addition based on GRNNs, is also presented. When a large kernel width is chosen, the computed output smoothly balances the training samples and the input vector. If this output is then used to modulate the input, that is, the increment vector is superimposed on the input vector, the resulting interpolation fits the data very closely. Both methods are applied to image processing.
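For orientation, the sketch below shows a standard GRNN (Nadaraya-Watson) predictor together with an adaptive-width variant in which the kernel width is tied to the distance from the query to its nearest training sample. The proportional rule, the `scale` parameter, and the function names are illustrative assumptions for this sketch only; the paper's exact adjustment formula is not given in the abstract.

```python
import numpy as np

def grnn_predict(x, X_train, Y_train, sigma):
    """Standard GRNN prediction with a fixed kernel width sigma."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to all training samples
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
    return w @ Y_train / np.sum(w)               # weighted average of training outputs

def grnn_predict_adaptive(x, X_train, Y_train, scale=1.0):
    """GRNN prediction with a kernel width chosen per query point.

    Illustrative rule (an assumption, not the paper's formula): sigma is
    proportional to the distance from the query to its nearest training
    sample, so sparsely sampled regions get a wider, smoother kernel.
    """
    d = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
    sigma = max(scale * d.min(), 1e-12)          # guard against a zero-width kernel
    return grnn_predict(x, X_train, Y_train, sigma)

# Toy usage: interpolate a 1-D signal (e.g., one scan line of image intensities)
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(20, 1))
Y_train = np.sin(X_train[:, 0])
x_query = np.array([3.3])
print(grnn_predict_adaptive(x_query, X_train, Y_train, scale=1.0))
```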

Published in:

Canadian Conference on Electrical and Computer Engineering, 2003 (IEEE CCECE 2003), Volume 3

Date of Conference:

4-7 May 2003