In this paper, a method for dynamically adjusting the kernel width of general regression neural networks (GRNNs) is presented. The method chooses the kernel width automatically and flexibly according to the distance between the input vectors and the training samples. A second method, increment addition based on GRNNs, is also presented. When a large kernel width is chosen, the computed output smoothly balances the training samples and the input vectors. If this output is used to modulate the input, that is, if the increment vector is superimposed on the input vector, the interpolation fits the data very closely. Both methods are applied to image processing.
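The core of a GRNN is a Gaussian-weighted (Nadaraya-Watson) average of the training targets, so a distance-adaptive kernel width can be sketched as follows. This is only an illustrative sketch: the abstract does not state the exact width rule, so the choice of sigma as a multiple `k` of the nearest-sample distance is an assumption, as are the function and parameter names.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, k=1.0):
    """GRNN prediction with a distance-adaptive kernel width (sketch).

    The width sigma is derived from the distances between the input
    vector and the training samples; here it is k times the nearest
    distance, an illustrative rule, not necessarily the paper's.
    """
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all samples
    sigma = k * max(d.min(), 1e-12)           # dynamic kernel width
    w = np.exp(-d**2 / (2.0 * sigma**2))      # Gaussian kernel weights
    return w @ y_train / w.sum()              # weighted average of targets
```

With a small `k` the prediction hugs the nearest training sample; with a large `k` (wide kernel) it blends many samples into a smooth output, which is the smoothing behavior the increment-addition method builds on.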