
Integrated approximation and non-convex optimization using radial basis function networks


Authors:

A. Saha (Dept. of Electrical & Computer Engineering, University of Texas at Austin, Austin, TX, USA); D. S. Tang; Chuan-lin Wu

The authors consider the problem of learning inverse maps x′ = f⁻¹(y) within the framework of radial basis function networks. If the forward function y = f(x) is approximated with a radial basis function network, the linear output weights turn out to be a good indicator of the network output: the centers correspond to classes in the input space of the function, and the superposed weights correspond to properties associated with the respective classes. This provides suitable grounds for implementing efficient search strategies for nonconvex optimization, both constrained and unconstrained. The authors highlight the advantages of this scheme over other proposed methods for nonconvex optimization and present experimental results.
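To make the idea concrete, here is a minimal sketch, not the authors' implementation: a Gaussian RBF network is fit to a toy 1-D forward function, and an inverse query uses the linear weights to pick promising centers as starting points for a local search. The specific function f, the fixed evenly spaced centers, the width sigma, and the numerical-gradient refinement are all illustrative assumptions.

```python
import numpy as np

# Toy 1-D forward function to invert. The paper addresses general maps;
# this specific f is an illustrative assumption.
def f(x):
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=200)   # training inputs
Y = f(X)                               # forward targets

centers = np.linspace(-2.0, 2.0, 25)   # fixed, evenly spaced RBF centers
sigma = 0.3                            # assumed common Gaussian width

def design(x):
    # Gaussian activations, one column per center: exp(-(x - c)^2 / (2 sigma^2)).
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * sigma**2))

# Fit the linear output weights by least squares.
weights, *_ = np.linalg.lstsq(design(X), Y, rcond=None)

def rbf(x):
    # Network output for scalar or array input.
    return design(np.atleast_1d(np.asarray(x, dtype=float))) @ weights

# --- Inverse query: find x' with f(x') close to y_target. ---
y_target = 0.8

# Heuristic mirroring the abstract: each linear weight roughly indicates the
# network output near its center, so rank centers by |w_i - y_target| and use
# the closest ones as starting points for local refinement.
order = np.argsort(np.abs(weights - y_target))
starts = centers[order[:3]]

def refine(x0, steps=200, lr=0.05, eps=1e-4):
    # Gradient descent on (rbf(x) - y_target)^2 with a central-difference gradient.
    x = float(x0)
    for _ in range(steps):
        grad = ((rbf(x + eps)[0] - rbf(x - eps)[0]) / (2 * eps)) * 2 * (rbf(x)[0] - y_target)
        x -= lr * grad
    return x

for x0 in starts:
    x_inv = refine(x0)
    print(f"start {x0:+.2f} -> x' = {x_inv:+.3f}, f(x') = {f(x_inv):+.3f}")
```

Because the forward map is nonconvex, several weight-selected centers are refined rather than one; in the setting the abstract describes, this weight-guided choice of starting points is what would save work over blind multistart search.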

Published in:

IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991 (Volume 2)

Date of Conference:

8-14 July 1991