The ensemble approach to neural-network learning and generalization

Authors: B. Igelnik (Case Western Reserve Univ., Cleveland, OH, USA); Y.-H. Pao; S.R. LeClair; Chang Yun Shen

A method is suggested for learning and generalization with a general one-hidden-layer feedforward neural network. The scheme uses a linear combination of heterogeneous nodes with randomly prescribed parameter values. The node parameters are learned through adaptive stochastic optimization using a generalization data set, while the linear coefficients of the combination are learned by linear regression on data from the training set. One node is learned at a time. The method allows the proper number of net nodes to be chosen and is computationally efficient. It was tested on mathematical examples and on real problems from materials science and technology.
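
A minimal sketch in Python of this kind of incremental scheme (not the authors' exact algorithm): node parameters (input weights and biases) are drawn at random, nodes are added one at a time, the output coefficients are refit by ordinary least squares on the training set, and a held-out "generalization" set selects how many nodes to retain. The function name build_incremental_net, the tanh node type, and the refit-all-coefficients step are illustrative assumptions; the adaptive stochastic optimization of node parameters described in the abstract is omitted here.

# Minimal sketch, assuming tanh nodes and a simple validation-based
# node-count selection; not the authors' exact algorithm.
import numpy as np

def build_incremental_net(X_tr, y_tr, X_val, y_val, max_nodes=50, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X_tr.shape[1]
    weights, biases = [], []            # randomly prescribed node parameters
    best_err, best_model = np.inf, None

    for k in range(1, max_nodes + 1):
        # Draw one new node with random input weights and bias.
        weights.append(rng.standard_normal(n_features))
        biases.append(rng.uniform(-1.0, 1.0))
        W = np.vstack(weights)          # (k, n_features)
        b = np.array(biases)            # (k,)

        # Hidden-layer outputs for training and validation data.
        H_tr = np.tanh(X_tr @ W.T + b)
        H_val = np.tanh(X_val @ W.T + b)

        # Linear coefficients via least-squares regression on the training set.
        coef, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)

        # The held-out "generalization" set decides how many nodes to keep.
        err = np.mean((H_val @ coef - y_val) ** 2)
        if err < best_err:
            best_err, best_model = err, (W.copy(), b.copy(), coef.copy())

    W, b, coef = best_model
    return (lambda X: np.tanh(X @ W.T + b) @ coef), best_err

if __name__ == "__main__":
    # Toy usage: fit a noisy 1-D target and report the validation error.
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(400, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(400)
    predict, val_err = build_incremental_net(X[:300], y[:300], X[300:], y[300:])
    print("validation MSE:", val_err)
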

Published in: IEEE Transactions on Neural Networks (Volume: 10, Issue: 1)