Empirical modeling of very large data sets using neural networks

Author: A.J. Owens; DuPont Central Res. & Dev., Wilmington, DE, USA

Building empirical predictive models from very large data sets is challenging: one must contend both with the 'curse of dimensionality' (hundreds or thousands of variables) and with 'too many records' (many thousands of instances). While neural networks [Rumelhart et al., 1986] are widely recognized as universal function approximators [Cybenko, 1989], their training time grows rapidly with the number of variables and instances. I discuss practical methods for overcoming this problem so that neural network models can be developed for very large databases. The methods include: dimensionality reduction with neural net modeling, PLS modeling, and bottleneck neural networks; sub-sampling and re-sampling with many smaller data sets to reduce training time; and a committee of networks to make the final prediction more robust and to estimate its uncertainty.
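The sub-sampling and committee ideas in the abstract can be sketched in a few lines of NumPy. This is not the author's implementation; the network size, sub-sample size, committee size, and synthetic data below are illustrative assumptions. Each committee member is a small one-hidden-layer network trained on a random subset of the records (so each training run is fast); the committee mean serves as the robust prediction and the spread across members as an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "large" data set (sizes are illustrative): 5000 records,
# 10 input variables, target depending on only a few of them.
X = rng.normal(size=(5000, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=5000)

def train_net(X, y, hidden=8, lr=0.05, epochs=200):
    """Train a tiny one-hidden-layer tanh network by batch gradient descent."""
    n, d = X.shape
    W1 = rng.normal(scale=0.5, size=(d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        pred = h @ W2 + b2
        err = pred - y                      # gradient of 0.5*MSE w.r.t. pred (x n)
        gW2 = h.T @ err / n
        gb2 = err.mean()
        gh = np.outer(err, W2) * (1 - h ** 2)   # backprop through tanh
        gW1 = X.T @ gh / n
        gb1 = gh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

# Sub-sampling: each committee member sees a small random subset of records.
committee = []
for _ in range(5):
    idx = rng.choice(len(X), size=500, replace=False)
    committee.append(train_net(X[idx], y[idx]))

# Committee prediction: the mean is the robust estimate; the standard
# deviation across members estimates the prediction's uncertainty.
preds = np.stack([predict(p, X[:100]) for p in committee])
mean, std = preds.mean(axis=0), preds.std(axis=0)
print(mean.shape, std.shape)
```

A bottleneck (autoencoder) network for the dimensionality-reduction step would follow the same pattern, with the inputs themselves as targets and a narrow hidden layer.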

Published in:

Neural Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on (Volume 6)

Date of Conference: