
How limited training data can allow a neural network to outperform an 'optimal' statistical classifier


Author(s):
Niles, L.; Silverman, H.; Tajchman, G.; Bush, M. (Div. of Eng., Brown Univ., Providence, RI, USA)

Abstract:
Experiments comparing artificial neural network (ANN), k-nearest-neighbor (KNN), and Bayes' rule with Gaussian distributions and maximum-likelihood estimation (BGM) classifiers were performed. Classifier error rate was measured as a function of training set size for synthetic data drawn from several different probability distributions. In cases where the true distributions were poorly modeled, ANN was significantly better than BGM. In some cases, ANN was also better than KNN. Similar experiments were performed on a voiced/unvoiced speech classification task. ANN had a lower error rate than KNN or BGM for all training set sizes, although BGM approached the ANN error rate as the training set became larger. It is concluded that there are pattern classification tasks in which an ANN is able to make better use of training data to achieve a lower error rate with a training set of a particular size.
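A minimal sketch of the experimental setup described in the abstract, using scikit-learn stand-ins rather than the authors' implementations: a small multilayer perceptron for the ANN, k-nearest neighbors for KNN, and quadratic discriminant analysis for BGM (Bayes' rule with one maximum-likelihood Gaussian per class). The synthetic distribution, network size, choice of k, and training set sizes below are illustrative assumptions, not the paper's settings; class 1 is deliberately bimodal so that a single Gaussian models it poorly, which is the regime where the abstract reports ANN beating BGM.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)

def sample(n):
    """Two-class synthetic data; class 1 is a mixture of two
    well-separated Gaussian clusters, so a single ML-fitted Gaussian
    (the BGM model) mismatches the true class-conditional density."""
    n0, n1 = n // 2, n - n // 2
    x0 = rng.normal(size=(n0, 2))                       # class 0: one Gaussian
    centers = rng.choice([-3.0, 3.0], size=(n1, 1))     # class 1: two clusters
    x1 = rng.normal(size=(n1, 2)) + np.hstack([centers, centers])
    X = np.vstack([x0, x1])
    y = np.array([0] * n0 + [1] * n1)
    return X, y

X_test, y_test = sample(5000)  # large held-out set for error estimates

# Error rate as a function of training set size, as in the experiments above.
for n_train in [25, 50, 100, 200, 400, 800]:
    X_train, y_train = sample(n_train)
    classifiers = {
        "ANN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                             random_state=0),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "BGM": QuadraticDiscriminantAnalysis(),
    }
    errs = {name: 1.0 - clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in classifiers.items()}
    print(f"n={n_train:4d}  " +
          "  ".join(f"{k}: {v:.3f}" for k, v in errs.items()))
```

Under this kind of model mismatch, the BGM error rate plateaus at the error of the best single-Gaussian fit regardless of how much data it sees, while the ANN and KNN errors continue to fall with training set size, mirroring the qualitative result the abstract reports.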

Published in:

1989 International Conference on Acoustics, Speech, and Signal Processing (ICASSP-89)

Date of Conference:

23-26 May 1989