Independence, Measurement Complexity, and Classification Performance

Authors: B. Chandrasekaran (Department of Computer and Information Science, Ohio State University, Columbus, Ohio 43210); Anil K. Jain

If f(x) and g(x) are the densities of the N-dimensional measurement vector x, conditioned on the classes c1 and c2, and if finite sets of samples from the two classes are available, then a decision function based on the estimates f̂(x) and ĝ(x) can be used to classify future observations. In general, however, when the measurement complexity (the dimensionality N) is increased arbitrarily while the sets of training samples remain finite, a "peaking phenomenon" of the following kind is observed: classification accuracy improves at first, peaks at a finite value of N, called the optimum measurement complexity, and deteriorates thereafter. We derive, for the case of statistically independent measurements, general conditions under which the peaking phenomenon is guaranteed not to occur and the probability of correct classification increases to unity as N → ∞. Several applications are considered which together indicate, contrary to general belief, that independence of the measurements alone does not guarantee the absence of the peaking phenomenon.
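The peaking phenomenon the abstract describes can be reproduced in a small simulation. The sketch below is an illustration under assumed settings, not the paper's construction: two Gaussian classes with statistically independent, unit-variance features whose per-feature mean separation shrinks as 1/i, classified by a plug-in nearest-mean rule whose class means are estimated from a small training set. Because the total class separation stays bounded while the estimation noise in the discriminant grows with N, accuracy rises at small N, peaks, and then decays toward chance even though the measurements are independent.

```python
import numpy as np

def plugin_accuracy(n_features, n_train, n_test=4000):
    """Test accuracy of a plug-in nearest-mean classifier on two
    Gaussian classes with independent, unit-variance features.

    The per-feature mean separation shrinks as 1/i (an assumption
    chosen so the total class separation stays bounded), one regime
    in which peaking appears despite independence."""
    rng = np.random.default_rng(0)
    i = np.arange(1, n_features + 1)
    mu1, mu2 = 1.0 / i, -1.0 / i               # true class means
    # finite training sets -> noisy plug-in estimates of the means
    mu1_hat = rng.normal(mu1, 1.0, (n_train, n_features)).mean(axis=0)
    mu2_hat = rng.normal(mu2, 1.0, (n_train, n_features)).mean(axis=0)
    w = mu1_hat - mu2_hat                      # linear discriminant
    b = -0.5 * w @ (mu1_hat + mu2_hat)         # midpoint threshold
    # balanced test set drawn from the true densities
    x1 = rng.normal(mu1, 1.0, (n_test // 2, n_features))
    x2 = rng.normal(mu2, 1.0, (n_test // 2, n_features))
    correct = np.sum(x1 @ w + b > 0) + np.sum(x2 @ w + b <= 0)
    return correct / n_test

# accuracy rises with N at first, then decays toward 0.5
for n in (1, 10, 100, 2000):
    print(f"N = {n:5d}  accuracy = {plugin_accuracy(n, n_train=10):.3f}")
```

With a constant per-feature separation instead of 1/i, the same classifier's accuracy keeps increasing with N, which is consistent with the paper's point that whether peaking occurs depends on how the discriminative information accumulates across measurements, not on independence alone.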

Published in:

IEEE Transactions on Systems, Man, and Cybernetics (Volume: SMC-5, Issue: 2)