On the subject of optimal subspaces for appearance-based object recognition, it is generally believed that algorithms based on LDA (linear discriminant analysis) are superior to those based on PCA (principal components analysis), provided that relatively large training data sets are available. In this paper, we show that while this is generally true for classification with the nearest-neighbor classifier, it is not always the case with a maximum-likelihood classifier. We support our claim with both intuitively plausible arguments and experimental results on a large data set of human chromosomes. We conjecture that LDA may be truly superior to other known subspaces of equal dimensionality only when the underlying object classes are linearly separable.
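The comparison described above can be sketched in a few lines. The paper's experiments use a human chromosome data set and its own classifiers; the following is only a hypothetical illustration using scikit-learn on its bundled digits data, with PCA and LDA as the competing subspaces, a 1-NN classifier, and Gaussian naive Bayes standing in for a maximum-likelihood Gaussian classifier. The dimensionality `d = 9` is an assumption chosen because LDA yields at most (number of classes - 1) dimensions.

```python
# Hypothetical sketch (not the paper's method): compare PCA vs LDA
# subspaces of equal dimensionality under two classifiers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LDA produces at most (n_classes - 1) components; digits has 10 classes,
# so d = 9 makes the two subspaces comparable in dimensionality.
d = 9
subspaces = {
    "PCA": PCA(n_components=d).fit(X_tr),
    "LDA": LinearDiscriminantAnalysis(n_components=d).fit(X_tr, y_tr),
}
classifiers = {
    "1-NN": lambda: KNeighborsClassifier(n_neighbors=1),
    # GaussianNB: maximum likelihood under a per-class diagonal Gaussian model
    "Gaussian ML": lambda: GaussianNB(),
}

results = {}
for s_name, proj in subspaces.items():
    for c_name, make_clf in classifiers.items():
        clf = make_clf().fit(proj.transform(X_tr), y_tr)
        acc = clf.score(proj.transform(X_te), y_te)
        results[(s_name, c_name)] = acc
        print(f"{s_name} + {c_name}: accuracy = {acc:.3f}")
```

On real data the ranking of the four combinations depends on the data set, which is the point of the paper: the subspace that wins under a nearest-neighbor classifier need not win under a maximum-likelihood one.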