Linear discriminant analysis (LDA) and biased discriminant analysis (BDA) are two effective dimension-reduction techniques that treat the roles of positive and negative samples differently when seeking a discriminating subspace. However, both methods have clear drawbacks: LDA has limited efficiency in classifying sample data from subclasses with different distributions, and BDA does not account for the underlying distribution of negative samples. To effectively exploit the favorable attributes of both BDA and LDA while avoiding their unfavorable ones, we propose a novel adaptive discriminant analysis (ADA) for image classification. ADA finds an optimal discriminative subspace that adapts to different sample distributions. In addition, we propose three novel variants and extensions of ADA: 1) integrated boosting (i.Boosting), which enhances and combines a set of ADA classifiers into a more powerful one. i.Boosting integrates feature re-weighting, relevance feedback, and AdaBoost into one framework and, at affordable computational cost, provides a unified and stable ADA prediction. 2) Fast adaptive discriminant analysis (FADA), which, instead of searching over parameters, directly finds a close-to-optimal projection very quickly based on different sample distributions. 3) Two-dimensional adaptive discriminant analysis (2DADA), which, in contrast to ADA, operates on the 2-D image matrix representation rather than a 1-D vector, making it simpler and more direct to use for image feature extraction, with lower time complexity. Extensive experiments on synthetic data, UCI benchmark data sets, a hand-written digit data set, four facial image data sets, and COREL color image data sets show the superior performance of the proposed approaches.
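To make the baseline concrete, the classical LDA that ADA builds on projects data onto directions maximizing between-class scatter relative to within-class scatter. Below is a minimal illustrative sketch of that baseline (not the authors' ADA method) using NumPy; the two-Gaussian toy data and the function name `lda_projection` are assumptions for illustration only.

```python
import numpy as np

def lda_projection(X, y, n_components=1):
    """Classical Fisher LDA: find directions maximizing
    between-class scatter relative to within-class scatter."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter matrix
    Sb = np.zeros((d, d))  # between-class scatter matrix
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve Sw^{-1} Sb w = lambda w; keep top eigenvectors
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real

# Toy example (an assumption, not data from the paper):
# two Gaussian classes separated along the first axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)),
               rng.normal([4, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, y)
Z = X @ W  # 1-D discriminative projection
```

As the abstract notes, this single global projection is exactly what breaks down when a class is composed of subclasses with different distributions, which motivates ADA's adaptive choice of subspace.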