This paper proposes to improve the classification accuracy of hyperspectral data with support vector machines (SVMs) by using stacked generalization (stacking) together with the complementary information of magnitude and shape feature spaces. Stacking is a method that combines multiple classifiers by learning a meta-level (or level-1) classifier from the outputs of base-level (or level-0) classifiers, which are estimated via cross-validation. In the processing of hyperspectral data, magnitude features are the radiance values at different sensor bands, whereas shape features capture the differences in direction, rather than the magnitude, of the spectral signatures. The proposed method works as follows: (1) SVMs trained in the magnitude and shape feature spaces are adopted as level-0 classifiers (termed level-0 SVMs); (2) the outputs (decision values) of the level-0 SVMs are used as inputs (termed meta-level features) to the level-1 classifier, since decision values carry much more information than class labels alone; (3) the level-1 classifier is an SVM (level-1 SVM) trained in the meta-level feature space. In addition, we discuss the possibility of reducing the number of level-0 SVMs by meta-level feature selection and present one simple solution. Experiments on a benchmark hyperspectral data set demonstrate that our method significantly outperforms methods using a single feature space as well as other combining methods, namely simple voting, absolute maximum decision value, and stacking with class labels.
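The stacking scheme described above can be sketched as follows. This is a minimal illustration using scikit-learn, with synthetic data standing in for hyperspectral bands; the first-order band differences used as "shape" features are an illustrative assumption, not necessarily the paper's exact shape representation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

# Synthetic stand-in for a hyperspectral data set: 300 pixels,
# 20 "bands", 3 land-cover classes.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Two feature spaces: "magnitude" features are the raw band values;
# "shape" features here are first-order differences along the band axis
# (an illustrative choice for capturing spectral-signature direction).
X_mag = X
X_shape = np.diff(X, axis=1)

# Level-0 SVMs, one per feature space.
level0 = [SVC(kernel='rbf', gamma='scale'), SVC(kernel='rbf', gamma='scale')]
spaces = [X_mag, X_shape]

# Level-0 decision values, estimated via cross-validation so the
# level-1 SVM is not trained on resubstitution outputs; the stacked
# decision values form the meta-level feature space.
meta = np.hstack([
    cross_val_predict(clf, Xs, y, cv=5, method='decision_function')
    for clf, Xs in zip(level0, spaces)
])

# Level-1 SVM trained on the meta-level features.
level1 = SVC(kernel='rbf', gamma='scale').fit(meta, y)
```

With 3 classes and the default one-vs-rest decision shape, each level-0 SVM contributes 3 decision values per sample, so `meta` has 6 columns here; stacking class labels instead would reduce each classifier's contribution to a single hard label, which is the information loss the paper's comparison highlights.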