The curse of dimensionality is the main cause of the computational complexity and the Hughes phenomenon in supervised hyperspectral classification. Previous studies have seldom addressed, at the same time, the scarcity of available training samples (particularly for small land covers, which often contain the key information of a scene) and the problem of computational complexity. In this paper, the discriminative power of a feature reduction technique is combined with the advantages of a Bayesian learning-based probabilistic sparse kernel model, the relevance vector machine (RVM), to develop a new supervised classification method. In the proposed method, the hyperdimensional data are first transformed to a lower-dimensional feature space using the feature reduction technique so as to maximize separability between classes. The transformed data are then processed by a multiclass RVM classifier based on a parallel architecture and a one-against-one strategy. To verify the effectiveness of the method, experiments were carried out on real hyperspectral data. The results were compared, using appropriate performance indicators, with those of the most efficient supervised classification techniques, such as the support vector machine. The proposed method performs better than the other approaches, particularly for small and scattered land-cover classes, which are harder to classify precisely. In addition, the method has low computational complexity and is robust to the Hughes phenomenon.
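The overall pipeline shape described above (a separability-maximizing projection followed by a one-against-one multiclass kernel classifier) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes Fisher LDA as the feature reduction step and substitutes a simple regularized kernel least-squares classifier for the sparse Bayesian RVM; the function and class names are hypothetical.

```python
import numpy as np

def lda_transform(X, y, n_components):
    """Fisher LDA: projection maximizing between-class vs. within-class scatter.
    Stand-in for the paper's feature reduction step (assumption)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))  # within-class scatter
    Sb = np.zeros_like(Sw)                   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]  # projection matrix

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class OneVsOneKernelClassifier:
    """One-against-one multiclass scheme: one binary kernel model per class
    pair, combined by voting. Uses kernel ridge regression as a hedged
    stand-in for the RVM's sparse Bayesian training."""
    def fit(self, X, y, reg=1e-2):
        self.classes = np.unique(y)
        self.pairs = []
        for i in range(len(self.classes)):
            for j in range(i + 1, len(self.classes)):
                ci, cj = self.classes[i], self.classes[j]
                mask = (y == ci) | (y == cj)
                Xp = X[mask]
                yp = np.where(y[mask] == ci, 1.0, -1.0)
                K = rbf(Xp, Xp)
                alpha = np.linalg.solve(K + reg * np.eye(len(Xp)), yp)
                self.pairs.append((Xp, alpha, ci, cj))
        return self

    def predict(self, X):
        idx = {c: k for k, c in enumerate(self.classes)}
        votes = np.zeros((len(X), len(self.classes)), dtype=int)
        for Xp, alpha, ci, cj in self.pairs:
            score = rbf(X, Xp) @ alpha
            winner = np.where(score > 0, idx[ci], idx[cj])
            votes[np.arange(len(X)), winner] += 1
        return self.classes[votes.argmax(axis=1)]

# Usage on synthetic "hyperdimensional" data: project, then classify.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(30, 10)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)
W = lda_transform(X, y, n_components=2)   # reduce 10 bands to 2 features
Z = X @ W
clf = OneVsOneKernelClassifier().fit(Z, y)
accuracy = (clf.predict(Z) == y).mean()
```

In the paper's actual method, each pairwise model would be an RVM, whose Bayesian pruning of basis functions yields the sparsity and low prediction cost that the abstract credits for the method's efficiency.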