An important component in many supervised classifiers is the estimation of one or more covariance matrices, and the typically low training-sample count in supervised hyperspectral image classification creates the need for strong regularization when estimating such matrices. Often, this regularization is accomplished by adding a scaled regularization matrix, e.g., the identity matrix, to the sample covariance matrix. We introduce a framework for specifying and interpreting a broad range of such regularization matrices in the linear and quadratic discriminant analysis (LDA and QDA, respectively) classifier settings. A key component of the proposed framework is the relationship between regularization and linear dimensionality reduction. We show that the equivalent of the LDA or QDA classifier in any linearly reduced subspace can be reached by using an appropriate regularization matrix. Furthermore, several such regularization matrices can be added together to form more complex regularizers. We utilize this framework to build regularization matrices that incorporate multiscale spectral representations. Several realizations of such regularization matrices are discussed, and their performance when applied to QDA classifiers is evaluated on four hyperspectral data sets. In most cases, the classifiers benefit from the proposed regularization matrices.
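The baseline form of regularization described above can be sketched as follows. This is a minimal illustration, not the paper's method: it fits per-class Gaussians for a QDA classifier and shrinks each sample covariance by adding a scaled identity matrix; the function names and the `lam` parameter are hypothetical.

```python
import numpy as np


def regularized_qda_fit(X, y, lam=0.1):
    """Fit per-class Gaussian parameters with identity-regularized covariances.

    lam scales the identity regularization matrix added to each class's
    sample covariance (hypothetical parameter; a sketch of the generic
    shrinkage approach, not the paper's multiscale regularizers).
    """
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        # Regularized covariance estimate: S + lam * I
        cov = np.cov(Xc, rowvar=False) + lam * np.eye(X.shape[1])
        # Store precision and log-determinant for the discriminant score
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params


def regularized_qda_predict(params, X):
    """Assign each sample to the class with the highest Gaussian score."""
    labels = sorted(params)
    scores = []
    for c in labels:
        mu, prec, logdet = params[c]
        d = X - mu
        # Quadratic discriminant score (equal class priors assumed)
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, prec, d) + logdet))
    return np.array(labels)[np.argmax(np.stack(scores), axis=0)]
```

With few training samples relative to the number of spectral bands, the unregularized sample covariance is singular or ill-conditioned, and the `lam * np.eye(...)` term is what keeps the inversion well-posed.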