A key component of most parametric classifiers is the estimation of an inverse covariance matrix. In hyperspectral images, the number of bands can be in the hundreds, leading to covariance matrices with tens of thousands of elements. Recently, linear-regression-based estimators of the inverse covariance matrix have been introduced in the time-series literature. This paper adapts and extends these ideas to ill-posed hyperspectral image classification problems. The results indicate that at least some of the approaches can achieve a lower classification error than traditional methods such as linear discriminant analysis and regularized discriminant analysis. Furthermore, the results show that, contrary to earlier beliefs, estimating long-range dependencies between bands appears necessary for an effective hyperspectral classifier. The high correlations between neighboring bands also seem to allow differing sparsity configurations of the inverse covariance matrix to yield similar classification results.
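The regression view of the inverse covariance (precision) matrix rests on a standard identity: regressing each variable on all the others recovers one row of the precision matrix, since the diagonal entry is the reciprocal of the residual variance and the off-diagonal entries are the negated regression coefficients scaled by it. A minimal NumPy sketch on synthetic data (not the paper's estimator, which adds regularization/sparsity) illustrates the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 6  # toy setting: n samples, p "bands"
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))
X -= X.mean(axis=0)  # center columns

S = X.T @ X / n            # sample covariance (divisor n)
Theta = np.linalg.inv(S)   # direct inverse, for comparison

# Build the precision matrix row by row via per-variable regressions.
Theta_reg = np.zeros((p, p))
for j in range(p):
    rest = [k for k in range(p) if k != j]
    beta, *_ = np.linalg.lstsq(X[:, rest], X[:, j], rcond=None)
    resid = X[:, j] - X[:, rest] @ beta
    sigma2 = resid @ resid / n       # residual variance (divisor n, matching S)
    Theta_reg[j, j] = 1.0 / sigma2   # diagonal: inverse residual variance
    Theta_reg[j, rest] = -beta / sigma2  # off-diagonal: scaled coefficients

print(np.allclose(Theta_reg, Theta))  # the two constructions agree
```

In the ill-posed regime the paper targets (bands outnumbering training samples), the plain least-squares step above breaks down; the regression formulation matters precisely because each per-band regression can then be regularized or sparsified independently.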