It is well known that classification accuracy in pattern classification applications depends strongly on how precisely the classes are defined. In hyperspectral data analysis, classes of interest often comprise several components and may not be well represented by a single Gaussian density function. In this paper, a model-based mixture classifier, which uses mixture models to characterize class densities, is discussed. A key outstanding problem of this approach is how to choose the number of components and estimate their parameters in practice, particularly when training sets are limited and estimation error becomes a significant factor. The proposed classifier estimates the number of subclasses and the class statistics simultaneously by selecting the best model. The structure of the class covariances is also addressed through a model-based covariance estimation technique introduced in this paper.
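The idea of fitting a per-class mixture and selecting the number of subclasses by model comparison can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes scikit-learn's `GaussianMixture`, uses BIC as a stand-in for the paper's model-selection criterion, and the function names and the two-class synthetic data are invented for the example. The `covariance_type` argument is only a crude analogue of the model-based covariance estimation discussed above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_mixture(X, max_components=5, seed=0):
    """Fit GMMs with 1..max_components components to one class's
    training samples and keep the lowest-BIC model, so the number
    of subclasses and their statistics are chosen together."""
    best, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(X)
        bic = gmm.bic(X)
        if bic < best_bic:
            best, best_bic = gmm, bic
    return best

def classify(models, priors, X):
    """Assign each sample to the class maximizing the log
    class-conditional likelihood plus the log prior."""
    scores = np.stack([m.score_samples(X) + np.log(p)
                       for m, p in zip(models, priors)], axis=1)
    return scores.argmax(axis=1)

# Synthetic example: class 0 is bimodal (two subclasses),
# class 1 is a single Gaussian.
rng = np.random.default_rng(0)
class0 = np.vstack([rng.normal(-3.0, 0.5, (100, 2)),
                    rng.normal(3.0, 0.5, (100, 2))])
class1 = rng.normal(0.0, 0.5, (100, 2))
models = [fit_class_mixture(c) for c in (class0, class1)]
priors = [2 / 3, 1 / 3]
X = np.vstack([class0, class1])
y = np.array([0] * 200 + [1] * 100)
pred = classify(models, priors, X)
```

On this well-separated data, model selection recovers two components for the bimodal class and one for the unimodal class; the point is that the component count is never specified by hand.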