We propose new techniques for unsupervised segmentation of multimodal grayscale images such that each region of interest relates to a single dominant mode of the empirical marginal probability distribution of gray levels. We follow the most conventional approaches in that initial images and desired maps of regions are described by a joint Markov-Gibbs random field (MGRF) model of independent image signals and interdependent region labels. However, our focus is on more accurate model identification. To better specify region borders, each empirical distribution of image signals is precisely approximated by a linear combination of Gaussians (LCG) with positive and negative components. We modify an expectation-maximization (EM) algorithm to deal with the LCGs and also propose a novel EM-based sequential technique to obtain a close initial LCG approximation with which the modified EM algorithm should start. The proposed technique identifies individual LCG models in a mixed empirical distribution, including the number of positive and negative Gaussians. Initial segmentation based on the LCG models is then iteratively refined by using the MGRF with analytically estimated potentials. The convergence of the overall segmentation algorithm at each stage is discussed. Experiments show that the developed techniques segment different types of complex multimodal medical images more accurately than other known algorithms.
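To give a concrete sense of the key idea, the sketch below approximates a bimodal "empirical" gray-level density by a linear combination of Gaussians whose weights may be negative. This is a simplified least-squares illustration of the signed-component LCG idea only, not the authors' sequential EM-based identification technique; the grid, the component means and widths, and the target density are all hypothetical values chosen for the example.

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian density N(mu, sigma^2) evaluated on the grid x."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Hypothetical bimodal target standing in for an empirical gray-level density:
# two dominant modes, as in a two-region multimodal image.
x = np.linspace(0.0, 8.0, 400)
p = 0.6 * gauss(x, 2.0, 0.5) + 0.4 * gauss(x, 6.0, 1.0)

# LCG basis: Gaussians at fixed, evenly spaced means. Unlike a conventional
# mixture model, the combination weights solved for below are unconstrained,
# so individual components may enter with negative sign.
means = np.arange(0.5, 8.0, 0.75)
A = np.stack([gauss(x, m, 0.7) for m in means], axis=1)
w, *_ = np.linalg.lstsq(A, p, rcond=None)  # signed weights, least squares

approx = A @ w
rel_err = np.linalg.norm(approx - p) / np.linalg.norm(p)
```

With a sufficiently rich basis the signed combination tracks the tails and the valley between modes more closely than a positive-only mixture of the same size would; in the full method, the boundary between the two dominant modes then determines the initial region map that the MGRF stage refines.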