We present a novel method for aligning images under arbitrary poses, based on finding correspondences between image region features. In contrast with purely feature-based or intensity-based methods, we adopt a hybrid method that integrates the merits of both approaches. Our method uses a small number of automatically extracted scale-invariant salient region features, whose interior intensities can be matched using robust similarity measures. While previous techniques have primarily focused on finding correspondences between individual features, we emphasize the importance of geometric configural constraints in preserving the global consistency of individual matches and thus eliminating false feature matches. Our matching algorithm consists of two steps: region component matching (RCPM) and region configural matching (RCFM). The first step finds correspondences between individual region features. The second step detects a joint correspondence between multiple pairs of salient region features using a generalized Expectation-Maximization framework. The resulting joint correspondence is then used to recover the optimal transformation parameters. We applied our method to registering a pair of aerial images and several pairs of single- and multiple-modality medical images with promising results. In particular, the preliminary results showed that the proposed method is highly robust to image noise, intensity changes and inhomogeneity, appearance and disappearance of structures, and partial matching.
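To make the two-step idea concrete, the following is a minimal illustrative sketch, not the authors' implementation. It assumes region features are reduced to center points with descriptor vectors, a translation-only transformation model, and Gaussian soft-inlier weights; the actual method uses scale-invariant salient regions, robust intensity similarity measures, and a generalized EM formulation over richer transformations. Function and variable names here are hypothetical.

```python
import numpy as np

def region_component_matching(desc_a, desc_b):
    """Step 1 (RCPM analogue): match each feature in image A to its
    nearest neighbor in image B by descriptor distance."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(d, axis=1)  # candidate correspondence for each A-feature

def region_configural_matching(pts_a, pts_b, cand, n_iters=20, sigma=5.0):
    """Step 2 (RCFM analogue): EM-style alternation that enforces a
    globally consistent geometric configuration (here, a shared translation)
    and softly downweights matches that violate it."""
    t = np.zeros(2)            # current transform estimate (translation only)
    tgt = pts_b[cand]          # matched point locations in image B
    w = np.ones(len(pts_a))
    for _ in range(n_iters):
        r = tgt - (pts_a + t)                          # residuals under t
        w = np.exp(-np.sum(r**2, axis=1) / (2 * sigma**2))  # E-step: inlier weights
        t = np.average(tgt - pts_a, axis=0, weights=w)      # M-step: refit transform
    return t, w                # recovered transform and per-match consistency

# Toy usage: five matched regions, one corrupted (false) correspondence.
pts_a = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
pts_b = pts_a + np.array([3.0, -2.0])   # true translation
pts_b[4] = [100.0, 100.0]               # outlier: a false match
cand = region_component_matching(np.eye(5), np.eye(5))  # identity pairing
t, w = region_configural_matching(pts_a, pts_b, cand)
print(t)          # ≈ [3, -2]: outlier receives near-zero weight
```

The point of the sketch is the division of labor described in the abstract: individual matches are proposed locally, then a joint geometric consistency criterion (the configural step) suppresses false matches and yields the transformation parameters.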