We present a general framework for tracking image regions in two views simultaneously based on sum-of-squared-differences (SSD) minimization. Our method supports motion models up to affine transformations. In contrast to earlier approaches, we incorporate the well-known epipolar constraints directly into the SSD optimization process. Since the epipolar geometry can be computed from the images directly, no prior calibration is necessary. Our algorithm has been tested in several applications, including camera localization, wide-baseline stereo, object tracking, and medical imaging. We present experimental results on robustness and accuracy, evaluated against ground truth provided by a conventional tracking device.
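To make the SSD criterion concrete, the following is a minimal illustrative sketch of SSD-based region matching under a pure-translation motion model, using NumPy. The function names (`ssd`, `track_translation`) and the brute-force integer search are assumptions for illustration only; the paper's actual method optimizes SSD under affine motion models with epipolar constraints in two views.

```python
import numpy as np

def ssd(patch_a, patch_b):
    # Sum of squared intensity differences between two equal-sized patches.
    d = patch_a.astype(float) - patch_b.astype(float)
    return float(np.sum(d * d))

def track_translation(image, template, top_left, search_radius):
    """Brute-force SSD search over integer translations (illustrative only).

    image:         2-D grayscale array to search in
    template:      the image region being tracked
    top_left:      (row, col) of the region's previous position
    search_radius: maximum displacement searched in each direction
    Returns the (row, col) position with minimal SSD.
    """
    h, w = template.shape
    r0, c0 = top_left
    best_score, best_pos = None, top_left
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            # Skip candidate windows that fall outside the image.
            if r < 0 or c < 0 or r + h > image.shape[0] or c + w > image.shape[1]:
                continue
            score = ssd(image[r:r + h, c:c + w], template)
            if best_score is None or score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

In practice, gradient-based methods replace this exhaustive search, and richer motion models (affine warps, as in the paper) replace pure translation; the SSD objective itself stays the same.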