I. Introduction
To build a video coder that is robust in the presence of noise, the motion estimation process must be able to track objects within a noisy source. In a noisy source, objects appear to change from frame to frame because of the noise, not necessarily as the result of object motion [1]. Noise is introduced when video is recorded, and the problem is even more acute when video on analog tape is converted to digital format. Noise is undesirable not only because it degrades the visual quality of the video but also because it degrades the performance of subsequent processing such as compression [2].

Many motion estimation schemes have been developed, and they can be classified into spatial-domain and frequency-domain approaches. Spatial-domain algorithms comprise matching algorithms and gradient-based algorithms, while frequency-domain algorithms comprise phase correlation algorithms, wavelet transform-based algorithms, and DCT-based algorithms [3].
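To make the matching-algorithm family concrete, the sketch below implements exhaustive full-search block matching with a sum-of-absolute-differences (SAD) criterion, the simplest spatial-domain matching approach. It is an illustrative example only, not the estimator developed in this paper; the function name block_matching and the parameters block and search are chosen here purely for exposition.

import numpy as np

def block_matching(prev: np.ndarray, curr: np.ndarray,
                   block: int = 16, search: int = 7) -> np.ndarray:
    """Return an (H//block, W//block, 2) array of (dy, dx) motion vectors."""
    h, w = curr.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = curr[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = np.inf, (0, 0)
            # Exhaustively search a (2*search+1)^2 window in the previous frame.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = prev[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(target - cand).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

Because the SAD criterion compares raw pixel intensities, noise in either frame can pull the minimum toward a spurious displacement, which is precisely the robustness problem this work addresses.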