Images rendered by remote sensing multi-camera platforms typically contain jitter caused by decoding timing delays, target movement, and platform motion. In this paper, we address the problem of stabilizing large-frame, low-frame-rate imagery acquired from a multi-camera array system for persistent surveillance and monitoring. The algorithm exploits the temporal coherence between the cameras, eliminating the need to perform motion estimation on each individual camera sequence. The video stabilization algorithm supports three main modes of scalability: 1) quality, 2) resolution, and 3) camera. To demonstrate the feasibility of the developed algorithm in real-world scenarios, we present results on imagery collected from a prototype multi-camera array persistent surveillance system.
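The camera-scalable idea described above, estimating motion once and reusing it across the array rather than per camera, can be illustrated with a minimal sketch. The abstract does not specify the motion model, so this example assumes pure integer translational jitter shared by a rigid camera array and estimates it by phase correlation on a single reference camera; the function names and the `cameras` dictionary layout are illustrative, not the paper's actual interface.

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the integer (dy, dx) translation taking ref to cur
    via phase correlation (peak of the whitened cross-power spectrum)."""
    cps = np.fft.fft2(cur) * np.conj(np.fft.fft2(ref))
    cps /= np.abs(cps) + 1e-12          # whiten: keep phase, drop magnitude
    corr = np.fft.ifft2(cps).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                     # wrap indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

def stabilize_array(cameras, ref_cam=0):
    """cameras: dict {cam_id: [frame0, frame1, ...]} of 2-D grayscale arrays.
    Jitter is estimated once per time step, on ref_cam only, and the same
    correction is applied to every camera -- a sketch of the camera-scalable
    shortcut, assuming a rigid array whose jitter is common to all sensors."""
    out = {cam: [frames[0]] for cam, frames in cameras.items()}
    ref_frames = cameras[ref_cam]
    for t in range(1, len(ref_frames)):
        dy, dx = estimate_shift(ref_frames[0], ref_frames[t])
        for cam, frames in cameras.items():
            # undo the shared shift (circular shift; real code would pad/crop)
            out[cam].append(np.roll(frames[t], (-dy, -dx), axis=(0, 1)))
    return out
```

For an N-camera, T-frame sequence this performs T-1 motion estimates instead of N*(T-1), which is the point of exploiting inter-camera coherence; a real system would also handle sub-pixel shifts and non-translational platform motion.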