Dual-camera systems are widely used in surveillance because they combine the wide field of view (FOV) of an omnidirectional camera with the wide zoom range of a PTZ camera. Most existing algorithms require a priori knowledge of the omnidirectional camera's projection model to solve the nonlinear spatial correspondences between the two cameras. To overcome this limitation, two methods are proposed: 1) geometry calibration and 2) homography calibration, where polynomials with automated model selection approximate the camera's projection model and the spatial mapping, respectively. The proposed methods not only improve mapping accuracy by reducing dependence on knowledge of the projection model but also reduce computation and adapt more flexibly to varying system configurations. Although the fusion of multiple cameras has attracted increasing attention, most existing algorithms assume comparable FOVs and resolution levels among the cameras. The differing FOVs and resolutions of the omnidirectional and PTZ cameras raise another critical issue in practical tracking applications: the omnidirectional camera can track multiple objects simultaneously, whereas the PTZ camera can track only one target at a time if it is to maintain the required resolution. The PTZ camera must therefore distribute its observation time among multiple objects and visit them in sequence. This paper thus presents a novel scheme that computes an optimal visiting sequence so that, in a given period of time, the PTZ camera automatically visits multiple detected motions in a target-hopping manner. The effectiveness of the proposed algorithms is demonstrated through extensive experiments on both synthetic and real tracking data and through comparisons with two reference systems.
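The polynomial approximation with automated model selection mentioned above can be illustrated with a minimal sketch. The function name, the 80/20 hold-out split, and the use of validation error as the selection criterion are illustrative assumptions, not the paper's actual procedure; the idea is simply to fit polynomials of increasing degree to calibration samples (e.g., image radius versus incidence angle) and pick the degree that generalizes best.

```python
import numpy as np

def fit_projection_polynomial(radii, angles, max_degree=6):
    """Fit a polynomial mapping image radius -> incidence angle,
    selecting the degree automatically by hold-out validation error.
    (Hypothetical sketch; the paper's selection criterion may differ.)"""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(radii))
    split = int(0.8 * len(radii))          # 80% train, 20% validation
    train, val = idx[:split], idx[split:]
    best_deg, best_err, best_coeffs = 1, np.inf, None
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(radii[train], angles[train], deg)
        err = np.mean((np.polyval(coeffs, radii[val]) - angles[val]) ** 2)
        if err < best_err:                  # keep the best-generalizing degree
            best_deg, best_err, best_coeffs = deg, err, coeffs
    return best_deg, best_coeffs
```

In practice such a data-driven fit replaces the closed-form projection model, which is what reduces the method's dependence on a priori camera knowledge.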