Over the last decade, there has been increasing interest in developing vision systems and technologies that support the operation of unmanned submersible platforms. Selected examples include the detection of obstacles and tracking of moving targets, station keeping and positioning, pipeline following, navigation, and mapping. Currently, these developments rely on images from standard CCD cameras with a single optical center and limited field of view, making them restrictive for some applications. Panoramic images have been explored extensively in recent years (Peleg et al., 2001; Swaminathan et al., 2001; Yagi and Yachida, 1991; Zhang et al., 1991; Zheng and Tsuji, 1992), and panoramic imaging was previously proposed for a number of applications, capabilities, and operational modes of underwater vehicles (Negahdaripour et al., 1988); these scenarios are also common in airborne and space robotics applications. A particular configuration of interest in this investigation yields a conical view. Unlike a single catadioptric camera (Gluckman and Nayar, 1999; Swaminathan et al., 2001), a combination of conventional cameras may be used to generate images at much higher resolution (Negahdaripour et al., 2001). In this paper, we derive a complete mathematical model of the projection and image motion equations for a down-look conical camera that may be installed on a mobile platform, e.g., a submersible, or an airborne system in terrain flyover imaging. We describe the calibration of a system comprising multiple cameras with overlapping fields of view to generate the conical view. We finally demonstrate, through experiments with synthetic and real data, that such images provide improved accuracy in 3-D visual motion estimation, which is the underlying issue in a number of key problems, including 3-D positioning, navigation, and mapping, as well as image registration and photo-mosaicing.