The geometric structure of a scene can be reconstructed using many methods. In recent years, two prominent approaches have been digital photogrammetric analysis using passive stereo imagery and feature extraction from lidar point clouds. In the first method, the traditional technique relies on finding common points in two or more 2D images acquired from different viewing perspectives. More recently, similar approaches have been proposed in which stereo mosaics are built from aerial video using parallel ray interpolation, and surfaces are subsequently extracted from these mosaics using stereo geometry. Although the lidar data inherently contain 2.5- or 3-dimensional information, they also require processing to extract surfaces. In general, structure-from-stereo approaches work well when the scene surfaces are flat and have strong edges in the video frames. Lidar processing works well when the data are densely sampled. In this paper, we analyze and discuss the pros and cons of the two approaches. We also present three challenging situations that illustrate the benefits that could be derived from fusing the two data sources: when one or more edges are not clearly visible in the video frames, when the lidar data sampling density is low, and when the object surface is not planar. Examples are provided from the processing of real airborne data gathered using a combination of lidar and passive imagery taken from separate aircraft platforms at different times.
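As a minimal sketch of the stereo geometry underlying the structure-from-stereo approach (standard rectified-pair triangulation, not the paper's specific mosaicking pipeline): a scene point matched across two views with focal length f (in pixels) and baseline B has disparity d, and its depth follows as Z = fB/d. All numeric values below are illustrative assumptions, not data from the paper.

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (meters) of a point matched across a rectified stereo pair.

    f_px:         camera focal length expressed in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal shift of the matched point between the images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    # Similar triangles give Z = f * B / d; small disparities (distant or
    # weakly textured surfaces) make this estimate increasingly noisy,
    # which is one reason stereo struggles where edges are not clearly visible.
    return f_px * baseline_m / disparity_px

# Illustrative values: f = 1000 px, baseline 0.5 m, disparity 20 px
print(depth_from_disparity(1000.0, 0.5, 20.0))  # -> 25.0 meters
```

The inverse relationship between disparity and depth also explains the sensitivity noted in the abstract: when edges are weak and matched points are uncertain, small disparity errors translate into large depth errors, which is where complementary lidar returns can help.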