3D Scene Reconstruction through a Fusion of Passive Video and Lidar Imagery

5 Author(s): Gurram, P.; Rhody, H.; Kerekes, J.; Lach, S.; et al. (Rochester Inst. of Technol., Rochester)

The geometric structure of a scene can be reconstructed using many methods. In recent years, two prominent approaches have been digital photogrammetric analysis of passive stereo imagery and feature extraction from lidar point clouds. In the first method, the traditional technique relies on finding common points in two or more 2D images acquired from different viewing perspectives. More recently, similar approaches have been proposed in which stereo mosaics are built from aerial video using parallel ray interpolation, and surfaces are subsequently extracted from these mosaics using stereo geometry. Although lidar data inherently contain 2.5- or 3-dimensional information, they also require processing to extract surfaces. In general, structure-from-stereo approaches work well when the scene surfaces are flat and have strong edges in the video frames, while lidar processing works well when the data are densely sampled. In this paper, we analyze and discuss the pros and cons of the two approaches. We also present three challenging situations that illustrate the benefits that could be derived from fusing the two data sources: when one or more edges are not clearly visible in the video frames, when the lidar sampling density is low, and when the object surface is not planar. Examples are provided from the processing of real airborne data gathered using a combination of lidar and passive imagery taken from separate aircraft platforms at different times.
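The structure-from-stereo idea the abstract describes (recovering depth from corresponding points seen from different view perspectives) reduces, for a rectified camera pair, to the classic depth-from-disparity relation Z = f·B/d. The sketch below illustrates that relation only; the focal length, baseline, and pixel coordinates are illustrative assumptions, not values from the paper, and the paper's actual pipeline (mosaic construction, parallel ray interpolation, surface extraction) is considerably more involved.

```python
# Minimal sketch of depth recovery from a single stereo correspondence,
# assuming a rectified pin-hole camera pair.
# f_px: focal length in pixels, baseline_m: camera separation in meters,
# disparity_px: horizontal shift of the matched feature between frames.
# All numeric values below are hypothetical.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return f_px * baseline_m / disparity_px

# Example: a feature at x = 412 px in the left frame matched at x = 400 px
# in the right frame gives a 12 px disparity; with f = 1200 px and B = 0.5 m:
z = depth_from_disparity(1200.0, 0.5, 412.0 - 400.0)
print(z)  # 50.0 (meters)
```

The relation also shows why the approach degrades without strong edges: featureless or low-contrast surfaces yield unreliable correspondences, and hence unreliable disparities, which motivates fusing in lidar range data.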

Published in:

2007 36th IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2007)

Date of Conference:

10-12 Oct. 2007