This paper is concerned with fusing aerial imagery, LIDAR point clouds, and hyperspectral imagery for automated urban mapping. Instead of performing traditional supervised or unsupervised classification of the hyperspectral data, we propose a region-growing approach starting from seed pixels that originate from fusing LIDAR and aerial imagery. This requires a thorough alignment of all sensors involved, a problem that is solved with sensor-invariant features. The common reference system is the geodetic frame in which the LIDAR points are computed. The alignment yields transformations from sensor space to object space and back, avoiding resampling of the sensor data. After describing the major aspects, an example demonstrates the feasibility of the proposed fusion approach.
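To make the region-growing idea concrete, the following is a minimal sketch (not the authors' implementation): given seed pixel coordinates, assumed here to come from the LIDAR/aerial-image fusion step, each region expands to 4-connected neighbours whose spectra are within a Euclidean-distance threshold of the seed spectrum. The function name `region_grow`, the `(H, W, B)` cube layout, and the distance criterion are all illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(cube, seeds, threshold=0.1):
    """Grow labelled regions from seed pixels in a hyperspectral cube.

    cube      : (H, W, B) array holding a B-band spectrum per pixel
                (hypothetical layout).
    seeds     : list of (row, col) seed coordinates, e.g. obtained from
                fusing LIDAR and aerial imagery as the paper proposes.
    threshold : maximum Euclidean distance between a candidate pixel's
                spectrum and its seed's spectrum (illustrative criterion;
                a spectral-angle measure could be used instead).
    Returns an (H, W) label map: 0 = unassigned, k = region of seed k-1.
    """
    h, w, _ = cube.shape
    labels = np.zeros((h, w), dtype=int)
    for k, (r0, c0) in enumerate(seeds, start=1):
        ref = cube[r0, c0]            # seed spectrum serves as region reference
        queue = deque([(r0, c0)])
        labels[r0, c0] = k
        while queue:                  # breadth-first expansion from the seed
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] == 0:
                    if np.linalg.norm(cube[rr, cc] - ref) <= threshold:
                        labels[rr, cc] = k
                        queue.append((rr, cc))
    return labels
```

On a toy cube whose left and right halves carry distinct spectra, two seeds (one per half) would each grow to cover exactly their half, since cross-boundary spectral distances exceed the threshold.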