We summarize our methods for the fusion of multisensor/multispectral imagery based on concepts derived from neural models of visual processing (adaptive contrast enhancement, opponent-color contrast, multi-scale contour completion, and multi-scale texture enhancement) and on semi-supervised pattern learning and recognition. These methods have been applied to the problem of aided feature extraction (AFE) from airborne multispectral and hyperspectral remote sensing imaging systems and from space-based multi-platform, multi-modality imaging sensors. The methods enable color-fused 3D visualization as well as interactive exploitation and data mining in the form of human-guided machine learning and search for objects, land cover, and cultural features. This technology has been evaluated on space-based imagery for the National Imagery and Mapping Agency, and a real-time implementation has also been demonstrated for terrestrial fused-color night imaging. We have recently incorporated these methods into a commercial software platform (ERDAS Imagine) for imagery exploitation. We describe the approach and user interfaces, and we show results for a variety of sensor systems with application to remote sensing feature extraction, including EO/IR/MSI/SAR imagery from Landsat and Radarsat, multispectral Ikonos imagery, and Hyperion and HyMap hyperspectral imagery.
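To make the opponent-color contrast concept concrete, the sketch below shows one common way such a channel can be computed between two co-registered sensor bands: a bounded ratio contrast, (A - B)/(A + B), analogous to opponent channels in models of color vision. This is an illustrative assumption, not the authors' actual algorithm; the function name `opponent_contrast` and the toy visible/infrared arrays are hypothetical.

```python
import numpy as np

def opponent_contrast(band_a, band_b, eps=1e-6):
    """Illustrative opponent contrast between two co-registered bands.

    Returns (A - B) / (A + B + eps), a ratio contrast bounded in
    [-1, 1] for non-negative inputs. Positive values indicate band A
    dominates at that pixel; negative values indicate band B dominates.
    This is a sketch of the general idea, not the paper's method.
    """
    a = np.asarray(band_a, dtype=np.float64)
    b = np.asarray(band_b, dtype=np.float64)
    return (a - b) / (a + b + eps)

# Toy 2x2 example: a visible-band patch vs. an infrared patch.
vis = np.array([[0.8, 0.2],
                [0.5, 0.5]])
ir = np.array([[0.2, 0.8],
               [0.5, 0.5]])
opp = opponent_contrast(vis, ir)
```

In a fused-color display, such an opponent channel could drive one color axis (e.g. red-green), while a complementary pair of bands drives another, so that inter-band differences become directly visible as hue.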
Date of Conference: 27-28 Oct. 2003