Abstract:
Hyperspectral anomaly detection (HAD) is a challenging task because it must identify anomalous targets without prior knowledge. In recent years, deep learning methods have emerged as some of the most popular approaches to HAD. These methods operate on the assumption that the background can be reconstructed well while anomalies cannot, so the reconstruction error of each pixel serves as its anomaly score. However, most approaches treat all background pixels of a hyperspectral image (HSI) as a single type of ground object. This assumption does not always hold in practical scenes, making it difficult to distinguish backgrounds from anomalies effectively. To address this issue, a novel deep feature aggregation network (DFAN) is proposed in this article, establishing a new paradigm for HAD that represents multiple background patterns. The DFAN adopts an adaptive aggregation model (AAM), which combines the orthogonal spectral attention module (OSAM) with the background-anomaly category statistics module. This allows spectral and spatial information to be exploited effectively to capture the distributions of the background and anomalies. To better optimize the proposed DFAN, a novel multiple aggregation separation loss (MASL) is designed based on the intrasimilarity and interdifference of the background and anomalies. This constraint suppresses potential anomaly representations and strengthens potential background representations. Extensive experiments on six real hyperspectral datasets demonstrate that the proposed DFAN achieves superior performance for HAD. The code is available at https://github.com/ChengXi-1217/DFAN-HAD.
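The abstract describes the reconstruction-error paradigm on which DFAN builds: pixels that the network reconstructs poorly are scored as anomalous. The following is a minimal, hypothetical sketch of that scoring idea, not the authors' DFAN implementation; the autoencoder architecture, band count, and function names are illustrative assumptions.

```python
# Hypothetical sketch of reconstruction-error-based anomaly scoring for an HSI.
# SpectralAE, the layer sizes, and the 189-band cube are assumptions for
# illustration; they are not taken from the DFAN code.
import torch
import torch.nn as nn


class SpectralAE(nn.Module):
    """Toy per-pixel spectral autoencoder: background pixels are assumed to
    reconstruct well, anomalies poorly."""

    def __init__(self, n_bands: int, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, n_bands))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def anomaly_map(model: SpectralAE, hsi: torch.Tensor) -> torch.Tensor:
    """hsi: (H, W, B) cube -> (H, W) map of per-pixel reconstruction errors."""
    h, w, b = hsi.shape
    pixels = hsi.reshape(-1, b)
    with torch.no_grad():
        recon = model(pixels)
    errors = ((recon - pixels) ** 2).mean(dim=1)  # per-pixel MSE
    return errors.reshape(h, w)


# Example usage with a random cube standing in for a real hyperspectral scene.
hsi = torch.rand(100, 100, 189)        # 189 spectral bands (assumed)
model = SpectralAE(n_bands=189)
scores = anomaly_map(model, hsi)       # higher score = more anomalous pixel
```

In this simplified sketch a single autoencoder models the whole background; DFAN's contribution, per the abstract, is to replace that single-background assumption with adaptive aggregation over multiple background patterns and to train with the MASL constraint.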
Published in: IEEE Transactions on Instrumentation and Measurement (Volume: 73)