Abstract:
Filling gaps in high-resolution satellite imagery is essential for tracking vegetation changes over time. Spatiotemporal fusion (STF) aims to create fusion products that improve both spatial resolution and temporal coverage by using images from various remote sensing sources. However, most existing STF methods rely on the assumption that reflectance values for the same land-cover type remain constant between base and prediction dates, a premise often invalidated in vegetation disturbance and recovery scenarios, where differences in disturbance intensity, patterns, and phenological stages challenge this uniformity. Therefore, we propose a novel Transformer-based method, the spatiotemporal integration network (STINet), which is effective in fusing multiscale spatiotemporal dynamic features. STINet is structured around three key components. The feature fusion (FF) block effectively integrates multiscale spatiotemporal information into a deep learning (DL) framework. The adaptive feature extraction (AFE) block significantly improves the precision of pixel-level features, essential for detecting subtle changes in diverse vegetation patterns. The spatiotemporal-wise multihead self-attention (ST-MSA) module, through its innovative self-attention mechanism across spatiotemporal dimensions, facilitates the reconstruction of vegetation dynamics. To verify the effectiveness and robustness of the proposed method, we conducted experiments in three carefully selected scenarios using multisensor and multitemporal imagery to reconstruct the dynamic changes in vegetation due to various disturbances and recovery processes.
Compared with four typical fusion methods [enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), flexible spatiotemporal data fusion method (FSDAF), extended super-resolution convolutional neural network (ESRCNN), and multiscene spatiotemporal fusion network (MUSTFN)], STINet achieved the best performance in preserving both ...
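The abstract does not detail the ST-MSA module's internals, but the general idea of multihead self-attention applied jointly over temporal and spatial positions can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the token layout (T dates x H x W pixels flattened into one sequence), the projection matrices, and all dimensions are hypothetical assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatiotemporal_msa(x, wq, wk, wv, num_heads):
    """Multihead self-attention over flattened spatiotemporal tokens.

    x:  (T*H*W, d) tokens pooled from T acquisition dates and H*W pixels,
        so every token can attend across both time and space.
    wq, wk, wv: (d, d) projection matrices (hypothetical learned weights).
    """
    n, d = x.shape
    dh = d // num_heads
    # project and split into heads: (heads, n, dh)
    q = (x @ wq).reshape(n, num_heads, dh).transpose(1, 0, 2)
    k = (x @ wk).reshape(n, num_heads, dh).transpose(1, 0, 2)
    v = (x @ wv).reshape(n, num_heads, dh).transpose(1, 0, 2)
    # scaled dot-product attention across all spatiotemporal positions
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))
    # merge heads back: (n, d)
    return (attn @ v).transpose(1, 0, 2).reshape(n, d)

# toy example: T=2 dates, a 2x2 patch, d=8 features, 2 heads
rng = np.random.default_rng(0)
T, H, W, d = 2, 2, 2, 8
x = rng.standard_normal((T * H * W, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
y = spatiotemporal_msa(x, wq, wk, wv, num_heads=2)
print(y.shape)  # (8, 8): one fused feature vector per spatiotemporal token
```

Flattening dates and pixels into a single token sequence is what lets each output feature mix information from other dates at other locations, which is the property the abstract attributes to attending "across spatiotemporal dimensions".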
Published in: IEEE Transactions on Geoscience and Remote Sensing (Volume: 62)