Abstract:
Smoke detection is essential for fire prevention, yet it is significantly hampered by the visual similarity between smoke and fog. To address this challenge, a split top-k attention transformer framework (STKformer) is proposed. The STKformer incorporates split top-k attention (STKA), which partitions the attention map for top-k selection to retain informative self-attention values while capturing long-range dependencies. This approach effectively filters out irrelevant attention scores, preventing information loss. Furthermore, an adaptive dark-channel-prior guidance network (ADGN) is designed to enhance smoke recognition under foggy conditions. ADGN employs pooling operations instead of minimum-value filtering, allowing efficient dark-channel extraction with learnable parameters and adaptively reducing the impact of fog. The extracted prior information then guides feature extraction through a priorformer block, improving model robustness. Additionally, a cross-stage fusion module (CSFM) is introduced to aggregate features from different stages efficiently, enabling flexible adaptation to smoke features at various scales and enhancing detection accuracy. Comprehensive experiments demonstrate that the proposed method achieves state-of-the-art performance across multiple datasets, with an accuracy of 89.68% on the dataset for smoke detection in fog, 99.76% on CCTV images of smoke, and 99.76% on UAV images of wildfire. The method remains fast and lightweight, reaching an inference speed of 211.46 FPS on an NVIDIA Jetson AGX Orin after TensorRT acceleration, confirming its effectiveness and efficiency for real-world applications. The source code is available at https://github.com/Jiongze-Yu/STKformer.
Published in: IEEE Internet of Things Journal (Volume: 12, Issue: 6, 15 March 2025)
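
The core idea of keeping only the most informative attention scores can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch illustration of top-k attention filtering as described in the abstract (the k largest scores per query are retained and the rest are masked out before the softmax); the function name topk_attention, the tensor shapes, and the default top_k value are assumptions, and the actual STKA "split" partitioning of the attention map is not reproduced here.

import torch

def topk_attention(q, k, v, top_k=8):
    """Hypothetical top-k attention sketch.
    q, k, v: (batch, heads, seq_len, dim). Returns the attended values."""
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale            # (B, H, N, N)

    # Keep only the top-k scores per query; mask the rest to -inf so they
    # receive (near) zero weight after the softmax.
    top_vals, _ = scores.topk(top_k, dim=-1)               # (B, H, N, top_k)
    threshold = top_vals[..., -1:]                          # k-th largest score
    masked = scores.masked_fill(scores < threshold, float("-inf"))

    attn = masked.softmax(dim=-1)
    return attn @ v

if __name__ == "__main__":
    q = torch.randn(1, 4, 16, 32)
    k = torch.randn(1, 4, 16, 32)
    v = torch.randn(1, 4, 16, 32)
    out = topk_attention(q, k, v, top_k=4)
    print(out.shape)  # torch.Size([1, 4, 16, 32])

Masking rather than zeroing the discarded scores keeps the remaining weights properly normalized by the softmax, which is one plausible way to realize the "filters out irrelevant attention scores" behavior the abstract attributes to STKA.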