Abstract:
This work aims to advance the topic of lightweight semantic segmentation transformer models by exploring changes to the architecture that promote faster inference. Here, we propose replacing the classical attention mechanism with two alternatives: skip-attention and pool-unpool attention. Before experimenting with the modified attention mechanisms, a layer correlation analysis was performed. Evaluation on the ADE20K and Cityscapes datasets demonstrated that both skip-attention and pool-unpool attention have a strong positive impact on inference speed. Namely, the skip-attention model achieves inference speed gains of +26.7% on ADE20K and +101.7% on Cityscapes, while the pool-unpool attention model yields +14.8% on ADE20K and +73.3% on Cityscapes. The conducted experiments demonstrate that skip-attention and pool-unpool attention are viable alternatives to the classical attention mechanism when higher inference speed is required.
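To make the pool-unpool idea concrete, the following is a minimal, illustrative sketch of one plausible realization: tokens are spatially pooled before self-attention and upsampled afterwards, reducing the quadratic attention cost. The class name, pooling factor, and layer choices here are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolUnpoolAttention(nn.Module):
    """Sketch of a pool-unpool attention block (assumed design, not the paper's code)."""
    def __init__(self, dim: int, num_heads: int = 4, pool: int = 2):
        super().__init__()
        self.pool = pool                                  # spatial reduction factor (assumed)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        # x: (B, H*W, C) token sequence laid out on an h x w grid
        b, n, c = x.shape
        grid = x.transpose(1, 2).reshape(b, c, h, w)
        pooled = F.avg_pool2d(grid, self.pool)            # pool: attention runs on fewer tokens
        ph, pw = pooled.shape[-2:]
        tokens = self.norm(pooled.flatten(2).transpose(1, 2))  # (B, ph*pw, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        out = attn_out.transpose(1, 2).reshape(b, c, ph, pw)
        out = F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)  # unpool
        return x + out.flatten(2).transpose(1, 2)         # residual at full resolution

if __name__ == "__main__":
    layer = PoolUnpoolAttention(dim=64)
    feats = torch.randn(1, 32 * 32, 64)                   # 32x32 token grid
    print(layer(feats, 32, 32).shape)                     # torch.Size([1, 1024, 64])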
Published in: 2024 International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA)
Date of Conference: 01-02 February 2024
Date Added to IEEE Xplore: 20 March 2024