
Towards Lightweight Transformer Architecture: an Analysis on Semantic Segmentation


Abstract:

This work aims to advance the topic of lightweight semantic segmentation transformer models by exploring architectural changes that promote faster inference. Here, we propose replacing the classical attention mechanism with two alternatives: skip-attention and pool-unpool attention. Before experimenting with the modified attention mechanisms, a layer correlation analysis was performed. Evaluation on the ADE20K and Cityscapes datasets demonstrates that both skip-attention and pool-unpool attention have a strong positive impact on inference speed: the skip-attention model achieves inference speed gains of +26.7% on ADE20K and +101.7% on Cityscapes, while the pool-unpool attention model yields gains of +14.8% on ADE20K and +73.3% on Cityscapes. The conducted experiments demonstrate that skip-attention and pool-unpool attention are viable alternatives to the classical attention mechanism when higher inference speed is required.
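
The abstract does not spell out the mechanisms, but pool-unpool attention can be read as downsampling the token grid before self-attention and upsampling the result back, which shortens the sequence the quadratic attention operates on. The following PyTorch snippet is a minimal sketch of that reading only: the class name PoolUnpoolAttention, the average-pool/bilinear-unpool choice, and the residual connection are assumptions for illustration, not the paper's exact formulation.

    # Hypothetical sketch: tokens are average-pooled to a coarser grid before
    # self-attention and bilinearly unpooled back afterwards. This is an
    # assumed reading of "pool-unpool attention", not the authors' design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PoolUnpoolAttention(nn.Module):
        def __init__(self, dim, num_heads=4, pool_ratio=2):
            super().__init__()
            self.pool_ratio = pool_ratio
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, x, h, w):
            # x: (B, N, C) token sequence arranged on an h x w grid (N = h * w)
            b, n, c = x.shape
            grid = x.transpose(1, 2).reshape(b, c, h, w)
            # Pool: reduce spatial resolution to shorten the attention sequence.
            pooled = F.avg_pool2d(grid, kernel_size=self.pool_ratio)
            ph, pw = pooled.shape[-2:]
            tokens = pooled.flatten(2).transpose(1, 2)        # (B, ph*pw, C)
            attended, _ = self.attn(tokens, tokens, tokens)   # attention on fewer tokens
            # Unpool: restore the original resolution, then add a residual.
            attended = attended.transpose(1, 2).reshape(b, c, ph, pw)
            unpooled = F.interpolate(attended, size=(h, w), mode="bilinear",
                                     align_corners=False)
            return x + unpooled.flatten(2).transpose(1, 2)

    # Usage: a 16x16 token grid with 64-dim embeddings.
    # block = PoolUnpoolAttention(dim=64)
    # out = block(torch.randn(2, 256, 64), h=16, w=16)   # -> (2, 256, 64)

With a pool ratio of 2 the attention sequence shrinks by 4x, which is the kind of change the abstract credits for the reported inference speed gains.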
Date of Conference: 01-02 February 2024
Date Added to IEEE Xplore: 20 March 2024
Conference Location: Victoria, Seychelles
