
Boosting Salient Object Detection With Transformer-Based Asymmetric Bilateral U-Net


Abstract:


Existing salient object detection (SOD) methods mainly rely on U-shaped convolutional neural networks (CNNs) with skip connections to combine the global contexts and local spatial details that are crucial for locating salient objects and refining object details, respectively. Despite great success, the ability of CNNs to learn global contexts is limited. Recently, the vision transformer has achieved revolutionary progress in computer vision owing to its powerful modeling of global dependencies. However, directly applying the transformer to SOD is suboptimal because the transformer lacks the ability to learn local spatial representations. To this end, this paper explores the combination of transformers and CNNs to learn both global and local representations for SOD. We propose a transformer-based Asymmetric Bilateral U-Net (ABiU-Net). The asymmetric bilateral encoder has a transformer path and a lightweight CNN path, where the two paths communicate at each encoder stage to learn complementary global contexts and local spatial details, respectively. The asymmetric bilateral decoder also consists of two paths to process features from the transformer and CNN encoder paths, with communication at each decoder stage for decoding coarse salient object locations and fine-grained object details, respectively. Such communication between the two encoder/decoder paths enables ABiU-Net to learn complementary global and local representations, taking advantage of the natural merits of transformers and CNNs, respectively. Hence, ABiU-Net provides a new perspective for transformer-based SOD. Extensive experiments demonstrate that ABiU-Net performs favorably against previous state-of-the-art SOD methods. The code is available at https://github.com/yuqiuyuqiu/ABiU-Net.
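To make the stage-wise communication between the two encoder paths concrete, the following is a minimal PyTorch sketch of one bilateral encoder stage. It assumes same-resolution feature maps in both paths and uses simple additive fusion through 1x1 convolutions; the module names and the fusion scheme are illustrative assumptions, not the authors' actual ABiU-Net implementation.

    # Conceptual sketch only: a transformer path and a lightweight CNN path that
    # exchange features once per encoder stage. Fusion details are hypothetical.
    import torch
    import torch.nn as nn

    class BilateralEncoderStage(nn.Module):
        """One encoder stage: each path processes its input, then the two paths
        exchange features so global context and local detail complement each other."""

        def __init__(self, channels):
            super().__init__()
            # Stand-ins for one transformer block and one lightweight CNN block.
            # `channels` must be divisible by the number of attention heads.
            self.transformer_block = nn.TransformerEncoderLayer(
                d_model=channels, nhead=4, batch_first=True)
            self.cnn_block = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            # Hypothetical communication: 1x1 convs project the other path's features.
            self.to_cnn = nn.Conv2d(channels, channels, 1)
            self.to_trans = nn.Conv2d(channels, channels, 1)

        def forward(self, x_trans, x_cnn):
            b, c, h, w = x_cnn.shape
            # Transformer path operates on flattened tokens (global dependencies).
            tokens = x_trans.flatten(2).transpose(1, 2)          # (B, HW, C)
            tokens = self.transformer_block(tokens)
            x_trans = tokens.transpose(1, 2).reshape(b, c, h, w)
            # CNN path operates on the spatial map (local details).
            x_cnn = self.cnn_block(x_cnn)
            # Cross-path communication: each path receives the other's features.
            return x_trans + self.to_trans(x_cnn), x_cnn + self.to_cnn(x_trans)

A stack of such stages, with downsampling in between, would form the bilateral encoder described above; the decoder would mirror this two-path, stage-wise fusion.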
Page(s): 2332 - 2345
Date of Publication: 23 August 2023



I. Introduction

Salient object detection (SOD) aims at detecting the most visually conspicuous objects or regions in an image [1], [2], [3], [4], [5], [6], [7], [8], [9]. It has a wide range of computer vision applications such as human-robot interaction [10], content-aware image editing [11], image retrieval [12], object recognition [13], image thumbnailing [14], weakly supervised learning [15], etc. In the last decade, convolutional neural networks (CNNs) have significantly advanced this field. Intuitively, the global contextual information (found in the top CNN layers) is essential for locating salient objects, while the local fine-grained information (found in the bottom CNN layers) is helpful in refining object details [1], [8], [9], [16], [17], [18], [19]. This is why U-shaped encoder-decoder CNNs have dominated this field [2], [3], [16], [17], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36], where the encoder extracts multi-level deep features from raw images and the decoder integrates the extracted features with skip connections to make image-to-image predictions [3], [16], [17], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [37]. The encoder is usually an existing CNN backbone, e.g., ResNet [38], while most efforts are put into the design of the decoder [30], [31], [32], [33], [35]. Although remarkable progress has been made in this direction, CNN-based encoders share the intrinsic limitation of extracting features from images in a local manner. The lack of powerful global modeling has been the main bottleneck for CNN-based SOD.
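For concreteness, below is a minimal PyTorch sketch of such a U-shaped encoder-decoder for SOD: the encoder extracts features at progressively lower resolutions, and the decoder upsamples and fuses them through skip connections to predict a per-pixel saliency map. The depth and channel widths are illustrative assumptions, not any particular published architecture.

    # Generic U-shaped encoder-decoder sketch (not ABiU-Net). Input height and
    # width are assumed divisible by 4.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        """Encoder extracts multi-level features; decoder upsamples and fuses them
        through skip connections to predict a full-resolution saliency map."""

        def __init__(self):
            super().__init__()
            self.enc1, self.enc2, self.enc3 = conv_block(3, 32), conv_block(32, 64), conv_block(64, 128)
            self.dec2, self.dec1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
            self.head = nn.Conv2d(32, 1, 1)  # single-channel saliency logits

        def forward(self, x):
            f1 = self.enc1(x)                        # high resolution, local detail
            f2 = self.enc2(F.max_pool2d(f1, 2))
            f3 = self.enc3(F.max_pool2d(f2, 2))      # low resolution, larger context
            # Skip connections: concatenate upsampled deep features with shallow ones.
            d2 = self.dec2(torch.cat([F.interpolate(f3, scale_factor=2), f2], dim=1))
            d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), f1], dim=1))
            return torch.sigmoid(self.head(d1))      # per-pixel saliency in [0, 1]

In this design, the receptive field of even the deepest convolutional features grows only linearly with depth, which is precisely the locality limitation that motivates introducing a transformer path.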

