Learning Multiscale Deep Features and SVM Regressors for Adaptive RGB-T Saliency Detection


Abstract:

This paper investigates how to perform robust image saliency detection by adaptively leveraging different source data. Given an aligned RGB-T image pair, we learn robust representations for each modality by using deep convolutional neural networks (CNNs) at different scales, which capture multiscale context features and the rich semantic information inherited from CNNs pretrained on the ImageNet dataset. We then employ a fully connected layer to concatenate the multiscale CNN features and infer a saliency map for each modality. To adaptively incorporate the information from the RGB and thermal images, we train an SVM regressor on the multiscale CNN features to compute a reliability weight for each modality, and combine these weights with the corresponding saliency maps to obtain the fused saliency map. In addition, we create a new image dataset and implement several baseline methods with different modality inputs to facilitate the evaluation of RGB-T saliency detection. Experimental results on the newly created dataset demonstrate the effectiveness of the proposed approach against the baseline methods.
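The adaptive fusion step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the multiscale CNN features, the reliability targets used to train the SVM regressor, and the per-modality saliency maps are all stand-in random data, and `fuse_saliency` simply normalizes the two predicted reliability weights into a per-pixel convex combination of the RGB and thermal saliency maps.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

def fuse_saliency(feat_rgb, feat_t, sal_rgb, sal_t, regressor):
    """Weight each modality's saliency map by its predicted reliability."""
    w_rgb = float(regressor.predict(feat_rgb[None, :])[0])
    w_t = float(regressor.predict(feat_t[None, :])[0])
    # Clamp to keep weights positive before normalizing.
    w_rgb, w_t = max(w_rgb, 1e-6), max(w_t, 1e-6)
    return (w_rgb * sal_rgb + w_t * sal_t) / (w_rgb + w_t)

# Train the reliability regressor on synthetic (feature, weight) pairs;
# in the paper, such targets would be derived with ground-truth supervision.
X_train = rng.random((50, 128))   # stand-in multiscale CNN features
y_train = rng.random(50)          # stand-in reliability targets in [0, 1]
svr = SVR(kernel="rbf").fit(X_train, y_train)

sal_rgb = rng.random((32, 32))    # per-modality saliency maps in [0, 1]
sal_t = rng.random((32, 32))
fused = fuse_saliency(rng.random(128), rng.random(128),
                      sal_rgb, sal_t, svr)
```

Because the weights are normalized, each fused pixel lies between the two modalities' saliency values, so an unreliable modality (low predicted weight) contributes proportionally less to the final map.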
Date of Conference: 09-10 December 2017
Date Added to IEEE Xplore: 01 February 2018
Electronic ISSN: 2473-3547
Conference Location: Hangzhou, China