
Occluded Visible-Infrared Person Re-Identification



Abstract:

Visible-infrared person re-identification (VI-ReID) aims to match person images between the visible and near-infrared modalities. Previous VI-ReID methods are based on holistic pedestrian images and achieve excellent performance. However, in real-world scenarios, images captured by visible and near-infrared cameras usually contain occlusions, and the performance of these methods degrades significantly because occlusion removes discriminative information from the images. We define visible-infrared person re-identification in this occlusion scenario as Occluded VI-ReID, where only the partial content of pedestrian images is available for matching images of different modalities from different cameras. In this paper, we propose a matching framework for occlusion scenes, which contains a local feature enhance module (LFEM) and a modality information fusion module (MIFM). LFEM adopts a Transformer to learn the features of each modality and adjusts the importance of patches to enhance the representation ability of local features in non-occluded areas. MIFM utilizes a co-attention mechanism to infer the correlation between images, reducing the difference between modalities. We construct two occluded VI-ReID datasets, namely the Occluded-SYSU-MM01 and Occluded-RegDB datasets. Our approach outperforms existing state-of-the-art methods on the two occlusion datasets while maintaining top performance on two holistic datasets.
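The abstract's core LFEM idea, reweighting patch features so that non-occluded regions dominate the representation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `enhance_local_features` and the use of per-patch occlusion scores with a softmax weighting are assumptions for demonstration only.

```python
import numpy as np

def enhance_local_features(patches: np.ndarray, occ: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of patch reweighting.

    patches: (N, D) array of N patch embeddings of dimension D.
    occ:     (N,) occlusion scores in [0, 1], 1 = fully occluded.
    Returns patches scaled by softmax importance weights, so
    visible (low-occlusion) patches contribute more to the
    aggregated local representation.
    """
    scores = 1.0 - occ                    # visible patches score higher
    w = np.exp(scores - scores.max())     # numerically stable softmax
    w = w / w.sum()
    return patches * w[:, None]           # broadcast weights over feature dim
```

A visible patch thus retains a larger share of the pooled feature than an occluded one; the actual LFEM learns these importances inside a Transformer rather than from precomputed occlusion scores.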
Published in: IEEE Transactions on Multimedia ( Volume: 25)
Page(s): 1401 - 1413
Date of Publication: 16 December 2022



