
Plausible Proxy Mining With Credibility for Unsupervised Person Re-Identification


Abstract:

One effective way to address unsupervised person re-identification is clustering-based contrastive learning. Existing state-of-the-art methods adopt clustering algorithms (e.g., DBSCAN) together with camera ID information to divide all person images into several camera-aware proxies. Then, for each person image, the extracted feature representation is pulled closer to the centroids of its pseudo-positive proxies (the proxies that share the same pseudo-identity label as this image) and pushed away from the centroids of its pseudo-negative proxies (the proxies with a different pseudo-identity label). However, the quality of the proxy centroids is significantly affected by the proxy impurity issue, which in turn degrades the learned feature representations. Since the proxy impurity issue cannot be solved completely to obtain clean supervision signals, it becomes urgent and challenging, for each person image, to identify its plausible proxies, i.e., the pseudo-negative proxies that potentially contain its wrongly-clustered instances (instances sharing the same ground-truth identity as this image), and to correct the resulting incorrect supervision signals. This paper proposes a simple yet effective approach to this problem. Given an image, our method effectively locates its plausible proxies. We then introduce credibility to measure the extent to which the centroid of each mined plausible proxy should be treated as a positive supervision signal rather than an entirely negative one. Extensive experiments on three widely-used person re-ID datasets validate the effectiveness of the proposed approach. Code will be available at: https://github.com/Dingyuan-Zheng/PPCL.
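The credibility-weighted proxy contrastive objective described in the abstract can be sketched as follows. This is a minimal illustration under our own assumptions (the function name, the encoding of credibility as a per-proxy weight in [0, 1], and the temperature value are ours, not the authors' implementation):

```python
import numpy as np

def proxy_contrastive_loss(feat, centroids, credibility, tau=0.05):
    """Credibility-weighted proxy-level contrastive loss (sketch).

    feat        : (D,)  L2-normalised image feature
    centroids   : (K, D) L2-normalised proxy centroids
    credibility : (K,)  weight in [0, 1]; 1 for pseudo-positive proxies,
                  between 0 and 1 for mined plausible proxies (soft
                  positives), and 0 for ordinary pseudo-negative proxies
    tau         : softmax temperature
    """
    sims = centroids @ feat / tau          # (K,) proxy similarities
    sims = sims - sims.max()               # numerical stability
    log_p = sims - np.log(np.exp(sims).sum())  # log-softmax over proxies
    # Credibility-weighted negative log-likelihood: a plausible proxy's
    # centroid contributes as a soft positive instead of a pure negative.
    w = credibility / credibility.sum()
    return float(-(w * log_p).sum())
```

When every mined plausible proxy has credibility 0, this reduces to the standard proxy contrastive loss over the pseudo-positive centroids; a nonzero credibility turns the corresponding centroid into a soft positive supervision signal.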
Page(s): 3308 - 3318
Date of Publication: 27 December 2022


I. Introduction

Person re-identification (re-ID) aims to retrieve pedestrians with the same identity as a given probe image across non-overlapping camera views. As demands in urban security grow, person re-ID has become an essential intelligent video surveillance technique. Relying on video surveillance systems widely deployed in railway stations, subway stations, supermarkets, and other crowded public places, person re-ID technology is of significant importance for tracing criminal suspects, preventing crime, and locating missing persons. Traditional fully-supervised methods achieve impressive performance when all identity annotations are available during training [1], [2], [3], [4]. Such methods, however, heavily rely on manually-labelled person identity annotations, which are costly to collect. It is therefore impractical to apply these fully-supervised methods to large-scale practical scenarios, where many person images must be annotated in a limited time. To address this issue, unsupervised re-ID methods, which require no identity annotations during training, have been studied by an increasing number of researchers [5], [6], [7], [8].

References

[1] M. Ester, "A density-based algorithm for discovering clusters in large spatial databases with noise," in Proc. KDD, 1996, pp. 226–231.
[2] Z. Zhong, L. Zheng, D. Cao, and S. Li, "Re-ranking person re-identification with k-reciprocal encoding," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jul. 2017, pp. 1318–1327.
[3] Z. Zheng, L. Zheng, and Y. Yang, "Pedestrian alignment network for large-scale person re-identification," IEEE Trans. Circuits Syst. Video Technol., vol. 29, no. 10, pp. 3037–3045, Oct. 2019.
[4] G. Zhang, Z. Luo, Y. Chen, Y. Zheng, and W. Lin, "Illumination unification for person re-identification," IEEE Trans. Circuits Syst. Video Technol., vol. 32, no. 10, pp. 6766–6777, Oct. 2022.
[5] K. Zheng, W. Liu, L. He, T. Mei, J. Luo, and Z.-J. Zha, "Group-aware label transfer for domain adaptive person re-identification," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2021, pp. 5310–5319.
[6] D. Zheng, J. Xiao, K. Chen, X. Huang, L. Chen, and Y. Zhao, "Soft pseudo-label shrinkage for unsupervised domain adaptive person re-identification," Pattern Recognit., vol. 127, Jul. 2022, Art. no. 108615.
[7] Y. Wu, X. Wu, X. Li, and J. Tian, "MGH: Metadata guided hypergraph modeling for unsupervised person re-identification," in Proc. 29th ACM Int. Conf. Multimedia, Oct. 2021, pp. 1571–1580.
[8] L. Qi, L. Wang, J. Huo, Y. Shi, X. Geng, and Y. Gao, "Adversarial camera alignment network for unsupervised cross-camera person re-identification," IEEE Trans. Circuits Syst. Video Technol., vol. 32, no. 5, pp. 2921–2936, May 2022.
[9] J. Li and S. Zhang, "Joint visual and temporal consistency for unsupervised domain adaptive person re-identification," in Proc. ECCV, 2020, pp. 483–499.
[10] K. Zheng, C. Lan, W. Zeng, Z. Zhang, and Z.-J. Zha, "Exploiting sample uncertainty for domain adaptive person re-identification," in Proc. AAAI, 2021, pp. 3538–3546.
[11] Y. Dai, J. Liu, Y. Sun, Z. Tong, C. Zhang, and L.-Y. Duan, "IDM: An intermediate domain module for domain adaptive person re-ID," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2021, pp. 1–11.
[12] R. Hadsell, S. Chopra, and Y. LeCun, "Dimensionality reduction by learning an invariant mapping," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), vol. 2, Jun. 2006, pp. 1735–1742.
[13] Y. Ge, D. Chen, F. Zhu, R. Zhao, and H. Li, "Self-paced contrastive learning with hybrid memory for domain adaptive object re-ID," in Proc. NeurIPS, 2020, pp. 11309–11321.
[14] M. Wang, B. Lai, J. Huang, X. Gong, and X.-S. Hua, "Camera-aware proxies for unsupervised person re-identification," in Proc. AAAI, 2021, pp. 2764–2772.
[15] H. Chen, B. Lagadec, and F. Bremond, "ICE: Inter-instance contrastive encoding for unsupervised person re-identification," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2021, pp. 14960–14969.
[16] D. Wang and S. Zhang, "Unsupervised person re-identification via multi-label classification," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 10981–10990.
[17] H. Zhang, G. Zhang, Y. Chen, and Y. Zheng, "Global relation-aware contrast learning for unsupervised person re-identification," IEEE Trans. Circuits Syst. Video Technol., vol. 32, no. 12, pp. 8599–8610, Dec. 2022.
[18] Y. Cho, W. J. Kim, S. Hong, and S.-E. Yoon, "Part-based pseudo label refinement for unsupervised person re-identification," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2022, pp. 7308–7318.
[19] X. Zhang, "Implicit sample extension for unsupervised person re-identification," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2022, pp. 7369–7378.
[20] D. Mekhazni, A. Bhuiyan, G. Ekladious, and E. Granger, "Unsupervised domain adaptation in the dissimilarity space for person re-identification," in Proc. ECCV, 2020, pp. 159–174.
[21] S. Lin, H. Li, C. T. Li, and A. C. Kot, "Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification," in Proc. BMVC, 2018, pp. 1–13.
[22] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola, "A kernel two-sample test," J. Mach. Learn. Res., vol. 13, pp. 723–773, Mar. 2012.
[23] W. Deng, L. Zheng, Q. Ye, G. Kang, Y. Yang, and J. Jiao, "Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 994–1003.
[24] L. Wei, S. Zhang, W. Gao, and Q. Tian, "Person transfer GAN to bridge domain gap for person re-identification," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Jun. 2018, pp. 79–88.
[25] I. Goodfellow, "Generative adversarial nets," in Proc. NeurIPS, 2014, pp. 139–144.
[26] Y. Ge, D. Chen, and H. Li, "Mutual mean-teaching: Pseudo label refinery for unsupervised domain adaptation on person re-identification," in Proc. ICLR, 2020, pp. 1–15.
[27] Y. Zheng, "Online pseudo label generation by hierarchical cluster dynamics for adaptive person re-identification," in Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2021, pp. 8371–8381.
[28] D. Zheng, J. Xiao, Y. Wei, Q. Wang, K. Huang, and Y. Zhao, "Unsupervised domain adaptation in homogeneous distance space for person re-identification," Pattern Recognit., vol. 132, Dec. 2022, Art. no. 108941.
[29] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick, "Momentum contrast for unsupervised visual representation learning," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2020, pp. 9729–9738.
[30] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, "Performance measures and a data set for multi-target, multi-camera tracking," in Proc. ECCV, 2016, pp. 17–35.
