Abstract:
Video-based person re-identification (ReID) has emerged as a pivotal task in multi-camera surveillance and security systems, enabling the accurate identification of individuals across diverse viewpoints. While traditional ReID approaches predominantly rely on image-based methodologies, video-based ReID introduces distinct challenges, including spatial distractions such as background clutter and temporal variations across consecutive frames, which frequently impede robust identity recognition. Building upon the Spatial and Temporal Memory Networks (STMN) [2] architecture, this study proposes an advanced framework for video-based ReID by integrating Orthogonal Projection [1], aiming to enhance model robustness in highly cluttered and dynamic environments with numerous distractors. The proposed method leverages spatial memory modules to identify and suppress distracting artifacts, thereby mitigating the influence of background noise on the learned person representations. Simultaneously, temporal memory modules are employed to model repetitive patterns in background dynamics, enabling the model to focus on temporally consistent identity-related features across video frames. To further enhance discriminative capabilities, Orthogonal Projection is introduced, which enforces orthogonal constraints within the embedding space. This mechanism ensures better separation between identity clusters by creating well-defined, non-overlapping decision boundaries. This integration of orthogonal regularization not only improves the discriminative power of the learned feature space but also establishes clear classification criteria, effectively reducing feature overlap and enhancing identity representation fidelity. 
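As a rough illustration of the orthogonality idea described above, the snippet below sketches one common way such a constraint can be expressed as a training penalty: the pairwise cosine similarities of (hypothetical) identity embedding centroids are pushed toward zero, so that clusters occupy near-orthogonal directions. This is a minimal sketch for intuition only; the exact formulation used in the Orthogonal Projection method [1] may differ.

```python
import numpy as np

def orthogonality_penalty(W: np.ndarray) -> float:
    """Penalize non-orthogonality between rows of W.

    W: (num_identities, dim) matrix of identity embedding
       centroids (a hypothetical stand-in for the learned
       representations). Rows are L2-normalized, the Gram
    matrix is computed, and the squared off-diagonal
    entries are summed: zero iff all rows are orthogonal.
    """
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    gram = Wn @ Wn.T                      # pairwise cosine similarities
    off_diag = gram - np.eye(gram.shape[0])
    return float(np.sum(off_diag ** 2))
```

In practice a term like this would be added to the ReID loss with a weighting coefficient; perfectly orthogonal centroids yield a penalty of 0, while collapsed (identical) centroids are penalized most heavily.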
Extensive experiments conducted on the MARS [3] dataset demonstrate that the proposed STMN with Orthogonal Projection significantly outperforms existing state-of-the-art methods, particularly under challenging scenarios involving partial occlusions, dynamic lighting conditi...
Date of Conference: 19-22 January 2025
Date Added to IEEE Xplore: 18 February 2025
ISBN Information: