
Video-Based Person Re-Identification With Accumulative Motion Context


Abstract:

Video-based person re-identification plays a central role in realistic security and video surveillance. In this paper, we propose a novel accumulative motion context (AMOC) network for addressing this important problem, which effectively exploits long-range motion context to robustly identify the same person under challenging conditions. Given a video sequence of the same or different persons, the proposed AMOC network jointly learns appearance representation and motion context from a collection of adjacent frames using a two-stream convolutional architecture. AMOC then accumulates clues from the motion context by recurrent aggregation, allowing effective information flow among adjacent frames and capturing the dynamic gist of the persons. The architecture of AMOC is end-to-end trainable, so the motion context can be adapted to complement appearance clues under unfavorable conditions (e.g., occlusions). Extensive experiments are conducted on three public benchmark data sets, i.e., the iLIDS-VID, PRID-2011, and MARS data sets, to investigate the performance of AMOC. The experimental results demonstrate that the proposed AMOC network significantly outperforms state-of-the-art methods for video-based re-identification and confirm the advantage of exploiting long-range motion context for video-based person re-identification, clearly validating our motivation.
Page(s): 2788 - 2802
Date of Publication: 14 June 2017
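
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the idea: a two-stream encoder produces appearance and motion features from pairs of adjacent frames, and a recurrent layer accumulates the motion context over the sequence into a single person descriptor. All module names, layer sizes, and the use of channel-stacked adjacent frames as the motion input are illustrative assumptions, not the authors' exact AMOC architecture.

# Hypothetical sketch, assuming PyTorch; sizes and modules are illustrative only.
import torch
import torch.nn as nn


class TwoStreamFrameEncoder(nn.Module):
    """Encodes one frame (appearance stream) and its adjacent-frame pair (motion stream)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Appearance stream: single RGB frame -> feature vector.
        self.appearance = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Motion stream: two stacked adjacent frames (6 channels) -> motion context vector.
        self.motion = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, frame, frame_pair):
        return self.appearance(frame), self.motion(frame_pair)


class AMOCSketch(nn.Module):
    """Accumulates per-frame appearance + motion features with a recurrent layer."""

    def __init__(self, feat_dim=128, hidden_dim=128):
        super().__init__()
        self.encoder = TwoStreamFrameEncoder(feat_dim)
        self.rnn = nn.GRU(2 * feat_dim, hidden_dim, batch_first=True)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) video tracklet of one person.
        b, t, c, h, w = frames.shape
        per_frame = []
        for i in range(t - 1):
            # Stack each frame with its successor as a crude motion-context input.
            pair = torch.cat([frames[:, i], frames[:, i + 1]], dim=1)  # (b, 6, H, W)
            app, mot = self.encoder(frames[:, i], pair)
            per_frame.append(torch.cat([app, mot], dim=1))
        seq = torch.stack(per_frame, dim=1)   # (b, t-1, 2*feat_dim)
        _, h_n = self.rnn(seq)                # recurrent accumulation over time
        return h_n[-1]                        # sequence-level person descriptor


if __name__ == "__main__":
    model = AMOCSketch()
    clip = torch.randn(2, 8, 3, 128, 64)      # two 8-frame person tracklets
    print(model(clip).shape)                   # torch.Size([2, 128])

In a re-identification setting, such a sequence-level descriptor would typically be trained with an identity or verification loss and compared across cameras by a distance metric; those training details are outside the scope of this sketch.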
