I. Introduction
Person retrieval conditioned on specific visual attributes or portrait images is very useful for hunting criminal or terrorist suspects in large-scale surveillance scenarios. For example, describable person attributes played a critical role in the retrieval of the two suspects in the Boston marathon bombing event [1]. To accomplish this task, extracting a "good" feature representation of the target person is a crucial and challenging problem due to the low image quality, the large variations in camera viewpoint and pose, and the occlusions in real unconstrained scenes. Although deep neural network based methods that learn visual features from large-scale training samples have achieved a series of breakthroughs in various vision tasks, the commonly used large-scale benchmark datasets, e.g., ImageNet and COCO, are collected from the Internet and therefore carry the intrinsic biases of cyberspace, e.g., selection bias, capture bias, and negative set bias [2]. Thus, for person retrieval in the physical world, it is necessary to construct a large-scale and richly annotated pedestrian dataset for feature learning and algorithm evaluation. In this paper, considering the two types of query modalities in person retrieval, i.e., image-based query and attribute-based query, as shown in Fig. 1, we collect a large-scale and richly annotated pedestrian (RAP) dataset as a unified benchmark for person retrieval in real visual surveillance scenarios.
Fig. 1. The general framework for person retrieval based on different types of queries. The green and purple lines represent attribute-based and image-based queries, respectively.