This paper describes a body of multi-camera human action video data with manually annotated silhouettes, generated for the purpose of evaluating silhouette-based human action recognition methods. It provides a realistic challenge to both the segmentation and human action recognition communities and can serve as a benchmark for objectively comparing proposed algorithms. This public multi-camera, multi-action dataset improves on existing datasets (e.g. PETS, CAVIAR, the soccer dataset) that were not developed specifically for human action recognition, and complements other action recognition datasets (KTH, Weizmann, IXMAS, HumanEva, CMU Motion). It consists of 17 action classes performed by 14 actors and recorded by 8 cameras. Each actor performs an action several times in the action zone. The paper describes the dataset and illustrates a possible approach to algorithm evaluation using a previously published simple action recognition method. In addition to demonstrating an evaluation methodology, these results establish a baseline for other researchers to improve upon.