Abstract:
We present in this paper a new approach for human-action extraction and recognition in a multi-modal context. Our solution consists of two modules. The first applies temporal action segmentation by combining a heuristic analysis with augmented-joint description and SVM classification. The second performs frame-wise action recognition using the skeletal, RGB, and depth modalities, coupled with a label-grouping strategy at the decision level. Our contribution consists of (1) a selective concatenation of features extracted from the different modalities, (2) the introduction of features relative to the face region in addition to the hands, and (3) a multilevel frame-grouping strategy. Experiments carried out on the ChaLearn gesture challenge 2014 dataset demonstrate the effectiveness of our approach with respect to the literature.
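To illustrate the selective-concatenation idea mentioned in contribution (1), the sketch below builds a per-frame descriptor by concatenating feature vectors from a chosen subset of modalities. This is a minimal illustration, not the authors' implementation: the modality names, feature dimensions, and selection are hypothetical placeholders.

```python
import numpy as np

def selective_concat(features, selected):
    """Concatenate per-modality feature vectors, keeping only the
    selected modalities in the given order.

    features: dict mapping modality name -> 1-D feature vector
    selected: iterable of modality names to keep
    """
    return np.concatenate([np.asarray(features[m]) for m in selected])

# Hypothetical per-frame features from three modalities (sizes are
# illustrative, not taken from the paper).
frame = {
    "skeleton": np.zeros(60),   # e.g. augmented-joint descriptors
    "rgb":      np.zeros(128),  # e.g. appearance features on hand/face regions
    "depth":    np.zeros(128),
}

# Keep only the skeletal and RGB modalities for this frame.
vec = selective_concat(frame, ["skeleton", "rgb"])
print(vec.shape)  # (188,)
```

The resulting vector could then be fed to a per-frame classifier such as an SVM; which modalities to keep would be decided by whichever selection criterion the recognition module uses.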
Date of Conference: 31 August 2015 - 04 September 2015
Date Added to IEEE Xplore: 28 December 2015
Electronic ISBN: 978-0-9928-6263-3
Electronic ISSN: 2076-1465