Abstract:
Depth sensors, such as the Kinect, have predominantly been used as gesture recognition devices. Recent works, however, have proposed using these sensors for user authentication with biometric modalities such as face, speech, gait, and gesture. The last of these modalities, gesture, used in the context of full-body and hand-based gestures, is relatively new but has shown promising authentication performance. In this paper, we focus on hand-based gestures that are performed in-air. We present a novel approach to user authentication from such gestures by leveraging a temporal hierarchy of depth-aware silhouette covariances. Further, we investigate the usefulness of shape and depth information in this modality, as well as the importance of hand movement when performing a gesture. By exploiting both shape and depth information, our method attains an average 1.92% Equal Error Rate (EER) on a dataset of 21 users across 4 predefined hand gestures. Our method consistently outperforms related methods on this dataset.
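To make the descriptor concrete, the sketch below illustrates the general idea of a "temporal hierarchy of depth-aware silhouette covariances": per-pixel features of the hand silhouette (here, image coordinates and depth) are pooled into a covariance matrix for each segment of a temporal pyramid, and the segment covariances are concatenated into one gesture descriptor. This is only a minimal illustration under assumed feature choices, not the authors' exact pipeline; all function names are hypothetical.

```python
# Minimal sketch of a temporal pyramid of silhouette covariance descriptors.
# Assumed per-pixel features: (x, y, depth); the paper's exact features may differ.
import numpy as np

def temporal_hierarchy_descriptor(depth_frames, masks, levels=3):
    """Concatenate segment-level covariances over a temporal pyramid.

    Level l splits the gesture into 2**l equal-length segments; the silhouette
    pixels of all frames in a segment are pooled into one covariance matrix,
    and all matrices are flattened and concatenated.
    """
    T = len(depth_frames)
    descriptor = []
    for level in range(levels):
        n_seg = 2 ** level
        bounds = np.linspace(0, T, n_seg + 1, dtype=int)
        for s in range(n_seg):
            # Pool (x, y, depth) features from every frame in this temporal segment.
            feats = []
            for t in range(bounds[s], bounds[s + 1]):
                ys, xs = np.nonzero(masks[t])
                feats.append(np.stack([xs, ys, depth_frames[t][ys, xs]], axis=1))
            feats = np.concatenate(feats, axis=0).astype(np.float64)
            descriptor.append(np.cov(feats, rowvar=False).ravel())
    return np.concatenate(descriptor)

# Toy usage: 16 synthetic 64x64 depth frames with a square "hand" region.
rng = np.random.default_rng(0)
frames = [rng.uniform(0.5, 1.5, size=(64, 64)) for _ in range(16)]
masks = [np.zeros((64, 64), dtype=bool) for _ in range(16)]
for m in masks:
    m[20:44, 20:44] = True
desc = temporal_hierarchy_descriptor(frames, masks)
print(desc.shape)  # (3*3 entries) * (1 + 2 + 4 segments) = 63 dimensions
```

In practice, such covariance descriptors are usually compared with a metric suited to symmetric positive-definite matrices or fed to a verification classifier whose scores are thresholded to report the Equal Error Rate; the specific matching scheme used in the paper is described in its method section.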
Date of Conference: 27-30 September 2015
Date Added to IEEE Xplore: 10 December 2015
Department of Electrical and Computer Engineering, Boston University, Boston, MA