Abstract:
Hand gesture recognition can benefit from directly processing 3D point cloud sequences, which carry rich geometric information and enable the learning of expressive spatio-temporal features. However, currently employed single-stream models cannot sufficiently capture multi-scale features that include both fine-grained local posture variations and global hand movements. We therefore propose a novel dual-stream model, which decouples the learning of local and global features. These are eventually fused in an LSTM for temporal modelling. To induce the global and local streams to capture complementary position and posture features, we propose the use of different 3D learning architectures in the two streams. Specifically, state-of-the-art point cloud networks excel at capturing fine posture variations from raw point clouds in the local stream. To track hand movements in the global stream, we combine an encoding with residual basis point sets and a fully-connected DenseNet. We evaluate the method on the SHREC'17 and DHG datasets and report state-of-the-art results at a reduced computational cost. Source code is available at https://github.com/multimodallearning/hand-gesture-posture-position.
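The global stream relies on a basis point set (BPS) encoding, which turns a variable-size point cloud into a fixed-length descriptor: for each point in a fixed basis set, one stores the distance to its nearest neighbour in the cloud. The sketch below is an illustrative minimal version of that idea in NumPy, not the authors' implementation; the function name `bps_encode` and the cloud/basis sizes are assumptions for demonstration.

```python
import numpy as np

def bps_encode(points, basis):
    """Encode a point cloud (N x 3) against a fixed basis (K x 3).

    Returns a fixed-length vector of length K: for each basis point,
    the Euclidean distance to its nearest neighbour in the cloud.
    """
    # Pairwise distances: (K, N) matrix of basis-to-cloud distances.
    d = np.linalg.norm(basis[:, None, :] - points[None, :, :], axis=-1)
    # Nearest-neighbour distance per basis point -> shape (K,).
    return d.min(axis=1)

rng = np.random.default_rng(0)
cloud = rng.random((128, 3))   # one hand point cloud, N = 128 points
basis = rng.random((32, 3))    # fixed basis point set, K = 32 points
feat = bps_encode(cloud, basis)  # fixed-length descriptor, shape (32,)
```

Because the descriptor length depends only on the basis size K, the per-frame encodings of a gesture sequence can be stacked and fed to a dense network and an LSTM regardless of how many points each frame contains.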
Published in: 2021 International Conference on 3D Vision (3DV)
Date of Conference: 01-03 December 2021
Date Added to IEEE Xplore: 06 January 2022