
Hierarchical Recurrent Deep Fusion Using Adaptive Clip Summarization for Sign Language Translation


Abstract:

Vision-based sign language translation (SLT) is a challenging task due to the complicated variations of facial expressions, gestures, and articulated poses involved in sign linguistics. As a weakly supervised sequence-to-sequence learning problem, SLT usually provides no exact temporal boundaries of actions. To adequately explore temporal hints in videos, we propose a novel framework named Hierarchical Recurrent deep Fusion (HRF). Aiming at modeling discriminative action patterns, in HRF we design an adaptive temporal encoder to capture crucial RGB visemes and skeleton signees. Specifically, RGB visemes and skeleton signees are each learned by the same scheme, named Adaptive Clip Summarization (ACS). ACS consists of three key modules: variable-length clip mining, adaptive temporal pooling, and attention-aware weighting. In addition, based on the unaligned action patterns (RGB visemes and skeleton signees), a query-adaptive decoding fusion is proposed to translate the target sentence. Extensive experiments demonstrate the effectiveness of the proposed HRF framework.
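The abstract names ACS's three modules but gives no implementation detail. Purely as a minimal sketch of how "adaptive temporal pooling" combined with "attention-aware weighting" over variable-length clips could be realized in PyTorch, and not the authors' actual method, consider the module below; the names ClipAttentionPool, feat_dim, and hidden_dim, as well as the padding-mask scheme, are all hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClipAttentionPool(nn.Module):
    """Summarize a variable-length clip of frame features into one vector
    via attention-weighted averaging; one plausible (assumed) reading of
    adaptive temporal pooling plus attention-aware weighting."""

    def __init__(self, feat_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Small MLP that scores each frame's importance within the clip.
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, frames: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # frames: (batch, max_len, feat_dim) padded frame features
        # mask:   (batch, max_len), 1 for real frames, 0 for padding
        logits = self.score(frames).squeeze(-1)       # (batch, max_len)
        logits = logits.masked_fill(mask == 0, -1e9)  # exclude padding
        weights = F.softmax(logits, dim=-1)           # attention weights
        return (weights.unsqueeze(-1) * frames).sum(dim=1)  # (batch, feat_dim)

# Usage: pool a padded batch of two clips, of true lengths 5 and 3.
pool = ClipAttentionPool(feat_dim=512)
frames = torch.randn(2, 5, 512)
mask = torch.tensor([[1, 1, 1, 1, 1], [1, 1, 1, 0, 0]])
clip_vec = pool(frames, mask)  # (2, 512): one summary vector per clip

In this sketch the same pooling module could be applied independently to the RGB stream and the skeleton stream, yielding the per-modality clip summaries that a downstream decoding-fusion stage would consume.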
Published in: IEEE Transactions on Image Processing (Volume: 29)
Page(s): 1575-1590
Date of Publication: 23 September 2019

PubMed ID: 31545723
