
Cross-Domain Action Recognition via Prototypical Graph Alignment


Abstract:

Compared with the well-explored cross-domain image recognition, cross-domain action recognition is a more challenging task because not only spatial but also temporal domain gaps exist across domains. Previous works attempt to bridge the temporal domain gap by aligning the domain-related key segments of videos from the source and target domains. However, such practice overlooks the heterogeneous temporal domain gaps among different categories and applies temporal alignment strategies in a class-irrelevant manner. To address this issue, we propose to achieve class-wise temporal alignment for cross-domain action recognition via prototypical graph alignment (PGA). Concretely, we generate segment-level prototypes for the classes of both domains to capture per-class temporal dynamics. Furthermore, intra-domain and inter-domain prototypical graphs are established to mine the temporal relationships between each input video and its corresponding intra-domain and inter-domain prototypes. In this way, a discriminative and domain-adaptive video representation is obtained by holistically reasoning over cross-domain temporal dynamics. To align the cross-domain video representations class by class, each action category is equipped with a customized class-specific domain discriminator for temporal alignment via adversarial learning. Extensive experiments on three benchmarks show that PGA yields state-of-the-art performance on the task of cross-domain action recognition.
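
The abstract outlines three concrete components: per-class segment-level prototypes, intra-/inter-domain prototypical graphs that relate a video's segments to those prototypes, and class-specific domain discriminators trained adversarially. The sketch below illustrates these ideas in PyTorch-style code under stated assumptions; the module names, tensor shapes, attention-based graph formulation, and gradient-reversal training are illustrative choices and are not taken from the paper's implementation.

# A minimal sketch of the three components described in the abstract,
# assuming a PyTorch-style pipeline. All names and shapes are illustrative.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    # Gradient reversal layer commonly used for adversarial domain alignment.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def segment_prototypes(features, labels, num_classes):
    # Average segment-level features per class: features (N, T, D), labels (N,).
    # Returns (C, T, D) per-class, per-segment prototypes.
    _, T, D = features.shape
    protos = torch.zeros(num_classes, T, D, device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos


class PrototypicalGraph(nn.Module):
    # Attention-style message passing between a video's segments and the
    # (intra- or inter-domain) prototype nodes.
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, segments, prototypes):
        # segments: (N, T, D); prototypes: (C, T, D) flattened into C*T nodes.
        nodes = prototypes.reshape(-1, prototypes.size(-1))
        attn = torch.softmax(
            self.q(segments) @ self.k(nodes).t() / segments.size(-1) ** 0.5,
            dim=-1)
        return segments + attn @ self.v(nodes)


class ClassSpecificDiscriminators(nn.Module):
    # One small domain discriminator per action class, so source/target
    # alignment is enforced class-wise rather than by a single global critic.
    def __init__(self, dim, num_classes):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(),
                           nn.Linear(dim // 2, 1))
             for _ in range(num_classes)])

    def forward(self, video_feat, labels, lambd=1.0):
        # video_feat: (N, D) pooled video representation; labels select heads.
        rev = GradReverse.apply(video_feat, lambd)
        logits = torch.stack([self.heads[c](rev[i])
                              for i, c in enumerate(labels.tolist())])
        return logits.squeeze(-1)  # (N,) domain logits for a BCE-style loss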
Date of Conference: 18-22 July 2022
Date Added to IEEE Xplore: 26 August 2022
Conference Location: Taipei, Taiwan
