
Motif-GCNs With Local and Non-Local Temporal Blocks for Skeleton-Based Action Recognition


Abstract:

Recent works have achieved remarkable performance for action recognition with human skeletal data by utilizing graph convolutional models. Existing models mainly focus on developing graph convolutional operations to encode the structural properties of a skeletal graph, whose topology is manually predefined and fixed over all action samples. Some recent works further take sample-dependent relationships among joints into consideration. However, the complex relationships between arbitrary pairs of joints are difficult to learn, and the temporal features between frames are not fully exploited by simply using traditional convolutions with small local kernels. In this paper, we propose a motif-based graph convolution method, which makes use of sample-dependent latent relations among non-physically connected joints to impose a high-order locality, and assigns different semantic roles to the physical neighbors of a joint to encode hierarchical structures. Furthermore, we propose a sparsity-promoting loss function to learn a sparse motif adjacency matrix for latent dependencies in non-physical connections. To extract effective temporal information, we propose an efficient local temporal block, which adopts partial dense connections to reuse temporal features in local time windows and enriches the variety of information flow through gradient combination. In addition, we introduce a non-local temporal block to capture global dependencies among frames. By integrating the local and non-local temporal blocks into the sparse motif-based graph convolutional networks (SMotif-GCNs), our model captures both local and non-local relationships spatially and temporally. Comprehensive experiments on four large-scale datasets show that our model outperforms the state-of-the-art methods. Our code is publicly available at https://github.com/wenyh1616/SAMotif-GCN.
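For illustration only, the sketch below shows, in PyTorch-style Python, two ideas named in the abstract: an L1 sparsity-promoting penalty on a learnable motif adjacency matrix, and a non-local block that applies self-attention across the temporal axis of an (N, C, T, V) skeleton tensor. This is a minimal, hypothetical rendering under assumed names and shapes, not the authors' released implementation; refer to the GitHub repository linked above for the actual code.

    # Hypothetical sketch of two abstract-level ideas; not the SAMotif-GCN code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sparsity_loss(motif_adj: torch.Tensor, weight: float = 1e-4) -> torch.Tensor:
        """L1 penalty encouraging a sparse motif adjacency for latent joint relations."""
        return weight * motif_adj.abs().mean()

    class NonLocalTemporalBlock(nn.Module):
        """Self-attention over the temporal axis of an (N, C, T, V) skeleton tensor."""

        def __init__(self, channels: int, reduction: int = 2):
            super().__init__()
            inter = channels // reduction
            self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query embedding
            self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key embedding
            self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value embedding
            self.out = nn.Conv2d(inter, channels, kernel_size=1)    # project back

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n, c, t, v = x.shape
            # Fold joints into the feature dim so attention runs across frames only.
            q = self.theta(x).permute(0, 2, 1, 3).reshape(n, t, -1)   # (N, T, C'*V)
            k = self.phi(x).permute(0, 2, 1, 3).reshape(n, t, -1)
            val = self.g(x).permute(0, 2, 1, 3).reshape(n, t, -1)
            attn = F.softmax(q @ k.transpose(1, 2) / (q.shape[-1] ** 0.5), dim=-1)  # (N, T, T)
            y = (attn @ val).reshape(n, t, -1, v).permute(0, 2, 1, 3)  # back to (N, C', T, V)
            return x + self.out(y)  # residual keeps the local temporal features intact

The residual connection lets the block add frame-to-frame global context on top of whatever a preceding local temporal block has extracted, which is consistent with the abstract's description of combining local and non-local temporal modeling.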
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 45, Issue: 2, 01 February 2023)
Page(s): 2009 - 2023
Date of Publication: 26 April 2022

PubMed ID: 35471870
