
A Context Knowledge Map Guided Coarse-to-Fine Action Recognition


Abstract:

Human actions span a wide variety and a large number of categories, which poses a significant challenge for action recognition. However, according to similarities in human body poses, scenes, and interactive objects, human actions can be grouped into semantic groups, e.g., sports, cooking, etc. Therefore, in this paper, we propose a novel approach that recognizes human actions from coarse to fine. Taking full advantage of high-level semantic contexts, a context knowledge map guided recognition method is designed to realize the coarse-to-fine procedure. In the approach, we define semantic contexts with interactive objects, scenes, and body motions in action videos, and build a context knowledge map to automatically define coarse-grained groups. Fine-grained classifiers are then proposed to realize accurate action recognition. The coarse-to-fine procedure narrows the set of action categories handled by each target classifier, which benefits recognition performance. We evaluate the proposed approach on the CCV, HMDB-51, and UCF-101 datasets. Experiments verify its significant effectiveness: on average, it improves recognition precision by more than 5% over current approaches. Compared with the state of the art, it also obtains outstanding performance, achieving accuracies of 93.1%, 95.4%, and 74.5% on the CCV, UCF-101, and HMDB-51 datasets, respectively.
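To make the coarse-to-fine idea described in the abstract concrete, the following is a minimal, hypothetical sketch: a coarse classifier first assigns a video feature to a semantic group (e.g., "sports", "cooking"), and a group-specific fine classifier then picks the action label from that group's reduced category set. The class names, the use of logistic regression, and the dictionary standing in for the context knowledge map are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a two-stage coarse-to-fine recognizer (not the paper's code).
# Assumes precomputed video features and that every coarse group contains at
# least two action categories.
import numpy as np
from sklearn.linear_model import LogisticRegression

class CoarseToFineRecognizer:
    def __init__(self, group_of_action):
        # group_of_action: dict mapping a fine action label -> its coarse
        # semantic group, playing the role of the context knowledge map.
        self.group_of_action = group_of_action
        self.coarse_clf = LogisticRegression(max_iter=1000)
        self.fine_clfs = {}

    def fit(self, features, action_labels):
        action_labels = np.asarray(action_labels)
        groups = np.array([self.group_of_action[a] for a in action_labels])
        # Stage 1: coarse classifier over semantic groups.
        self.coarse_clf.fit(features, groups)
        # Stage 2: one fine classifier per group, trained only on that
        # group's samples, so each classifier sees a narrowed label set.
        for g in np.unique(groups):
            mask = groups == g
            clf = LogisticRegression(max_iter=1000)
            clf.fit(features[mask], action_labels[mask])
            self.fine_clfs[g] = clf

    def predict(self, features):
        # Route each sample through its predicted group's fine classifier.
        groups = self.coarse_clf.predict(features)
        return [self.fine_clfs[g].predict(x.reshape(1, -1))[0]
                for g, x in zip(groups, features)]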
Published in: IEEE Transactions on Image Processing (Volume: 29)
Page(s): 2742 - 2752
Date of Publication: 12 November 2019

PubMed ID: 31725381
