Boosted Exemplar Learning for Action Recognition and Annotation

5 Author(s)
Tianzhu Zhang ; Nat. Lab. of Pattern Recognition, Chinese Acad. of Sci., Beijing, China ; Jing Liu ; Si Liu ; Changsheng Xu
Human action recognition and annotation is an active research topic in computer vision. Modeling diverse actions, which vary in time resolution, visual appearance, and other factors, is a challenging task. In this paper, we propose a boosted exemplar learning (BEL) approach to model various actions in a weakly supervised manner, i.e., only bag-level action labels are provided, not instance-level ones. The proposed BEL method can be summarized in three steps. First, for each action category, a set of class-specific candidate exemplars is learned through an optimization formulation that considers both their discrimination and their co-occurrence. Second, each action bag is described as a set of similarities between its instances and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are determined by exemplar-based classifiers trained through multiple instance learning, in which a positive (or negative) video or image set is treated as a positive (or negative) action bag, and the frames similar to a given exemplar in Euclidean space are treated as action instances. Third, we formulate the selection of the most discriminative exemplars as a boosted feature selection problem and simultaneously obtain an action bag-based detector. Experimental results on two publicly available datasets, the KTH and Weizmann datasets, demonstrate the validity and effectiveness of the proposed approach for action recognition. We also apply BEL to learn action representations from images collected from the Web and use this knowledge to automatically annotate actions in YouTube videos. The results are impressive, showing that the proposed algorithm is also practical in unconstrained environments.
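To make the pipeline concrete, here is a minimal sketch of two of the steps above: describing each bag by its similarities to a set of exemplars, and selecting discriminative exemplars with AdaBoost-style decision stumps. This is an illustrative simplification, not the paper's implementation; the Gaussian similarity `exp(-d)`, the stump-based weak learners, and all function names below are assumptions made for the sketch.

```python
import numpy as np

def bag_features(bags, exemplars):
    """Describe each bag by its best similarity to every exemplar (MIL-style max pooling).

    bags: list of (n_instances, dim) arrays; exemplars: (n_exemplars, dim) array.
    Returns an (n_bags, n_exemplars) feature matrix.
    """
    feats = []
    for bag in bags:
        # Pairwise Euclidean distances between instances and exemplars: (n_inst, n_ex).
        d = np.linalg.norm(bag[:, None, :] - exemplars[None, :, :], axis=2)
        # Soft similarity; keep the best-matching instance per exemplar.
        feats.append(np.exp(-d).max(axis=0))
    return np.array(feats)

def boosted_exemplar_selection(X, y, n_rounds=3):
    """AdaBoost-style selection of discriminative exemplar-similarity features.

    X: (n_bags, n_exemplars) similarity features; y: labels in {-1, +1}.
    Returns a list of stumps (feature_index, threshold, sign, alpha).
    """
    n, m = X.shape
    w = np.full(n, 1.0 / n)            # uniform initial bag weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        # Exhaustive search for the weighted-error-minimizing stump.
        for j in range(m):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = max(err, 1e-10)          # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred) # re-weight misclassified bags upward
        w /= w.sum()
        stumps.append((j, thr, sign, alpha))
    return stumps
```

Each selected stump corresponds to one exemplar (its feature index), so the boosting rounds simultaneously pick the most discriminative exemplars and assemble a bag-level detector, mirroring the third step of BEL.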

Published in:

IEEE Transactions on Circuits and Systems for Video Technology (Volume: 21, Issue: 7)