Action detection using multiple spatial-temporal interest point features

6 Author(s)
Liangliang Cao ; Beckman Institute, University of Illinois at Urbana-Champaign ; YingLi Tian ; Zicheng Liu ; Benjamin Yao
This paper considers the problem of detecting actions in cluttered videos. Compared with the classical action recognition problem, this paper aims to estimate not only the scene category of a given video sequence but also the spatial-temporal locations of the action instances. In recent years, many feature extraction schemes have been designed to describe various aspects of actions. However, due to the difficulties of action detection, e.g., cluttered backgrounds and potential occlusions, a single type of feature cannot solve the action detection problem perfectly in cluttered videos. In this paper, we attack the detection problem by combining multiple Spatial-Temporal Interest Point (STIP) features, which detect salient patches in the video domain and describe those patches with local region features. The difficulty of combining multiple STIP features for action detection is twofold. First, the number of salient patches detected by different STIP methods varies, and how to combine such features is not considered by existing fusion methods. Second, detection in videos must be efficient, which rules out many slow machine learning algorithms. To handle these two difficulties, we propose a new approach that combines a Gaussian Mixture Model with branch-and-bound search to efficiently locate the action of interest. We build a new, challenging dataset for our action detection task, on which our algorithm obtains impressive results. On the classical KTH dataset, our method outperforms state-of-the-art methods.
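The localization step the abstract describes, scoring video content with a Gaussian Mixture Model and then using branch-and-bound search to find the highest-scoring spatio-temporal extent, can be sketched in a simplified one-dimensional (temporal-only) form. The sketch below is a hypothetical illustration, not the paper's actual implementation: the function name `best_interval`, the use of per-frame GMM log-likelihood-ratio scores, and the ESS-style parameterization of candidate intervals are all assumptions.

```python
import heapq

def best_interval(scores):
    """Branch-and-bound search for the contiguous frame interval with the
    maximum total score (a 1D analogue of subvolume search for action
    detection).  scores[t] would be, e.g., a GMM log-likelihood ratio of
    action vs. background for frame t.  Returns (start, end, score)."""
    n = len(scores)
    prefix = [0.0]   # plain prefix sums -> exact score of an interval
    pos = [0.0]      # positive-part prefix sums -> admissible upper bound
    for v in scores:
        prefix.append(prefix[-1] + v)
        pos.append(pos[-1] + max(v, 0.0))

    def exact(s, e):
        return prefix[e + 1] - prefix[s]

    def push(heap, slo, shi, elo, ehi):
        # A state is a set of intervals: start in [slo, shi], end in [elo, ehi].
        if slo > shi or elo > ehi or slo > ehi:
            return  # no feasible interval (needs start <= end) in this set
        if slo == shi and elo == ehi:
            key = exact(slo, elo)          # singleton: key is the exact score
        else:
            key = pos[ehi + 1] - pos[slo]  # positive mass of the largest
                                           # contained interval bounds them all
        heapq.heappush(heap, (-key, slo, shi, elo, ehi))

    heap = []
    push(heap, 0, n - 1, 0, n - 1)
    while heap:
        negk, slo, shi, elo, ehi = heapq.heappop(heap)
        if slo == shi and elo == ehi:
            return slo, elo, -negk  # best-first pop of a singleton is optimal
        if shi - slo >= ehi - elo:  # branch: split the wider start-range
            m = (slo + shi) // 2
            push(heap, slo, m, elo, ehi)
            push(heap, m + 1, shi, elo, ehi)
        else:                       # ...or the wider end-range
            m = (elo + ehi) // 2
            push(heap, slo, shi, elo, m)
            push(heap, slo, shi, m + 1, ehi)
    return None
```

Because the upper bound never underestimates any interval in a state, the first singleton popped from the priority queue is provably the global optimum, typically after examining far fewer states than exhaustive search; the same argument extends to 3D subvolumes for joint spatial-temporal localization.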

Published in:

2010 IEEE International Conference on Multimedia and Expo (ICME)

Date of Conference:

19-23 July 2010