Recognizing human action and identity based on affine-SIFT

2 Author(s)
Zhuo Zhang ; Key Lab. of Network & Inf. Security Eng., Univ. of Armed Police Force, Xi'an, China ; Jia Liu

This paper presents a novel method based on the Affine-SIFT detector to capture motion for human action recognition. More specifically, we propose a new action representation based on computing a rich set of descriptors from Affine-SIFT (ASIFT) key point trajectories. Most previous approaches to human action recognition focus on action classification or localization and typically ignore information about human identity. We propose using quantized local SIFT descriptors to represent human identity. A compact yet discriminative semantic visual vocabulary is built with a latent topic model for high-level representation. Given a novel video sequence, our algorithm can not only categorize the human actions contained in the video, but also verify the persons who perform them. We test our algorithm on two datasets: the KTH human motion dataset and our own action dataset. Our results reflect the promise of our approach.
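
The identity representation described in the abstract rests on quantizing local SIFT descriptors into a visual vocabulary. Below is a minimal sketch of that bag-of-visual-words step using OpenCV and scikit-learn; the function names and the vocab_size parameter are illustrative assumptions, and the ASIFT trajectory descriptors and latent topic model from the paper are not reproduced here.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def extract_sift_descriptors(frames):
    """Collect 128-D SIFT descriptors from a list of grayscale frames."""
    sift = cv2.SIFT_create()
    descriptors = []
    for frame in frames:
        _, desc = sift.detectAndCompute(frame, None)
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors) if descriptors else np.empty((0, 128), np.float32)

def build_vocabulary(all_descriptors, vocab_size=200):
    """Quantize descriptors into a visual vocabulary with k-means (vocab_size is an assumed setting)."""
    kmeans = MiniBatchKMeans(n_clusters=vocab_size, random_state=0)
    kmeans.fit(all_descriptors)
    return kmeans

def bag_of_words(descriptors, kmeans):
    """L1-normalized histogram of visual-word occurrences for one video clip."""
    words = kmeans.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
    return hist / max(hist.sum(), 1)
```

In such a pipeline, the normalized histograms would serve as fixed-length features for a downstream classifier (e.g. an SVM) or as word counts fed to a topic model for the higher-level representation the abstract mentions.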

Published in:

2012 IEEE Symposium on Electrical & Electronics Engineering (EEESYM)

Date of Conference:

24-27 June 2012