
View-Invariant Action Recognition from Point Triplets


Authors:
Yuping Shen and Hassan Foroosh, University of Central Florida, Orlando

We propose a new view-invariant measure for action recognition. For this purpose, we introduce the idea that the motion of an articulated body can be decomposed into rigid motions of planes defined by triplets of body points. Using the fact that the homography induced by the motion of a triplet of body points in two identical pose transitions reduces to the special case of a homology, we use the equality of two of its eigenvalues as a measure of the similarity of the pose transitions between two subjects, observed by different perspective cameras and from different viewpoints. Experimental results show that our method can accurately identify human pose transitions and actions even when they include dynamic timeline maps and are obtained from totally different viewpoints with different, unknown camera parameters.
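
To make the eigenvalue-equality test concrete, the following is a minimal sketch (Python with NumPy, not the authors' code) of how a pose-transition pair could be scored once the homography H induced by a body-point triplet has been estimated elsewhere: a planar homology has a repeated eigenvalue, so the score below is small when two eigenvalues of H are nearly equal. The function name and the example matrix are illustrative assumptions.

import numpy as np

def homology_dissimilarity(H):
    # Small when two eigenvalues of the 3x3 homography H are nearly equal,
    # i.e. when H is close to a planar homology (illustrative helper, not
    # the authors' implementation).
    w = np.linalg.eigvals(H)
    pairs = [(0, 1), (0, 2), (1, 2)]
    # Relative differences are unaffected by the arbitrary projective scale of H.
    return min(abs(w[i] - w[j]) / max(abs(w[i]), abs(w[j])) for i, j in pairs)

# Example: a diagonal homography with a repeated eigenvalue is an exact homology.
H_example = np.diag([2.0, 2.0, 5.0])
print(homology_dissimilarity(H_example))  # prints 0.0

In the setting described above, H would be estimated from the corresponding triplet points across the two pose transitions, and a small score would indicate that the two subjects performed similar transitions despite being viewed by different cameras.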

Published in:

IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 31, Issue: 10)