Modeling Scene and Object Contexts for Human Action Retrieval With Few Examples

Authors: Yu-Gang Jiang, Zhenguo Li, Shih-Fu Chang (Department of Electrical Engineering, Columbia University, New York, NY, USA)

Abstract:

The use of context knowledge is critical for understanding human actions, which typically occur under particular scene settings with certain object interactions. For instance, driving a car usually happens outdoors, and kissing involves two people moving toward each other. In this paper, we investigate the problem of context modeling for human action retrieval. We first identify ten simple object-level action atoms relevant to many human actions, e.g., people getting closer. With the action atoms and several background scene classes, we show that action retrieval can be improved through modeling action-scene-object dependency. An algorithm inspired by the popular semi-supervised learning paradigm is introduced for this purpose. One important contribution of this paper is to show that modeling the dependencies among actions, objects, and scenes can be efficiently achieved with very few examples. Such a solution has tremendous potential in practice, as it is often expensive to acquire large sets of training data. Experiments were performed on the challenging Hollywood2 dataset containing 89 movies. The results validate the effectiveness of our approach, achieving a mean average precision of 26% with just ten examples per action.
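To illustrate the kind of semi-supervised paradigm the abstract alludes to, the sketch below runs graph-based label propagation on a toy similarity graph. This is a generic illustration, not the paper's actual algorithm: the graph `W`, the clip indices, and the choice of the closed-form propagation F = (I - alpha*S)^(-1) Y are all assumptions made for the example.

```python
import numpy as np

# Toy graph over 6 hypothetical video clips. An edge means two clips share
# context (similar scene / object atoms). Clips 0-2 form one cluster,
# clips 3-5 another, bridged by the edge 2-3. All values are illustrative.
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Symmetrically normalize the graph: S = D^{-1/2} W D^{-1/2}.
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))

# Very few labeled examples: one positive clip (0) and one negative clip (5).
Y = np.zeros(6)
Y[0], Y[5] = 1.0, -1.0

# Closed-form label propagation: F = (I - alpha*S)^{-1} Y.
alpha = 0.9
F = np.linalg.solve(np.eye(6) - alpha * S, Y)

# Unlabeled clips in the positive clip's cluster inherit high scores,
# which yields a retrieval ranking from just two labels.
ranking = np.argsort(-F)
print(ranking)  # → [0 1 2 3 4 5]
```

With only two labels, the graph structure alone separates the two clusters; in the paper's setting the graph would additionally encode action-scene-object dependencies rather than a hand-written adjacency matrix.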

Published in:

IEEE Transactions on Circuits and Systems for Video Technology (Volume: 21, Issue: 5)