Robust Temporal Activity Templates Using Higher Order Statistics

Authors:
Briassouli, A. ; CERTH-ITI, Inf. & Telematics Inst., Thessaloniki, Greece ; Kompatsiaris, I.

A robust, theoretically founded approach for the extraction of temporal templates corresponding to areas of motion in video is presented. Higher order statistics (kurtosis) are employed to extract activity areas, i.e., binary masks indicating which pixels in a video are active. The kurtosis, applied to illumination changes modeled as Gaussians and as mixtures of Gaussians, is shown to be sensitive to outliers under both models, and thus correctly localizes active pixels. Activity areas are compared to existing, difference-based temporal templates, known as motion energy images, and the robustness of both categories of temporal templates to additive noise is analyzed theoretically. Experiments with numerous real videos with additive noise, both indoors and outdoors, are conducted to compare the robustness of the activity areas and motion energy images, as well as their temporal extensions, the activity history areas and motion history images. As expected from the theoretical analysis, the kurtosis-based activity areas prove to be more robust than the difference-based templates. Challenging videos containing occlusions, varying backgrounds, and shadows are also examined, and the proposed approach is shown to outperform the difference-based method in these cases as well, consistently providing reliable localization of activity under a wide range of difficult circumstances. The proposed approach yields good results at very low computational cost, without requiring prior knowledge about the scene or training of any kind.
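The core idea described above can be sketched in a few lines of NumPy: compute, for each pixel, the excess kurtosis of its illumination changes over time, and threshold its magnitude into a binary activity mask. For a Gaussian (static background) the excess kurtosis is near zero, while pixels whose intensities follow a mixture of Gaussians (moving content) exhibit heavy tails and large kurtosis. This is a minimal illustration of the kurtosis principle, not the paper's exact formulation: the use of simple interframe differences as the illumination-change signal and the threshold value are assumptions made here for the sketch.

```python
import numpy as np

def activity_mask(frames, thresh=1.0):
    """Binary activity area from per-pixel excess kurtosis.

    frames : ndarray of shape (T, H, W), grayscale video.
    thresh : illustrative cutoff on |excess kurtosis| (an assumption
             of this sketch, not a value from the paper).
    """
    # Illumination changes approximated by interframe differences.
    diffs = np.diff(frames.astype(np.float64), axis=0)   # (T-1, H, W)
    mu = diffs.mean(axis=0)
    var = diffs.var(axis=0) + 1e-12                      # guard divide-by-zero
    m4 = ((diffs - mu) ** 4).mean(axis=0)
    # Excess kurtosis: ~0 for Gaussian changes, large for mixtures/outliers.
    kurt = m4 / var**2 - 3.0
    return np.abs(kurt) > thresh
```

On a static scene with Gaussian sensor noise the per-pixel excess kurtosis concentrates around zero, so only pixels affected by motion (whose change distribution is a heavy-tailed mixture) exceed the threshold, which is what makes this statistic more robust to additive noise than raw difference-based templates.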

Published in:

IEEE Transactions on Image Processing (Volume: 18, Issue: 12)