Video Processing Via Implicit and Mixture Motion Models

Author: Xin Li (West Virginia University, Morgantown)

In this paper, we present an alternative framework for video processing without explicit motion estimation or segmentation. Motivated by the geometric constraint of motion trajectory, we propose an adaptive filtering-based model for video signals in which filter coefficients are locally estimated by the least-squares method. Such localized estimation can be viewed as an implicit approach to exploiting motion-related temporal dependency. We also introduce the concept of a virtual camera to further improve the modeling capability by exploiting the fundamental tradeoff between space and time. Using mixture models, we show how to probabilistically fuse the inference results obtained from virtual cameras in order to achieve spatio-temporal adaptation. The implicit and mixture motion model supplements the existing paradigm and provides a unified solution to a wide range of low-level vision problems including video dejittering, impulse removal, error concealment, video coding, and temporal interpolation.
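The localized least-squares estimation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, window sizes, and the pure-translation test frame are all assumptions chosen for clarity. The idea it demonstrates is that fitting linear prediction coefficients over a small training window implicitly captures motion, without any motion search.

```python
import numpy as np

def ls_predict(prev, cur, y, x, win=7, nbr=1):
    """Predict cur[y, x] from a (2*nbr+1)^2 spatial neighborhood in prev.

    Filter coefficients are fit by least squares over a win x win local
    training window of known samples, so temporal dependency (motion) is
    exploited implicitly rather than estimated explicitly.
    Hypothetical sketch; parameter choices are illustrative.
    """
    half = win // 2
    A, b = [], []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            if dy == 0 and dx == 0:
                continue  # the target sample itself is not used for training
            yy, xx = y + dy, x + dx
            # one training pair: prev-frame neighborhood -> cur-frame pixel
            A.append(prev[yy - nbr:yy + nbr + 1, xx - nbr:xx + nbr + 1].ravel())
            b.append(cur[yy, xx])
    coef, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    # apply the locally fitted filter at the target location
    return float(prev[y - nbr:y + nbr + 1, x - nbr:x + nbr + 1].ravel() @ coef)

rng = np.random.default_rng(0)
prev = rng.standard_normal((32, 32))
cur = np.roll(prev, -1, axis=0)      # next frame: a one-pixel vertical translation
pred = ls_predict(prev, cur, 15, 15)
```

On a pure translation, the least-squares fit concentrates its weight on the displaced tap, so the prediction tracks the motion even though no motion vector was ever computed; the same machinery adapts to more complex local motion by reweighting the taps.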

Published in: IEEE Transactions on Circuits and Systems for Video Technology (Volume 17, Issue 8)