Network-Based H.264/AVC Whole-Frame Loss Visibility Model and Frame Dropping Methods

Authors: Yueh-Lun Chang (Dept. of Electrical & Computer Engineering, University of California, San Diego, CA, USA); Ting-Lan Lin; P. C. Cosman

We examine the visual effect of whole-frame loss under different decoders. Whole-frame losses are introduced in H.264/AVC compressed videos, which are then decoded by two decoders that use different common concealment methods: frame copy and frame interpolation. The videos are viewed by human observers, who respond to each glitch they spot. We find that about 39% of whole-frame losses of B frames are not observed by any of the subjects, and over 58% of the B-frame losses are observed by 20% or fewer of the subjects. Using simple predictive features that can be computed inside a network node, with no access to the original video and no pixel-level reconstruction of the frame, we develop models that predict the visibility of whole B-frame losses. The models are then used in a router to predict the visual impact of a frame loss and to perform intelligent frame dropping to relieve network congestion. Dropping frames based on their predicted visibility scores proves superior to random dropping of B frames.
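
To make the dropping policy concrete, here is a minimal Python sketch of visibility-score-based B-frame dropping compared against a random-dropping baseline. It is an illustration only, not the authors' implementation: the Frame structure, the logistic model, its weights, and the feature names (motion_energy, frame_size_bits) are hypothetical stand-ins for the bitstream-level features the paper computes without pixel reconstruction.

    import heapq
    import math
    import random
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Frame:
        # Frames sort by predicted visibility, so the least visible
        # losses are found first when the router must shed load.
        visibility: float                    # predicted fraction of viewers noticing the loss
        frame_id: int = field(compare=False)

    def predict_visibility(motion_energy: float, frame_size_bits: int) -> float:
        # Hypothetical logistic model over features readable from the
        # compressed bitstream inside a network node (no pixel decoding).
        # The weights below are placeholders, not the paper's fitted values.
        z = -2.0 + 0.8 * motion_energy + 1e-5 * frame_size_bits
        return 1.0 / (1.0 + math.exp(-z))

    def drop_least_visible(queue: list[Frame], n_drops: int) -> list[Frame]:
        # Intelligent dropping: shed the B frames whose loss the model
        # predicts viewers are least likely to notice.
        victims = {f.frame_id for f in heapq.nsmallest(n_drops, queue)}
        return [f for f in queue if f.frame_id not in victims]

    def drop_random(queue: list[Frame], n_drops: int) -> list[Frame]:
        # Baseline: drop B frames uniformly at random, ignoring
        # predicted visibility.
        victims = {f.frame_id for f in
                   random.sample(queue, min(n_drops, len(queue)))}
        return [f for f in queue if f.frame_id not in victims]

Under congestion, a router holding a queue of droppable B frames would call drop_least_visible with the number of frames it must shed; the paper's comparison corresponds to measuring the perceptual damage of this policy against drop_random.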

Published in: IEEE Transactions on Image Processing (Volume 21, Issue 8)