Full-Reference Video Quality Assessment by Decoupling Detail Losses and Additive Impairments

Authors: Songnan Li (Dept. of Electron. Eng., Chinese Univ. of Hong Kong, Shatin, China); Lin Ma; King Ngi Ngan

Video quality assessment plays a fundamental role in video processing and communication applications. In this paper, we study the use of motion information and temporal human visual system (HVS) characteristics for objective video quality assessment. In our previous work, two types of spatial distortions, i.e., detail losses and additive impairments, are decoupled and evaluated separately for spatial quality assessment. Detail losses refer to the loss of useful visual information that affects content visibility, while additive impairments represent redundant visual information in the test image, such as the blocking or ringing artifacts caused by data compression. In this paper, a novel full-reference video quality metric is developed, which conceptually comprises the following processing steps: 1) decoupling detail losses and additive impairments within each frame for spatial distortion measurement; 2) analyzing the video motion and using HVS characteristics to simulate the human perception of the spatial distortions; and 3) taking into account cognitive human behaviors to integrate frame-level quality scores into a sequence-level quality score. Unlike most studies in the literature, the proposed method comprehensively investigates the use of motion information in the simulation of HVS processing, e.g., to model eye movement, to predict the spatio-temporal HVS contrast sensitivity, and to implement the temporal masking effect. Furthermore, we demonstrate the effectiveness of decoupling detail losses and additive impairments for video quality assessment. The proposed method is tested on two subjective video quality databases, LIVE and IVP, and demonstrates state-of-the-art performance in matching subjective ratings.
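The abstract outlines a three-stage pipeline: per-frame decoupling of detail losses and additive impairments, motion/HVS-based perception modeling, and temporal pooling of frame scores into a sequence score. The Python (NumPy) sketch below illustrates only that overall structure under simplified assumptions; decouple_frame, motion_weight, sequence_score, the global least-squares "restoration", and the worst-frame pooling rule are all placeholders introduced here for illustration, not the authors' actual metric.

    import numpy as np

    def decouple_frame(ref, dst):
        # Stand-in for the restoration step: the "restored" frame is a global
        # least-squares gain/offset mapping of the reference fitted to the
        # distorted frame (hypothetical; the paper uses a more elaborate filter).
        a, b = np.polyfit(ref.ravel(), dst.ravel(), 1)
        restored = a * ref + b
        detail_loss = np.abs(ref - restored).mean()   # useful content removed
        additive = np.abs(dst - restored).mean()      # spurious content added
        return detail_loss, additive

    def motion_weight(prev_ref, cur_ref):
        # Rough temporal-masking proxy: large frame-to-frame differences
        # (fast motion) reduce the visibility of spatial distortion, so they
        # down-weight that frame's score.
        motion = np.abs(cur_ref - prev_ref).mean()
        return 1.0 / (1.0 + motion)

    def sequence_score(ref_frames, dst_frames, alpha=0.8, worst_fraction=0.1):
        # Pool frame-level scores into one sequence-level score, emphasizing
        # the worst frames as a crude cognitive-pooling stand-in.
        scores = []
        prev = ref_frames[0]
        for ref, dst in zip(ref_frames, dst_frames):
            dl, ai = decouple_frame(ref, dst)
            w = motion_weight(prev, ref)
            scores.append(w * (dl + alpha * ai))     # combine the two channels
            prev = ref
        scores = np.sort(np.asarray(scores))[::-1]   # largest (worst) first
        k = max(1, int(len(scores) * worst_fraction))
        return scores[:k].mean()                     # higher = more distortion

In this sketch a higher score means more visible distortion; mapping such raw scores to subjective ratings (e.g., on LIVE or IVP) would require a separately fitted regression, as is standard practice for objective quality metrics.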

Published in:

IEEE Transactions on Circuits and Systems for Video Technology (Volume: 22, Issue: 7)

Date of Publication:

July 2012
