
IEEE Journal of Selected Topics in Signal Processing

Issue 6 • Oct. 2012

Contents (19 items)
  • Table of contents

    Page(s): C1
    Freely Available from IEEE
  • IEEE Journal of Selected Topics in Signal Processing publication information

    Page(s): C2
    Freely Available from IEEE
  • A Message from the Vice President of Publications on New Developments in Signal Processing Society Publications

    Page(s): 613
    Freely Available from IEEE
  • Introduction to the Special Issue on New Subjective and Objective Methodologies for Audio and Visual Signal Processing

    Page(s): 614 - 615
    Freely Available from IEEE
  • Analysis of Public Image and Video Databases for Quality Assessment

    Page(s): 616 - 625

    Databases of images or videos annotated with subjective ratings constitute essential ground truth for training, testing, and benchmarking algorithms for objective quality assessment. More than two dozen such databases are now available in the public domain; they are presented and analyzed in this paper. We propose several criteria for quantitative comparisons of source content, test conditions, and subjective ratings, which are used as the basis for the ensuing analyses and discussion. This information will allow researchers to make more well-informed decisions about databases, and may also guide the creation of additional test material and the design of future experiments.

  • Image Retargeting Quality Assessment: A Study of Subjective Scores and Objective Metrics

    Page(s): 626 - 639

    This paper presents the results of a recent large-scale subjective study of image retargeting quality on a collection of images generated by several representative image retargeting methods. Owing to the many image retargeting approaches that have been developed, there is a need for a diverse, independent, and freely available public database of retargeted images and the corresponding subjective scores. We build an image retargeting quality database in which 171 retargeted images (obtained from 57 natural source images of different content) were created by several representative image retargeting methods. The perceptual quality of each image was subjectively rated by at least 30 viewers, and mean opinion scores (MOS) were obtained. The ratings show that viewers reached a reasonable agreement on the perceptual quality of the retargeted images, so the MOS values can be regarded as ground truth for evaluating the performance of quality metrics. The database is made publicly available (Image Retargeting Subjective Database, [Online]. Available: http://ivp.ee.cuhk.edu.hk/projects/demo/retargeting/index.html) to the research community in order to further research on the perceptual quality assessment of retargeted images. Moreover, the database is analyzed from the perspectives of retargeting scale, retargeting method, and source image content, and we discuss how to retarget images according to the scale requirement and source image attributes. Furthermore, several publicly available quality metrics for retargeted images are evaluated on the database, and the development of an effective quality metric for retargeted images is discussed through a specifically designed subjective testing process. It is demonstrated that metric performance can be further improved by fusing descriptors of shape distortion and content information loss.
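The MOS computation described in this abstract is simple to sketch. The following is a minimal illustration, not the authors' code; the function name and the 30-viewer check are our assumptions based on the abstract:

```python
from statistics import mean

def mos(ratings):
    """Mean opinion score: the average of the subjective ratings
    (e.g. on a 1-5 scale) collected for one retargeted image."""
    if len(ratings) < 30:
        # the study above had each image rated by at least 30 viewers
        raise ValueError("need at least 30 ratings per image")
    return mean(ratings)

# 15 viewers rate an image 3, 15 rate it 4 -> MOS of 3.5
print(mos([3] * 15 + [4] * 15))  # 3.5
```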

  • The Influence of Subjects and Environment on Audiovisual Subjective Tests: An International Study

    Page(s): 640 - 651

    Traditionally, audio quality and video quality are evaluated separately in subjective tests. Best practices within the quality assessment community were developed before many modern mobile audiovisual devices and services came into use, such as internet video, smart phones, tablets, and connected televisions. These devices and services raise unique questions that require jointly evaluating both the audio and the video within a subjective test. However, audiovisual subjective testing is a relatively under-explored field. In this paper, we address the question of determining the most suitable way to conduct audiovisual subjective testing over a wide range of audiovisual quality. Six laboratories from four countries conducted a systematic study of audiovisual subjective testing. The stimuli and scale were held constant across experiments and labs; only the environment of the subjective test was varied. Some subjective tests were conducted in controlled environments and some in public environments (a cafeteria, patio, or hallway). The audiovisual stimuli spanned a wide range of quality. Results show that these audiovisual subjective tests were highly repeatable from one laboratory and environment to the next. The number of subjects was the most important factor: based on this experiment, 24 or more subjects are recommended for Absolute Category Rating (ACR) tests, while in public environments 35 subjects were required to obtain the same Student's t-test sensitivity. The second most important variable was individual differences between subjects. Other environmental factors, such as language, country, lighting, background noise, wall color, and monitor calibration, had minimal impact. Analyses indicate that Mean Opinion Scores (MOS) are relative rather than absolute. Our analyses also show that the results of experiments done in pristine laboratory environments are highly representative of results for devices in actual use, in a typical user environment.
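The finding that 35 public-environment subjects match the t-test sensitivity of 24 laboratory subjects follows from how the standard error of a MOS difference shrinks with the number of raters. A back-of-the-envelope sketch (the function and the noise figures are illustrative assumptions, not values from the paper):

```python
import math

def detectable_mos_difference(n_subjects, rating_sd, t_crit=2.0):
    """Smallest MOS difference between two stimuli detectable at roughly
    the 5% level, assuming independent per-subject ratings with the given
    standard deviation (two-sample setting, t_crit ~ 2 for moderate n)."""
    se = rating_sd * math.sqrt(2.0 / n_subjects)  # SE of a MOS difference
    return t_crit * se

# More subjects -> smaller detectable difference (higher sensitivity).
# If public-environment ratings were noisier by a factor sqrt(35/24),
# 35 public subjects would match the sensitivity of 24 lab subjects:
lab = detectable_mos_difference(24, rating_sd=0.8)
public = detectable_mos_difference(35, rating_sd=0.8 * math.sqrt(35 / 24))
print(round(lab, 3), round(public, 3))  # the two thresholds coincide
```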

  • Video Quality Assessment on Mobile Devices: Subjective, Behavioral and Objective Studies

    Page(s): 652 - 671

    We introduce a new video quality database that models video distortions in heavily-trafficked wireless networks and that contains measurements of human subjective impressions of the quality of videos. The new LIVE Mobile Video Quality Assessment (VQA) database consists of 200 distorted videos created from 10 RAW HD reference videos, obtained using a RED ONE digital cinematographic camera. While the LIVE Mobile VQA database includes distortions that have been previously studied such as compression and wireless packet-loss, it also incorporates dynamically varying distortions that change as a function of time, such as frame-freezes and temporally varying compression rates. In this article, we describe the construction of the database and detail the human study that was performed on mobile phones and tablets in order to gauge the human perception of quality on mobile devices. The subjective study portion of the database includes both the differential mean opinion scores (DMOS) computed from the ratings that the subjects provided at the end of each video clip, as well as the continuous temporal scores that the subjects recorded as they viewed the video. The study involved over 50 subjects and resulted in 5,300 summary subjective scores and time-sampled subjective traces of quality. In the behavioral portion of the article we analyze human opinion using statistical techniques, and also study a variety of models of temporal pooling that may reflect strategies that the subjects used to make the final decision on video quality. Further, we compare the quality ratings obtained from the tablet and the mobile phone studies in order to study the impact of these different display modes on quality. We also evaluate several objective image and video quality assessment (IQA/VQA) algorithms with regards to their efficacy in predicting visual quality. A detailed correlation analysis and statistical hypothesis testing is carried out. 
Our general conclusion is that existing VQA algorithms are not well-equipped to handle distortions that vary over time. The LIVE Mobile VQA database, along with the subject DMOS and the continuous temporal scores, is being made available to researchers in the field of VQA at no cost in order to further research in the area of video quality assessment.
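The temporal-pooling models this abstract mentions combine per-frame quality into a single clip score. Two common strategies can be sketched as follows (illustrative only; the paper evaluates its own set of pooling models):

```python
def mean_pool(frame_scores):
    """Simple average of per-frame quality scores over time."""
    return sum(frame_scores) / len(frame_scores)

def worst_fraction_pool(frame_scores, fraction=0.1):
    """Average of the worst `fraction` of frames: viewers tend to weight
    severe transient drops (e.g. frame-freezes) more than the mean does."""
    k = max(1, int(len(frame_scores) * fraction))
    return sum(sorted(frame_scores)[:k]) / k

scores = [80, 82, 81, 20, 79, 83, 80, 81, 82, 80]  # one frame-freeze dip
print(mean_pool(scores))            # 74.8
print(worst_fraction_pool(scores))  # 20.0
```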

  • Content-Adaptive Packet-Layer Model for Quality Assessment of Networked Video Services

    Page(s): 672 - 683

    Packet-layer models are designed to use only the information provided by packet headers for real-time and non-intrusive quality monitoring of networked video services. This paper proposes a content-adaptive packet-layer (CAPL) model for networked video quality assessment. Considering the fact that the quality degradation of a networked video significantly relies on the temporal as well as the spatial characteristics of the video content, temporal complexity is incorporated in the proposed model. Due to very limited information directly available from packet headers, a simple and adaptive method for frame type detection is adopted in the CAPL model. The temporal complexity is estimated using the ratio of the number of bits for coding P and I frames. The estimated temporal complexity and frame type are incorporated in the CAPL model together with the information about the number of bits and positions of lost packets to obtain the quality estimate for each frame, by evaluating the distortions induced by both compression and packet loss. A two-level temporal pooling is employed to obtain the video quality given the frame quality. Using content related information, the proposed model is able to adapt to different video contents. Experimental results show that the CAPL model significantly outperforms the G.1070 model and the DT model in terms of widely used performance criteria, including the Root-Mean-Squared Error (RMSE), the Pearson Correlation Coefficient (PCC), the Spearman Rank Order Correlation Coefficient (SCC), and the Outlier Ratio (OR).
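The four performance criteria listed at the end of this abstract are standard for quality-metric evaluation and can be sketched in a few lines (a minimal stdlib version; real evaluations usually fit a nonlinear regression before computing PCC/RMSE and handle rank ties in SCC, which this sketch omits):

```python
import math
from statistics import mean

def rmse(pred, subj):
    """Root-mean-squared error between predicted and subjective scores."""
    return math.sqrt(mean((p - s) ** 2 for p, s in zip(pred, subj)))

def pcc(pred, subj):
    """Pearson linear correlation coefficient."""
    mp, ms = mean(pred), mean(subj)
    num = sum((p - mp) * (s - ms) for p, s in zip(pred, subj))
    den = math.sqrt(sum((p - mp) ** 2 for p in pred)
                    * sum((s - ms) ** 2 for s in subj))
    return num / den

def scc(pred, subj):
    """Spearman rank-order correlation: the PCC of the ranks (ties ignored)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pcc(ranks(pred), ranks(subj))

def outlier_ratio(pred, subj, ci):
    """Fraction of predictions outside the subjective CI half-width."""
    n_out = sum(abs(p - s) > c for p, s, c in zip(pred, subj, ci))
    return n_out / len(pred)
```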

  • Perceptual Video Compression: A Survey

    Page(s): 684 - 697

    With the advances in understanding perceptual properties of the human visual system and constructing their computational models, efforts toward incorporating human perceptual mechanisms in video compression to achieve maximal perceptual quality have received great attention. This paper thoroughly reviews the recent advances of perceptual video compression mainly in terms of the three major components, namely, perceptual model definition, implementation of coding, and performance evaluation. Furthermore, open research issues and challenges are discussed in order to provide perspectives for future research trends.

  • Stereoscopic Depth Cues Outperform Monocular Ones on Autostereoscopic Display

    Page(s): 698 - 709

    The aim of this study is two-fold: first, to compare how certain visual aids contribute to depth estimation tasks on a portable autostereoscopic display; and second, to examine how these depth cues affect perceived quality. These questions were studied in a quantitative subjective experiment using a portable autostereoscopic display in a controlled laboratory environment. Test participants estimated object depths in three-dimensional images in which 2D cues, 3D cues, or their combinations were provided. The study used three different compression levels in order to examine how image quality affects the perception of depth. The results indicate that the depth estimation task is completed faster when participants can rely on stereoscopic depth cues than when only monocular cues are present, and also faster with higher-quality images. Depth estimation accuracy was likewise higher with stereoscopic depth cues than with monocular ones. These results suggest that the human visual system makes more reliable depth estimates on portable autostereoscopic displays when stereoscopic cues are present. However, the quality evaluations indicate that, as previous studies have also reported, added stereoscopic depth does not seem to increase subjective image quality.

  • Evaluating Depth Perception of 3D Stereoscopic Videos

    Page(s): 710 - 720

    3D video quality of experience (QoE) is a multidimensional problem: many factors, such as image quality, depth perception, and visual discomfort, contribute to the global rating. Because of this multidimensionality, this paper proposes that, as a complement to assessing the quality degradation due to coding or transmission, the appropriateness of the non-distorted signal should also be addressed. One important factor here is the depth information provided by the source sequences. From an application perspective, the depth characteristics of source content are relevant for pre-validating whether the content is suitable for 3D video services. In addition, assessing the interplay between binocular and monocular depth features and depth perception is a relevant topic for 3D video perception research. To evaluate the suitability of 3D content, this paper describes both a subjective experiment and a new objective indicator that assess depth as one of the added values of 3D video.

  • Analyzing Speech Quality Perception Using Electroencephalography

    Page(s): 721 - 731

    Common speech quality evaluation methods rely on self-reported opinions after perceiving test stimuli. While these methods, when carefully applied, provide valid and reliable quality indices, they offer little insight into the processes underlying perception and judgment. In this paper, we analyze the performance of electroencephalography (EEG) for indicating different types of degradations in speech stimuli. We show that a particular EEG technique, event-related potentials (ERP) analysis, is a useful and valid tool in quality research. Three experiments are reported which show that quality degradations can be monitored in conscious and presumably non-conscious stages of processing. The potential and limitations of the approach are discussed, and lines of future research are drawn.

  • IEEE Journal of Selected Topics in Signal Processing information for authors

    Page(s): 732 - 733
    Freely Available from IEEE
  • IEEE Xplore Digital Library [advertisement]

    Page(s): 734
    Freely Available from IEEE
  • IEEE Foundation

    Page(s): 735
    Freely Available from IEEE
  • Quality without compromise [advertisement]

    Page(s): 736
    Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Page(s): C3
    Freely Available from IEEE
  • [Blank page - back cover]

    Page(s): C4
    Freely Available from IEEE

Aims & Scope

The Journal of Selected Topics in Signal Processing (J-STSP) solicits special issues on topics that cover the entire scope of the IEEE Signal Processing Society including the theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals by digital or analog devices or techniques.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Fernando Pereira
Instituto Superior Técnico