IEEE Journal of Selected Topics in Signal Processing

Issue 7 • Nov. 2011

  • Table of contents

    Page(s): C1
    PDF (34 KB) | Freely Available from IEEE
  • IEEE Journal of Selected Topics in Signal Processing publication information

    Page(s): C2
    PDF (34 KB) | Freely Available from IEEE
  • Introduction to the Issue on Emerging Technologies for Video Compression

    Page(s): 1277 - 1281
    PDF (901 KB) | Freely Available from IEEE
  • Combined Intra-Prediction for High-Efficiency Video Coding

    Page(s): 1282 - 1289
    PDF (1191 KB) | HTML

    New activities in the video coding community focus on delivering technologies that will enable economical handling of future visual formats at very high quality. The key requirement of these new visual systems is highly efficient compression of such content. In that context, this paper presents a novel approach to intra-prediction in video coding based on the combination of spatial closed-loop and open-loop predictions. This new tool, called Combined Intra-Prediction (CIP), enables better prediction of frame pixels, which is desirable for efficient video compression. The proposed tool addresses both rate-distortion performance enhancement and the low-complexity requirements imposed on codecs for targeted high-resolution content. The novel perspective CIP offers is that of exploiting redundancy not only between neighboring blocks but also within a coding block. While the proposed tool provides yet another way to exploit spatial redundancy within video frames, its main strength is that it is inexpensive and simple to implement, a crucial requirement for video coding of demanding sources. As shown in this paper, CIP can be flexibly modeled to support various coding settings, providing a gain of up to 4.5% YUV BD-rate for the video sequences in the challenging High-Efficiency Video Coding Test Model.

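The YUV BD-rate gain quoted in the abstract above is computed with the standard Bjøntegaard delta-rate metric. As a minimal sketch of that calculation (assuming four rate/PSNR measurement points per codec, as is conventional; this is an illustration, not the paper's code):

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate: average % bitrate difference at equal quality.
    Fits a cubic to log-rate as a function of PSNR for each codec and
    integrates over the overlapping PSNR range."""
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))   # overlapping PSNR interval
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_log_diff) - 1) * 100.0  # negative = bitrate saving
```

A test codec whose rate-PSNR curve sits 10% below the reference at every quality level yields a BD-rate of about -10%.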
  • New Video Coding Scheme Optimized for High-Resolution Video Sources

    Page(s): 1290 - 1297
    PDF (1567 KB) | HTML

    This paper presents the design of a new video coding scheme targeting substantial compression performance with reasonable complexity for next-generation high-resolution video sources. While it takes a conventional block-based MC+DCT hybrid coding approach suitable for hardware implementation of a high-resolution video codec, the proposed scheme achieves significant improvements in coding efficiency by introducing various technical optimizations in each functional block, in particular allowing larger blocks for motion compensation and transform than conventional standards. According to our experimental analysis, the proposed scheme achieves approximately 26% bit-rate savings on average compared to the state-of-the-art AVC/H.264 High Profile. We also study the tradeoff between the complexity and performance of the codec with a view towards its standardization and implementation.

  • Exploiting Non-Local Correlation via Signal-Dependent Transform (SDT)

    Page(s): 1298 - 1308
    PDF (2289 KB) | HTML

    Over the past few decades, many studies on image and video compression have explored various approaches to exploiting spatial and temporal local correlations. However, we believe more efficient methods are needed to advance image and video compression. In this paper, we first study spatial non-local correlation and show that strong correlations exist in non-local regions. It is, however, difficult to exploit these non-local correlations while keeping overhead to a minimum. To solve this problem, we propose the signal-dependent transform (SDT), which is derived from decoded non-local blocks selected by matching neighboring pixels. Since the encoder and decoder can derive the proposed transform by the same procedure, the overhead is eliminated entirely. Finally, we have implemented the proposed transform in the Key Technology Area (KTA) software to exploit both spatial and temporal non-local correlations. The experimental results show coding gains over KTA of up to 1.4 dB in intra-frame coding and up to 1.0 dB in inter-frame coding. We believe this provides an effective alternative route to improved image and video compression.

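As a rough sketch of the idea above (not the authors' exact derivation): once candidate non-local blocks have been selected by template matching on neighboring reconstructed pixels, a Karhunen-Loève-style transform can be derived from their sample covariance; because encoder and decoder see the same decoded blocks, no transform needs to be signaled. The block-selection step itself is omitted here, and the function names are illustrative:

```python
import numpy as np

def signal_dependent_transform(candidate_blocks):
    """Derive an orthonormal transform basis from decoded non-local blocks
    (a KLT of their sample covariance). The template-matching step that
    selects candidate_blocks is assumed to have happened already."""
    X = np.stack([np.asarray(b, dtype=float).ravel() for b in candidate_blocks])
    X -= X.mean(axis=0)                      # center the samples
    cov = X.T @ X / len(candidate_blocks)    # sample covariance matrix
    w, v = np.linalg.eigh(cov)               # eigenvalues ascending
    return v[:, ::-1]                        # columns: descending-energy basis

def transform_block(T, block):
    """Project a pixel/residual block onto the derived basis."""
    return T.T @ np.asarray(block, dtype=float).ravel()
```

Since `eigh` returns an orthonormal eigenbasis, the derived transform is orthonormal by construction, so the decoder inverts it with a plain transpose.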
  • Rate-Distortion Optimized Video Coding Using Automatic Sprites

    Page(s): 1309 - 1321
    PDF (1817 KB) | HTML

    Object-based video coding has been of interest for many years. Recent work by the authors has shown that a video coding system using background sprites with automatic segmentation and background subtraction can indeed outperform the H.264/AVC coder. In this paper, we extend this work by developing a rate-distortion optimization approach for an object-based coder. A key issue addressed by this approach is the joint choice of quantization parameters for the foreground and background. The performance of the resulting rate-distortion-optimized coder is far superior to that of H.264/AVC for source material with dominant global motion, across a range of bit rates.

  • Subjective Quality Evaluation of Foveated Video Coding Using Audio-Visual Focus of Attention

    Page(s): 1322 - 1331
    PDF (1039 KB) | HTML

    This paper presents a foveated coding method using the audio-visual focus of attention and evaluates it through extensive subjective experiments on both standard-definition and high-definition sequences. Treating a sound-emitting region as the location drawing human attention, the method applies varying quality levels across an image frame according to each pixel's distance from the identified sound source. Two experiments demonstrate the efficiency of the method. Experiment 1 examines its validity and effectiveness in comparison to constant-quality coding under high-quality conditions. In Experiment 2, the method is compared to fixed-bit-rate coding under low-quality conditions where coding artifacts are noticeable. The results demonstrate that the foveated coding method provides considerable coding gain without significant quality degradation, although the uneven distributions of coding artifacts (blockiness) produced by the method are often less preferred than a uniform distribution of artifacts. Additional findings are also discussed, such as the content dependence of the method's performance, the memory effect in multiple viewings, and differences in quality perception across frame sizes.

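The core mechanism above can be sketched as a per-block quantization map whose QP grows with distance from the sound-source location. All parameter values here (base QP, ramp slope, foveal radius) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def foveated_qp_map(frame_h, frame_w, source_xy, base_qp=22, max_qp=40,
                    fovea_radius=100.0, block=16):
    """Per-block QP map: quality decays with distance from the audio-visual
    focus of attention at pixel coordinates source_xy = (x, y)."""
    ys = np.arange(block // 2, frame_h, block)   # block-center rows
    xs = np.arange(block // 2, frame_w, block)   # block-center columns
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    d = np.hypot(yy - source_xy[1], xx - source_xy[0])
    # linear QP ramp beyond the foveal radius, clipped to the allowed range
    qp = base_qp + (d - fovea_radius) / fovea_radius * 6
    return np.clip(np.round(qp), base_qp, max_qp).astype(int)
```

For a CIF frame (352x288) with the source near the center, blocks inside the foveal radius keep the base QP while peripheral blocks are quantized more coarsely.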
  • Towards a New Quality Metric for 3-D Synthesized View Assessment

    Page(s): 1332 - 1343
    PDF (1934 KB) | HTML

    3DTV technology has raised new challenges, such as the evaluation of synthesized views. Synthesized views are generated through a depth image-based rendering (DIBR) process, which induces new types of artifacts whose impact on visual quality must be identified across various contexts of use. While visual quality assessment has been the subject of many studies over the last 20 years, open questions remain in the face of new technologies. DIBR brings new challenges mainly because it introduces geometric distortions. This paper considers the problem of evaluating DIBR-based synthesized views. Several experiments have been carried out. They question the protocols of subjective assessment and the reliability of objective quality metrics in the context of 3DTV, under these specific conditions (DIBR-based synthesized views), and they assess seven different view synthesis algorithms through subjective and objective measurements. Results show that the usual metrics are not sufficient for assessing 3-D synthesized views, since they do not correctly reflect human judgment. Synthesized views contain specific artifacts located around the disoccluded areas, but the usual metrics seem unable to express the degree of annoyance perceived in the whole image. This study provides hints towards a new objective measure. Two approaches are proposed: the first is based on analyzing the shifts of the contours of the synthesized view; the second is based on computing a mean SSIM score over the disoccluded areas.

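The second proposed approach (a mean SSIM score over the disoccluded areas) can be sketched as follows. This is a simplified block-wise SSIM restricted to a disocclusion mask, not the paper's implementation; the constants follow the usual choice for 8-bit images (K1=0.01, K2=0.03, L=255):

```python
import numpy as np

def block_ssim(x, y, C1=6.5025, C2=58.5225):
    """SSIM of one block pair: luminance, contrast and structure terms."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

def disocclusion_ssim(ref, synth, mask, block=8):
    """Mean SSIM over blocks that touch disoccluded pixels (mask=True)."""
    scores = []
    h, w = ref.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            if mask[i:i + block, j:j + block].any():
                scores.append(block_ssim(ref[i:i + block, j:j + block].astype(float),
                                         synth[i:i + block, j:j + block].astype(float)))
    return float(np.mean(scores)) if scores else float("nan")  # nan: no disoccluded block
```

An identical reference and synthesized view score 1.0 over any mask; distortions confined to disoccluded regions pull the score down even when full-frame metrics barely move.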
  • Depth Map Coding Based on Synthesized View Distortion Function

    Page(s): 1344 - 1352
    PDF (1222 KB) | HTML

    This paper presents an efficient depth map coding method based on a newly defined rendered-view distortion function. In contrast to conventional depth map coding, in which distortion is measured only by the coding error in the depth map itself, the proposed scheme focuses on the quality of the virtually synthesized view by involving co-located color information. In detail, the proposed distortion function estimates rendered-view quality, using an area-based scheme to accurately mimic the warping/view-rendering process. Moreover, the coding performance of the proposed distortion metric is further improved by an additional SKIP mode derived from co-located color coding information. Simulation results show that the proposed scheme achieves approximately 30% bit-rate savings for depth data and about 10% bit-rate savings for overall multi-view data.

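The intuition behind such a synthesized-view distortion function can be sketched as follows: a depth coding error displaces a warped pixel horizontally, so its visual cost scales with the local horizontal color gradient. The constant k (lumping camera baseline and focal-length terms) and the squared-error form are assumptions for illustration, not the paper's exact function:

```python
import numpy as np

def synthesized_view_distortion(depth_orig, depth_coded, color, k=0.1):
    """Estimate rendered-view distortion from a depth coding error.
    A depth error dZ shifts a warped pixel by roughly k * dZ columns,
    so the induced view error is approximated by that displacement
    times the local horizontal color gradient."""
    disp_err = k * (np.asarray(depth_orig, dtype=float) -
                    np.asarray(depth_coded, dtype=float))
    grad_x = np.abs(np.gradient(np.asarray(color, dtype=float), axis=1))
    return float(np.sum((disp_err * grad_x) ** 2))
```

Two sanity properties follow directly: lossless depth coding costs nothing, and depth errors over flat (gradient-free) color regions also cost nothing, which is why such metrics tolerate depth error where it is visually harmless.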
  • Models for Static and Dynamic Texture Synthesis in Image and Video Compression

    Page(s): 1353 - 1365
    PDF (3926 KB) | HTML

    In this paper, we investigate the use of linear, parametric models of static and dynamic texture in the context of conventional transform coding of images and video. We propose a hybrid approach incorporating both conventional transform coding and texture-specific methods to improve coding efficiency. For static (i.e., purely spatial) texture, we show that Gaussian Markov random fields (GMRFs) can be used for analysis/synthesis of a certain class of texture. The properties of this model allow us to derive optimal methods for classification, analysis, quantization, and synthesis. For video containing dynamic textures, a linear dynamic model can be derived from frames encoded in a conventional fashion. We show that, after removing camera-motion effects, this model can be used to synthesize further frames. Beyond that, we show that using synthesized frames appropriately for prediction leads to significant bitrate savings while preserving the same peak signal-to-noise ratio (PSNR) for sequences containing dynamic textures.

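The linear dynamic model mentioned above can be sketched in the standard dynamic-texture style: vectorize the decoded frames, take a rank-q SVD for the appearance basis, and fit the state transition by least squares. The dimensions and function names here are illustrative assumptions, not the authors' code:

```python
import numpy as np

def fit_dynamic_texture(frames, q=10):
    """Fit a linear dynamic texture model to decoded frames:
    y_t = C x_t + mean,  x_{t+1} = A x_t (state dimension q)."""
    Y = np.stack([f.ravel().astype(float) for f in frames], axis=1)
    mean = Y.mean(axis=1, keepdims=True)
    U, S, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    C = U[:, :q]                               # appearance basis
    X = np.diag(S[:q]) @ Vt[:q]                # latent state trajectory
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # transition, least squares
    return C, A, X, mean

def synthesize_next(C, A, x_last, mean, shape):
    """Predict/synthesize one frame beyond the fitted sequence."""
    x_next = A @ x_last
    return (C @ x_next + mean.ravel()).reshape(shape), x_next
```

Iterating `synthesize_next` extrapolates further texture frames, which is the sense in which synthesized frames can serve as prediction references.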
  • Segmentation-Based Video Compression Using Texture and Motion Models

    Page(s): 1366 - 1377
    PDF (1000 KB) | HTML

    In recent years, there has been growing interest in novel techniques for increasing the coding efficiency of video compression methods. One approach is to use texture and motion models of the content in a scene. Based on these models, parts of the video frame are not coded, or "skipped", by a classical motion-compensated coder; the models are then used at the decoder to reconstruct the missing or skipped regions. In this paper, we describe several spatial texture models for video coding. We investigate several texture features in combination with two segmentation strategies in order to detect texture regions in a video sequence. These detected areas are not encoded using motion-compensated coding; instead, the model parameters are sent to the decoder as side information. After decoding, frame reconstruction is done by inserting the skipped texture areas into the decoded frames. Using a similar approach, we consider motion models based on human visual motion perception. We describe a motion classification model that separates foreground objects containing noticeable motion from the background. This motion model is then used in the encoder, again allowing regions to be skipped rather than coded by a motion-compensated encoder. Our results indicate a significant increase in coding efficiency compared to the spatial texture-based methods. Finally, we discuss the effects and tradeoffs of these techniques based on perceptual experiments and show that in many cases the coding efficiency can be increased by up to 25% at a fixed perceptual quality.

  • A Parametric Framework for Video Compression Using Region-Based Texture Models

    Page(s): 1378 - 1392
    PDF (2871 KB) | HTML

    This paper presents a novel means of video compression based on texture warping and synthesis. Instead of encoding whole images or prediction residuals after translational motion estimation, our algorithm employs a perspective motion model to warp static textures and uses texture synthesis to create dynamic textures. Texture regions are segmented using features derived from the complex wavelet transform and further classified according to their spatial and temporal characteristics. Moreover, a compatible artifact-based video metric (AVM) is proposed to evaluate the quality of the reconstructed video; it is also employed in-loop to prevent warping and synthesis artifacts. The proposed algorithm has been integrated into an H.264 video coding framework. The results show significant bitrate savings of up to 60% compared with H.264 at the same objective quality (based on AVM) and subjective scores.

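A perspective motion model, as used above to warp static textures, is a 3x3 homography applied in homogeneous coordinates. A minimal sketch of warping one pixel position under such a model (the matrix H itself would come from motion estimation, which is omitted here):

```python
import numpy as np

def warp_point(H, x, y):
    """Map pixel (x, y) through a 3x3 perspective (homography) motion model.
    The homogeneous result is normalized by its third coordinate."""
    p = np.asarray(H, dtype=float) @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

A pure translation corresponds to H = [[1, 0, tx], [0, 1, ty], [0, 0, 1]]; the extra degrees of freedom in the first two rows and the bottom row are what let the model capture the non-translational (affine and perspective) motion that block translation misses.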
  • IEEE Journal of Selected Topics in Signal Processing Information for authors

    Page(s): 1393 - 1394
    PDF (34 KB) | Freely Available from IEEE
  • IEEE Foundation [advertisement]

    Page(s): 1395
    PDF (320 KB) | Freely Available from IEEE
  • Have you visited lately? www.ieee.org [advertisement]

    Page(s): 1396
    PDF (210 KB) | Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Page(s): C3
    PDF (34 KB) | Freely Available from IEEE
  • Blank page [back cover]

    Page(s): C4
    PDF (5 KB) | Freely Available from IEEE

Aims & Scope

The Journal of Selected Topics in Signal Processing (J-STSP) solicits special issues on topics that cover the entire scope of the IEEE Signal Processing Society including the theory and application of filtering, coding, transmitting, estimating, detecting, analyzing, recognizing, synthesizing, recording, and reproducing signals by digital or analog devices or techniques.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Fernando Pereira
Instituto Superior Técnico