IEEE Transactions on Circuits and Systems for Video Technology

Issue 8 • Aug. 2004

Displaying Results 1 - 16 of 16
  • Table of contents

    Page(s): c1
    PDF (68 KB)
    Freely Available from IEEE
  • IEEE Transactions on Circuits and Systems for Video Technology publication information

    Page(s): c2
    PDF (40 KB)
    Freely Available from IEEE
  • Channel-adaptive resource allocation for scalable video transmission over 3G wireless network

    Page(s): 1049 - 1063
    PDF (1024 KB) | HTML

    The paper addresses the important issue of resource allocation for scalable video transmission over third-generation (3G) wireless networks. By taking the time-varying wireless channel/network condition and the scalable video codec's characteristics into account, we allocate resources between the source and channel coders based on a minimum-distortion or minimum-power-consumption criterion. Specifically, we first present how to estimate the time-varying wireless channel/network condition through measurements of throughput and error rate in a 3G wireless network. Then, we propose a new distortion-minimized bit-allocation scheme with hybrid unequal error protection (UEP) and delay-constrained automatic repeat request (ARQ), which dynamically adapts to the estimated time-varying network conditions. Furthermore, a novel power-minimized bit-allocation scheme with channel-adaptive hybrid UEP and delay-constrained ARQ is proposed for mobile devices. In our proposed distortion/power-minimized bit-allocation scheme, bits are optimally distributed among source coding, forward error correction, and ARQ according to the varying channel/network condition. Simulation and analysis are performed using a progressive fine-granularity-scalability video codec. The simulation results show that our proposed schemes can significantly improve the reconstructed video quality under the same network conditions.
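The core idea of distributing a bit budget among source coding, FEC, and ARQ to minimize estimated distortion can be sketched as a constrained search over bit splits. The distortion model below is purely illustrative (a hypothetical stand-in; the paper's model is driven by the estimated channel condition):

```python
from itertools import product

def allocate_bits(total_bits, step, distortion):
    """Exhaustively search splits of a bit budget among source coding,
    FEC, and ARQ, keeping the split with minimum estimated distortion.
    Returns (distortion, source_bits, fec_bits, arq_bits)."""
    best = None
    for src, fec in product(range(0, total_bits + 1, step), repeat=2):
        arq = total_bits - src - fec
        if arq < 0:
            continue  # split exceeds the budget
        d = distortion(src, fec, arq)
        if best is None or d < best[0]:
            best = (d, src, fec, arq)
    return best

# Toy distortion model (illustrative only): more source bits reduce
# quantization distortion; FEC/ARQ bits reduce expected loss impact.
p_loss = 0.1
d = lambda s, f, a: 1000.0 / (1 + s) + 500.0 * p_loss / (1 + 0.01 * (f + a))
best = allocate_bits(3000, 100, d)
```

In the paper the candidate splits are weighted by the measured throughput and error rate rather than a fixed loss probability, and the search is made efficient rather than exhaustive.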

  • Real-time error protection of embedded codes for packet erasure and fading channels

    Page(s): 1064 - 1072
    PDF (1064 KB) | HTML

    Reliable real-time transmission of packetized embedded multimedia data over noisy channels requires the design of fast error control algorithms. For packet erasure channels, efficient forward error correction is obtained by using systematic Reed-Solomon (RS) codes across packets. For fading channels, state-of-the-art performance is given by a product channel code where each column code is an RS code and each row code is a concatenation of an outer cyclic redundancy check code and an inner rate-compatible punctured convolutional code. For each of these two systems, we propose a low-memory linear-time iterative improvement algorithm to compute an error protection solution. Experimental results for the two-dimensional and three-dimensional set partitioning in hierarchical trees coders showed that our algorithms provide close to optimal average peak signal-to-noise ratio performance, and that their running time is significantly lower than that of all previously proposed solutions.
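The idea of coding across packets so that erased packets can be rebuilt can be illustrated with the simplest special case, a single XOR parity packet (a rate-k/(k+1) code that recovers one erasure; the RS codes used in the paper generalize this to recover multiple erasures):

```python
def xor_parity(packets):
    """XOR equal-length packets together into one parity packet.
    This is the simplest erasure code across packets: any single
    erased packet equals the XOR of the survivors and the parity."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

packets = [b"abcd", b"wxyz", b"1234"]
parity = xor_parity(packets)

# Simulate losing packets[1]; rebuild it from the survivors + parity.
recovered = xor_parity([packets[0], packets[2], parity])
assert recovered == packets[1]
```

A systematic RS(n, k) code plays the same role with n − k parity packets, tolerating up to n − k erasures per column.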

  • Content-based movie analysis and indexing based on audiovisual cues

    Page(s): 1073 - 1085
    PDF (880 KB) | HTML

    A content-based movie parsing and indexing approach is presented; it analyzes both audio and visual sources and accounts for their interrelations to extract high-level semantic cues. Specifically, the goal of this work is to extract meaningful movie events and assign them semantic labels for the purpose of content indexing. Three types of key events, namely 2-speaker dialogs, multiple-speaker dialogs, and hybrid events, are considered. Moreover, speakers present in the detected movie dialogs are further identified based on audio source parsing. The obtained audio and visual cues are then integrated to index the movie content. Our experiments have shown that an effective integration of the audio and visual sources can lead to a higher level of video content understanding, abstraction, and indexing.

  • Segmentation of the face and hands in sign language video sequences using color and motion cues

    Page(s): 1086 - 1097
    PDF (1520 KB) | HTML

    We present a hand and face segmentation methodology using color and motion cues for the content-based representation of sign language video sequences. The methodology consists of three stages: skin-color segmentation, change detection, and face and hand segmentation mask generation. In skin-color segmentation, a universal color model is derived and image pixels are classified as skin or nonskin based on their Mahalanobis distance. We derive a segmentation threshold for the classifier. The aim of change detection is to localize moving objects in video sequences. The change detection technique is based on the F-test and block-based motion estimation. Finally, the results from skin-color segmentation and change detection are analyzed to segment the face and hands. The performance of the algorithm is illustrated by simulations carried out on standard test sequences.
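The skin-color stage can be sketched as follows: fit a Gaussian color model to skin pixels, then classify each pixel by its squared Mahalanobis distance to that model. The training data, color space, and threshold below are illustrative assumptions (the paper derives a universal model and its threshold analytically):

```python
import numpy as np

def mahalanobis_skin_classifier(skin_samples, threshold):
    """Fit a Gaussian skin-color model (mean + covariance) to training
    pixels and return a classifier that labels a pixel as skin when its
    squared Mahalanobis distance to the model is below the threshold."""
    mean = skin_samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(skin_samples, rowvar=False))
    def is_skin(pixel):
        d = np.asarray(pixel, dtype=float) - mean
        return bool(d @ cov_inv @ d <= threshold)
    return is_skin

# Synthetic samples standing in for real skin-tone pixels in some color
# space; the mean, spread, and threshold here are illustrative only.
rng = np.random.default_rng(0)
samples = rng.normal([150.0, 90.0, 100.0], 5.0, size=(500, 3))
is_skin = mahalanobis_skin_classifier(samples, threshold=16.0)
```

The Mahalanobis distance accounts for correlation between color channels, which a plain Euclidean threshold on color values would ignore.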

  • A real-time chip implementation for adaptive video coding control

    Page(s): 1098 - 1104
    PDF (256 KB) | HTML

    The paper presents an adaptive coding control for real-time video coding systems. Based on temporal correlations, the group-of-pictures (GOP) structure of a video sequence is split into one basic GOP (BGOP) and many adaptive GOPs (AGOPs), which are then processed accordingly. The advantage of this method is improved coding efficiency, in particular in handling the scene-change problem. Even if the coding bit rate exceeds the budget, the coding scheme does not require re-encoding, so it is especially suitable for real-time coding. Based on the adaptive algorithm, a chip is realized that performs picture-type decision, coding-rate estimation, quantization-scale decision, scene-change detection, and coding-mode decision. The gate count of this chip is only about 2000, and its silicon area is 2.85 mm² in a 0.35 μm CMOS process.
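The adaptive-GOP idea can be sketched as: walk the frame sequence, and start a new GOP either when the current GOP reaches its nominal length or when a scene change is detected. The scene-change test below (mean absolute frame difference against a fixed threshold) is an illustrative stand-in for the paper's detector:

```python
def split_gops(frames, base_len, threshold):
    """Split a frame sequence into GOPs, starting a new GOP early when a
    scene change is detected. Each frame is a flat list of pixel values;
    scene change = mean absolute difference above threshold (a simple
    stand-in detector). Returns (start, end) index pairs."""
    def mad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    gops, start = [], 0
    for i in range(1, len(frames)):
        scene_change = mad(frames[i - 1], frames[i]) > threshold
        if scene_change or i - start >= base_len:
            gops.append((start, i))
            start = i
    gops.append((start, len(frames)))
    return gops

# Two flat scenes with an abrupt cut at frame 5: the cut forces a GOP
# boundary well before the nominal GOP length of 8 is reached.
frames = [[0] * 16] * 5 + [[255] * 16] * 5
gops = split_gops(frames, base_len=8, threshold=50.0)
```

Starting a fresh GOP (with a new intra picture) at a cut avoids spending bits predicting across unrelated content, which is the scene-change benefit the abstract refers to.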

  • A novel VLSI architecture for multidimensional discrete wavelet transform

    Page(s): 1105 - 1110
    PDF (216 KB) | HTML

    A novel VLSI architecture for the multidimensional discrete wavelet transform (mD DWT) based on a systolic array is proposed. We divide the input mD image data into 2^m independent data streams, pipeline them simultaneously into a multifilter chip, and obtain 2^m samples, each from a different DWT subband, per clock cycle (cc). The proposed architecture performs a decomposition of an N1×N2×...×Nm image in about N1N2...Nm/(2^m − 1) ccs and requires lower hardware cost than previous architectures. In addition, the proposed architecture offers very simple hardware complexity, regular data flow, and low control complexity.
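The 2^m-subband split that the architecture computes per clock cycle can be illustrated in software for m = 2 with the Haar wavelet (the simplest filter choice; the paper's architecture is filter-generic and hardware-pipelined):

```python
def haar2d(img):
    """One-level 2-D Haar DWT of an even-sized grayscale image, producing
    the 2^m = 4 subbands (LL, LH, HL, HH) from each 2x2 input block --
    the same four samples the systolic array emits per clock cycle."""
    h, w = len(img), len(img[0])
    ll = [[0.0] * (w // 2) for _ in range(h // 2)]
    lh = [[0.0] * (w // 2) for _ in range(h // 2)]
    hl = [[0.0] * (w // 2) for _ in range(h // 2)]
    hh = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll[i // 2][j // 2] = (a + b + c + d) / 4  # average (low/low)
            lh[i // 2][j // 2] = (a - b + c - d) / 4  # horizontal detail
            hl[i // 2][j // 2] = (a + b - c - d) / 4  # vertical detail
            hh[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return ll, lh, hl, hh
```

Recursing on the LL subband gives the multi-level decomposition; since each level shrinks the data by 2^m, the total work is N1N2...Nm (1/2^m + 1/4^m + ...) ≈ N1N2...Nm/(2^m − 1), matching the cycle count quoted above.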

  • Motion estimation using spatio-temporal contextual information

    Page(s): 1111 - 1115
    PDF (248 KB) | HTML

    Motion estimation is an important and computationally intensive task in video coding applications. Block-matching-based fast algorithms reduce the computational complexity of motion estimation at the expense of accuracy. An analysis of computation and performance trade-offs in motion estimation algorithms helps in choosing a suitable algorithm for video/visual communication applications. Fast motion estimation algorithms often assume a monotonic error surface in order to speed up the computations involved in motion estimation. The argument against this assumption is that the search might be trapped in local minima resulting in inaccurate motion estimates. The paper investigates state-of-the-art techniques for block-based motion estimation and presents an approach to improving the performance of block-based motion estimation algorithms. Specifically, the paper suggests a simple scheme that includes spatio-temporal neighborhood information for obtaining better estimates of the motion vectors. The paper also investigates the effects of the monotonic error surface assumption and suggests a remedy to reduce its impact on the motion field estimates. The presented experiments demonstrate the efficiency of the proposed approach.
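Predictor-seeded block matching of this kind can be sketched as: evaluate a small set of spatio-temporal candidate vectors (neighboring blocks' and the co-located block's motion vectors), then refine locally around the best one. This is a generic sketch of the idea, not the paper's exact scheme:

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences for a bs x bs block at (bx, by) in the
    current frame, displaced by (dx, dy) in the reference frame."""
    return sum(
        abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
        for y in range(bs) for x in range(bs)
    )

def predictive_search(cur, ref, bx, by, bs, predictors):
    """Block matching seeded with spatio-temporal predictor vectors, then
    refined by one pass over the 8 neighbors of the best candidate.
    Seeding near the true motion avoids the local minima that a purely
    monotonic-surface search can fall into."""
    h, w = len(ref), len(ref[0])
    def valid(dx, dy):
        return 0 <= bx + dx <= w - bs and 0 <= by + dy <= h - bs
    dx0, dy0 = min(
        (mv for mv in set(predictors) | {(0, 0)} if valid(*mv)),
        key=lambda mv: sad(cur, ref, bx, by, mv[0], mv[1], bs),
    )
    neighbors = [(dx0 + i, dy0 + j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
    return min(
        (mv for mv in neighbors if valid(*mv)),
        key=lambda mv: sad(cur, ref, bx, by, mv[0], mv[1], bs),
    )

# Synthetic frames where the content moves by (2, 1): a predictor near the
# true motion (e.g. the co-located block's vector) lets the search lock on.
f = lambda u, v: (31 * u + 17 * v) % 251
cur = [[f(x, y) for x in range(16)] for y in range(16)]
ref = [[f(x - 2, y - 1) for x in range(16)] for y in range(16)]
mv = predictive_search(cur, ref, bx=4, by=4, bs=4, predictors=[(2, 1)])
```

A production coder would iterate the refinement until the center wins and reuse the result as a predictor for the next block.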

  • Erratum

    Page(s): 1116
    PDF (144 KB)
  • 2005 IEEE International Symposium on Circuits and Systems (ISCAS 2005)

    Page(s): 1117
    PDF (521 KB)
    Freely Available from IEEE
  • IEEE 2005 International Conference on Image Processing

    Page(s): 1118
    PDF (484 KB)
    Freely Available from IEEE
  • Quality without compromise [advertisement]

    Page(s): 1119
    PDF (319 KB)
    Freely Available from IEEE
  • Explore IEL IEEE's most comprehensive resource [advertisement]

    Page(s): 1120
    PDF (341 KB)
    Freely Available from IEEE
  • IEEE Circuits and Systems Society Information

    Page(s): c3
    PDF (35 KB)
    Freely Available from IEEE
  • IEEE Transactions on Circuits and Systems for Video Technology Information for authors

    Page(s): c4
    PDF (32 KB)
    Freely Available from IEEE

Aims & Scope

The emphasis is focused on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multi-Processor Systems—Hardware and Software
6. VLSI Architecture and Implementation for Video Technology 


Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it