
IEEE Transactions on Circuits and Systems for Video Technology

Issue 3 • March 2003

Contents: 7 articles
  • Color quantization of compressed video sequences

    Page(s): 270 - 276

    This paper presents a novel color quantization algorithm for compressed video data. The proposed algorithm extracts the DC coefficients of the discrete cosine transform (DCT) and the motion vectors of blocks in a shot to estimate a cumulative color histogram of the shot and, based on the estimated histogram, design a color palette for displaying the video sequence in the shot. It significantly reduces the complexity of palette generation by effectively reducing the number of training vectors used in training a palette, without sacrificing quality. The resulting palette provides good display quality even when zooming and panning occur in a shot. The experimental results show that the proposed method achieves a significant signal-to-noise ratio improvement over conventional video color-quantization schemes when zooming and panning are encountered in a shot.
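
The histogram-then-palette idea can be sketched in a few lines. The following is a toy illustration, not the paper's algorithm: per-block mean colors stand in for the rescaled DCT DC coefficients, and a plain k-means pass stands in for the palette-training step; the function name and parameters are hypothetical.

```python
from statistics import mean

def palette_from_block_means(block_means, n_colors=2, iters=10):
    """Toy palette design: cluster per-block mean colors (stand-ins for
    rescaled DCT DC coefficients) with a plain k-means pass."""
    # Deterministic seeds spread across the input (avoids RNG fragility).
    step = max(1, len(block_means) // n_colors)
    centers = [list(block_means[i * step]) for i in range(n_colors)]
    for _ in range(iters):
        buckets = [[] for _ in range(n_colors)]
        for p in block_means:
            # Assign each block mean to its nearest palette entry.
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            buckets[d.index(min(d))].append(p)
        for k, bucket in enumerate(buckets):
            if bucket:
                centers[k] = [mean(ch) for ch in zip(*bucket)]
    return centers

# Toy "shot": blocks drawn from two dominant colors.
blocks = [(200, 30, 30)] * 50 + [(20, 20, 180)] * 50
pal = palette_from_block_means(blocks, n_colors=2)
```

Working on one mean color per block rather than every pixel is what cuts the number of training vectors so sharply.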

  • Design and implementation of a fuzzy hardware structure for morphological color image processing

    Page(s): 277 - 288

    A hardware implementation of a fuzzy processor suitable for morphological color image processing applications is presented for the first time. From the hardware point of view, only a small number of algorithms for hardware implementation of soft gray-scale morphological filters have been reported in the literature, since research in mathematical morphology focuses mainly on possible extensions of the standard definitions (e.g., color and fuzzy mathematical morphology). The proposed digital hardware structure is based on a sequence of pipeline stages, and parallel processing is used to minimize computation times. It is capable of performing the basic morphological operations of standard and soft erosion/dilation for color images of 24-bit resolution. For the computation of morphological operations, a 3 × 3-pixel image neighborhood and the corresponding structuring element are used; however, the system can easily be expanded to accommodate larger windows. The architecture of the processor is generic: the units that perform the fuzzy inference can be reused for other fuzzy applications. It was designed, compiled, and simulated using the MAX+PLUS II Programmable Logic Development System by Altera Corporation. The fuzzy processor achieves an inference performance of 601 KFLIPS with 54 rules and can be used for real-time applications where short processing times are of the utmost importance. A state-of-the-art computer system (Pentium 4/3 GHz with SSE2) can speed up image-processing applications in software, but its processing times still do not compare with those of this hardware structure.
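
For readers unfamiliar with the base operations the processor accelerates, here is a minimal software sketch of standard flat erosion/dilation with a 3 × 3 structuring element, applied per channel. This marginal (per-channel) treatment is a simplification: the paper's contribution is the fuzzy/color extensions and their hardware pipeline, which this sketch does not attempt.

```python
def erode3x3(img):
    """Standard flat erosion with a 3x3 structuring element, per channel.
    img: list of rows of (r, g, b) tuples. Border pixels are copied
    unchanged (a simplification)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = tuple(min(p[c] for p in neigh) for c in range(3))
    return out

def dilate3x3(img):
    """Dual of erosion: per-channel 3x3 maximum over the neighborhood."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = tuple(max(p[c] for p in neigh) for c in range(3))
    return out

# One bright pixel on a dark background: erosion removes it.
img = [[(10, 10, 10)] * 3 for _ in range(3)]
img[1][1] = (200, 200, 200)
eroded = erode3x3(img)
```

The 3 × 3 window here is exactly the neighborhood size the hardware processes per clock; the pipeline parallelizes the min/max comparisons that this loop performs serially.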

  • Vector SPIHT for embedded wavelet video and image coding

    Page(s): 231 - 246

    The set partitioning in hierarchical trees (SPIHT) approach for still-image compression proposed by Said and Pearlman (1996) is one of the most efficient embedded monochrome image compression schemes known to date. The algorithm relies on a very efficient scanning and bit-allocation scheme for quantizing the coefficients obtained by a wavelet decomposition of an image. In this paper, we adapt this approach to scan groups (vectors) of wavelet coefficients, and use successive-refinement vector quantization (VQ) techniques with staggered bit allocation to quantize the groups at once. The scheme is named vector SPIHT (VSPIHT). We discuss possible models for the distributions of the coefficient vectors, and show how trained classified tree-multistage VQ techniques can be used to quantize them efficiently. Extensive coding results comparing VSPIHT to scalar SPIHT in the mean-squared-error sense are presented for monochrome images. VSPIHT is found to yield superior performance for most images, especially those with high detail content. The method is also applied to color video coding, where a partially scalable bitstream is generated. We present coding results on QCIF sequences compared against H.263.
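
The embedding principle behind (V)SPIHT can be illustrated without the zerotree machinery: coefficients are grouped into vectors, and a halving sequence of thresholds determines the pass in which each vector first becomes significant. This is only a sketch of the significance-ordering idea; the actual coder couples it with set partitioning and VQ refinement, and the function below is hypothetical.

```python
def vector_significance_passes(coeffs, n_passes=3):
    """Group coefficients into 2-D vectors and record, for a halving
    sequence of thresholds, the pass in which each vector first becomes
    significant (max magnitude >= threshold)."""
    vectors = [coeffs[i:i + 2] for i in range(0, len(coeffs), 2)]
    top = max(max(abs(c) for c in v) for v in vectors)
    # Largest power-of-two threshold not exceeding the max magnitude.
    thresh = 1
    while thresh * 2 <= top:
        thresh *= 2
    sig_order = {}
    for p in range(n_passes):
        for i, v in enumerate(vectors):
            if i not in sig_order and max(abs(c) for c in v) >= thresh:
                sig_order[i] = p
        thresh //= 2
    return sig_order

# Vectors with large magnitudes surface in the earliest pass.
order = vector_significance_passes([63, 2, -34, 5, 7, 1, 0, 0])
```

Truncating the bitstream after any pass leaves the most significant vectors already refined, which is what makes the representation embedded.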

  • Analysis and architecture design of block-coding engine for EBCOT in JPEG 2000

    Page(s): 219 - 230

    Embedded block coding with optimized truncation (EBCOT) is the most important technology in the latest image-coding standard, JPEG 2000. The hardware design of the block-coding engine in EBCOT is critical because its operations are bit-level processing and occupy more than half of the computation time of the whole compression process. A general-purpose processor (GPP) is therefore very inefficient at these operations. We present a detailed analysis and a dedicated hardware architecture of the block-coding engine to execute the EBCOT algorithm efficiently. The context-formation process in EBCOT is analyzed to gain insight into the characteristics of the operation. A column-based architecture and two speed-up methods, sample skipping (SS) and group-of-column skipping (GOCS), for context generation are then proposed. For the arithmetic-encoder design, pipeline and look-ahead techniques are used to speed up the processing. It is shown that about 60% of the processing time is saved compared with a straightforward sample-based implementation. A test chip is designed, and simulation results show that it can process a 4.6-million-pixel image within 1 s, corresponding to a 2400 × 1800 image, or CIF (352 × 288) 4:2:0 video at 30 frames per second, at a 50-MHz working frequency.
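
The "column-based" orientation comes directly from the JPEG 2000 scan pattern: bit-plane coding visits a code block in horizontal stripes of four rows, and within each stripe the samples are read column by column, top to bottom. A small generator makes the order concrete (the function name is ours):

```python
def stripe_scan_order(height, width, stripe=4):
    """Yield (row, col) visiting order of a JPEG 2000 code block:
    horizontal stripes of `stripe` rows; within a stripe, columns are
    scanned left to right, each column top to bottom."""
    order = []
    for s0 in range(0, height, stripe):          # stripe by stripe
        for x in range(width):                   # column by column
            for y in range(s0, min(s0 + stripe, height)):
                order.append((y, x))
    return order

order = stripe_scan_order(8, 3)
```

Because four vertically adjacent samples are consecutive in this order, a column-based datapath can fetch one column per cycle, and skipping an all-insignificant column (the GOCS idea) saves four sample slots at once.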

  • QoS-adaptive proxy caching for multimedia streaming over the Internet

    Page(s): 257 - 269

    This paper proposes a quality-of-service (QoS)-adaptive proxy-caching scheme for multimedia streaming over the Internet. Considering heterogeneous network conditions and media characteristics, we present an end-to-end caching architecture for multimedia streaming. First, a media-characteristic-weighted replacement policy is proposed to improve the cache hit ratio of mixed media, including continuous and noncontinuous media. Second, a network-condition- and media-quality-adaptive resource-management mechanism is introduced to dynamically reallocate cache resources for different types of media according to their request patterns. Third, a pre-fetching scheme based on the estimated network bandwidth is described, along with a miss strategy that decides, based on real-time network conditions, what to request from the server in the case of a cache miss. Finally, request and send-back scheduling algorithms, integrated with unequal loss protection (ULP), are proposed to dynamically allocate network resources among different types of media. Simulation results demonstrate the effectiveness of the proposed schemes.
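
A media-weighted replacement policy generally ranks cached objects by a utility that combines access frequency, a per-media-type weight, and size, evicting the lowest-utility object first. The sketch below is our own illustration of that shape, not the paper's formula; the weight values and field names are assumptions.

```python
def evict_order(objects):
    """Rank cached objects for eviction: lowest utility first.
    Utility = access frequency x media-type weight / size.
    The weights are illustrative, not taken from the paper."""
    weights = {"video": 3.0, "audio": 2.0, "image": 1.0}  # assumed values
    def utility(o):
        return o["freq"] * weights[o["type"]] / o["size"]
    return sorted(objects, key=utility)

cache = [
    {"name": "clip.mpg", "type": "video", "freq": 4, "size": 8.0},
    {"name": "logo.png", "type": "image", "freq": 4, "size": 1.0},
    {"name": "talk.mp3", "type": "audio", "freq": 1, "size": 4.0},
]
victims = evict_order(cache)
```

Weighting by media type is what lets continuous media, which are costlier to re-stream on a miss, survive longer in the cache than equally popular static objects.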

  • An efficient low-cost antialiasing method based on adaptive postfiltering

    Page(s): 247 - 256

    Aliasing in computer-synthesized images not only limits their realism but also affects the user's concentration. Many antialiasing methods have been proposed to solve this problem, but almost all of them are computation intensive, and some are also memory intensive. While this may not be a limitation for high-end applications such as medical visualization and architectural design, such antialiasing methods may still be far too costly for low-cost applications. In this paper, we propose an antialiasing method that operates in the image domain. It is based on fitting curves to the discontinuity edges extracted from the aliased images in order to reshade the edge pixels (a curve may be considered a generalization of a line). To improve the performance and simplicity of the method, we preprocess all possible edge patterns and fit curves in advance. At runtime, we only need to construct an index to obtain the filtering information from a lookup table. The new method is extremely simple and efficient, and provides a very good compromise between hardware cost and output image quality. In addition, because the new method has a very low computational cost, and hence low power consumption in hardware, it is particularly suitable for low-cost mobile applications such as game consoles and palm computers, where low implementation cost and low power consumption are important design factors.
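
The runtime step, "construct an index to obtain the filtering information from a lookup table", amounts to packing a small binary edge neighborhood into an integer. A sketch under the assumption of a 3 × 3 window (the paper's actual pattern size and table contents may differ):

```python
def pattern_index(edge_mask):
    """Pack a 3x3 binary edge mask into a 9-bit integer, row-major.
    The integer indexes a precomputed table of filtering decisions
    (table contents would be fitted offline; they are not shown here)."""
    idx = 0
    for y in range(3):
        for x in range(3):
            idx = (idx << 1) | (1 if edge_mask[y][x] else 0)
    return idx

# A vertical edge through the window; a full table has 2**9 = 512 entries.
idx = pattern_index([[0, 1, 0],
                     [0, 1, 0],
                     [0, 1, 0]])
```

Since all curve fitting happens offline, the per-pixel cost at runtime is a handful of shifts and one table read, which is what makes the hardware cheap and low-power.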

  • A high-performance JPEG2000 architecture

    Page(s): 209 - 218

    JPEG2000 is an upcoming compression standard for still images whose feature set is well tuned for diverse data dissemination. These features are made possible by the adoption of the discrete wavelet transform, intra-subband bit-plane coding, and binary arithmetic coding in the standard. We propose a system-level architecture capable of encoding and decoding the JPEG2000 core algorithm defined in Part I of the standard. The key components include dedicated architectures for the wavelet, bit-plane, and arithmetic coders, and memory interfacing between the coders. The system architecture has been implemented in VHDL and its performance evaluated on a set of images. The estimated area of the architecture, in 0.18-μm technology, is 3 mm square, and the estimated operating frequency is 200 MHz.
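
Of the three coding tools named above, the wavelet stage is the easiest to show in miniature. Below is a 1-D reversible 5/3 lifting step of the kind used in JPEG2000 Part I, as a software sketch: the border handling is a simplified symmetric mirror (not exactly the standard's extension), but the predict/update structure and integer reversibility are the point.

```python
def _mirror(i, n):
    """Symmetric index into a length-n sequence (simplified convention)."""
    if i < 0:
        i = -i
    if i >= n:
        i = 2 * (n - 1) - i
    return i

def fwd_53(x):
    """Forward reversible 5/3 lifting on an even-length 1-D signal.
    Predict: highpass d; update: lowpass s. Returns (s, d)."""
    m = len(x) // 2
    d = [x[2 * i + 1] - (x[2 * i] + x[_mirror(2 * i + 2, len(x))]) // 2
         for i in range(m)]
    s = [x[2 * i] + (d[_mirror(i - 1, m)] + d[i] + 2) // 4
         for i in range(m)]
    return s, d

def inv_53(s, d):
    """Inverse lifting: undo the update step, then the predict step."""
    m = len(s)
    x = [0] * (2 * m)
    for i in range(m):
        x[2 * i] = s[i] - (d[_mirror(i - 1, m)] + d[i] + 2) // 4
    for i in range(m):
        x[2 * i + 1] = d[i] + (x[2 * i] + x[_mirror(2 * i + 2, 2 * m)]) // 2
    return x
```

Because each lifting step is undone exactly in reverse order, the transform is lossless in integer arithmetic, which is why a dedicated datapath for it needs no floating-point hardware.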


Aims & Scope

The emphasis is on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multi-Processor Systems (Hardware and Software)
6. VLSI Architecture and Implementation for Video Technology

 

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it