
IEEE Transactions on Circuits and Systems for Video Technology

Issue 2 • April 1995

  • New systolic array implementation of the 2-D discrete cosine transform and its inverse

    Publication Year: 1995, Page(s): 150-157
    Cited by:  Papers (19)

    A new systolic array without matrix-transposition hardware is proposed to compute the two-dimensional discrete cosine transform (2-D DCT) based on the row-column decomposition. This architecture uses N² multipliers to evaluate N×N-point DCTs at a rate of one complete transform per N clock cycles, where N is even. It possesses the features of regularity and modularity and is thus well suited to VLSI implementation. Compared to existing pipelined regular architectures for the 2-D DCT, the proposed one has better throughput performance, smaller area-time complexity, and lower communication complexity. The new idea is also extended to derive a similar systolic array for the 2-D inverse discrete cosine transform (IDCT). Simulation results demonstrate that the proposed 2-D DCT and IDCT architectures have good fixed-point error performance for both real image and random data. They are consequently useful for applications where very high throughput rates are required.
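    For readers who want to check the arithmetic, here is a minimal numpy sketch of the row-column decomposition the array implements (the N² multiplier arrangement and the systolic dataflow are hardware details not modeled here):

        import numpy as np

        def dct_matrix(N):
            # Orthonormal DCT-II basis: C[k, n] = c(k) * cos(pi*(2n+1)*k / (2N)).
            k = np.arange(N)[:, None]
            n = np.arange(N)[None, :]
            C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
            C[0, :] /= np.sqrt(2.0)
            return C

        def dct2(X):
            # Row-column decomposition: a 1-D DCT along each row followed
            # by a 1-D DCT along each column, i.e. Y = C X C^T.
            C = dct_matrix(X.shape[0])
            return C @ X @ C.T

        def idct2(Y):
            # The inverse uses the transposed factors: X = C^T Y C.
            C = dct_matrix(Y.shape[0])
            return C.T @ Y @ C

    A quick check that idct2(dct2(X)) returns X to machine precision confirms the factorization.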

  • A clustering algorithm for entropy-constrained vector quantizer design with applications in coding image pyramids

    Publication Year: 1995, Page(s): 83-95
    Cited by:  Papers (8)  |  Patents (4)

    A clustering algorithm for the design of efficient vector quantizers to be followed by entropy coding is proposed. The algorithm, called entropy-constrained pairwise nearest neighbor (ECPNN), designs codebooks by merging the pair of Voronoi regions that gives the least increase in distortion for a given decrease in entropy. The algorithm can be used as an alternative to the entropy-constrained vector quantizer (ECVQ) design proposed by Chou, Lookabaugh, and Gray (1989). By a natural extension of the ECPNN algorithm, the authors develop another algorithm that designs alphabet- and entropy-constrained vector quantizers, called alphabet- and entropy-constrained pairwise nearest neighbor (AECPNN) design. Simulations on synthetic sources show that ECPNN and ECVQ have indistinguishable mean-square-error versus rate performance, and that the ECPNN and AECPNN algorithms perform comparably, by the same measure, to the ECVQ and AECVQ (Rao and Pearlman, 1993) algorithms. The advantages over ECVQ are that the ECPNN approach enables much faster codebook design and uses smaller codebooks. A single pass through the ECPNN (or AECPNN) design algorithm, which progresses from larger to successively smaller rates, allows the storage of any desired number of intermediate codebooks. This feature is especially desirable in the context of multirate subband (or transform) coders. The performance of coding image pyramids using ECPNN and AECPNN codebooks at rates from 1/3 to 1.0 bit/pixel is discussed.
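    As a rough illustration of the merge criterion, here is a sketch of one ECPNN-style step, assuming Ward-style distortion bookkeeping (the authors' exact update rules are in the paper):

        import numpy as np
        from itertools import combinations

        def ecpnn_merge_step(centroids, counts):
            # Pick the pair of Voronoi regions whose merge costs the least
            # increase in distortion per bit of entropy saved.
            p = counts / counts.sum()
            best_slope, best_pair = np.inf, None
            for i, j in combinations(range(len(counts)), 2):
                # Increase in total squared error when clusters i and j merge.
                dD = (counts[i] * counts[j] / (counts[i] + counts[j])
                      ) * np.sum((centroids[i] - centroids[j]) ** 2)
                # Decrease in index entropy (bits) from merging the two cells.
                pij = p[i] + p[j]
                dH = (pij * np.log2(pij)
                      - p[i] * np.log2(p[i]) - p[j] * np.log2(p[j]))
                if dD / dH < best_slope:
                    best_slope, best_pair = dD / dH, (i, j)
            return best_pair

    Repeating this step from a large initial codebook down to the target rate, and saving the intermediate codebooks along the way, gives the multirate behavior the abstract describes.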

  • Perceptually based directional classified gain-shape vector quantization

    Publication Year: 1995, Page(s): 96-108
    Cited by:  Papers (2)

    A new image coding system, termed directional classified gain-shape vector quantization (DCGSVQ), is introduced in this paper. A content classifier, operating in the spatial domain, classifies each 8×8-pixel image block into one of several classes representing various image patterns (edges in various directions, monotone areas, complex texture, etc.). A classified gain-shape vector quantizer is then employed in the cosine domain to encode directional vectors of AC transform coefficients, while either a scalar quantizer or a gain-shape vector quantizer encodes the DC coefficients. A new vector configuration scheme is proposed to better adapt the system to the local statistics of the image blocks. In addition, properties of the human visual system such as frequency sensitivity, the masking effect, and orientation sensitivity are incorporated into the proposed system to further improve the subjective quality of the reconstructed images. A new algorithm for designing the shape codebooks needed for the DCGSVQ is proposed, based on the classified nearest neighbor clustering (CNNC) algorithm of Kubrick and Ellis (1990). Finally, an optional simple method for feature enhancement, based on inherent properties of the proposed system, is proposed, enabling further image processing at the receiver. Coding results show very good subjective quality of the reconstructed images at bit rates in the range of 0.48-0.625 bits per pixel.
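    The gain-shape split itself is easy to state in code. A minimal sketch, assuming a unit-norm shape codebook (the directional classification and perceptual weighting are the paper's contributions and are not shown):

        import numpy as np

        def gain_shape_encode(x, shapes, gain_levels):
            # shapes: (K, d) array of unit-norm shape code vectors.
            g = np.linalg.norm(x)
            if g == 0:
                return 0, int(np.argmin(np.abs(gain_levels)))
            k = int(np.argmax(shapes @ (x / g)))          # best direction match
            q = int(np.argmin(np.abs(gain_levels - g)))   # scalar-quantized gain
            return k, q

        def gain_shape_decode(k, q, shapes, gain_levels):
            return gain_levels[q] * shapes[k]

    For unit-norm shapes, maximizing the inner product is equivalent to minimizing the squared error once the gain is chosen, which is why gain and shape can be searched separately.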

  • Cheops: a reconfigurable data-flow system for video processing

    Publication Year: 1995, Page(s): 140-149
    Cited by:  Papers (20)  |  Patents (9)

    The Cheops Imaging System is a compact, modular platform for the acquisition, processing, and display of digital video sequences and model-based representations of moving scenes, intended both as a laboratory tool and as a prototype architecture for future programmable video decoders. Rather than using a large number of general-purpose processors and dividing image processing tasks spatially, Cheops abstracts out a set of basic, computationally intensive stream operations that may be performed in parallel and embodies them in specialized hardware. We review the Cheops architecture, describe the software system that has been developed to perform resource management, and present the results of some performance tests.
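    The data-flow idea can be caricatured in software. A toy analogue with hypothetical operator names (on Cheops each stage would be a specialized hardware stream unit, and the resource manager would map a requested pipeline onto the available units):

        def source(frames):
            for f in frames:
                yield f

        def scale(stream, k):
            # One stream operator: multiply every sample by a constant.
            for f in stream:
                yield [k * v for v in f]

        def clip(stream, lo, hi):
            # Another stream operator: clamp samples to a display range.
            for f in stream:
                yield [min(max(v, lo), hi) for v in f]

        pipeline = clip(scale(source([[1, 2, 3]]), 100), 0, 255)
        print(list(pipeline))   # [[100, 200, 255]]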

  • Efficient H.261-based two-layer video codecs for ATM networks

    Publication Year: 1995, Page(s): 171-175
    Cited by:  Papers (16)  |  Patents (1)

    Two methods for reducing the bit rate of two-layer video codecs without impairing their robustness to cell loss are introduced. Both methods employ an H.261-compatible coder at the base layer and an interframe coder at the second layer. In the first method, a second H.261 coder is used to code the residual errors of the input pixels. The parameters of this Twin-H.261 codec, in terms of the effects of motion estimation, resilience to cell loss, and forced updating, are investigated. The second method employs an interframe coder on the enhancement DCT coefficients of similar frequencies in successive frames (Inter-Enhance). The introduction of leaky prediction in the interframe loop preserves the coder's resilience to cell loss; the optimum value of the leak factor is found to lie in the range 0.85-0.95. It is shown that while both the Twin-H.261 and Inter-Enhance codecs generate bit rates close to that of the one-layer coder, their resilience to cell loss closely follows that of the normal two-layer coder.
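    A behavioral sketch of the leaky interframe loop on one enhancement coefficient may help (the uniform quantizer step size here is an assumed placeholder, not a value from the paper):

        import numpy as np

        def leaky_dpcm(coeffs, leak=0.9, step=8.0):
            # Interframe coding of one enhancement DCT coefficient with a
            # leaky predictor: prediction = leak * previous reconstruction.
            # A leak in [0.85, 0.95] trades coding gain against how fast
            # the decoder recovers after a lost cell.
            pred, recon = 0.0, []
            for c in coeffs:
                e = c - leak * pred              # prediction error
                q = step * np.round(e / step)    # uniform quantizer
                pred = leak * pred + q           # decoder-side reconstruction
                recon.append(pred)
            return np.array(recon)

    Because the leak factor is below one, any mismatch injected by a lost cell decays geometrically frame by frame instead of persisting in the prediction loop.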

  • Detection and correction of transmission errors in DPCM images

    Publication Year: 1995, Page(s): 166-171
    Cited by:  Papers (5)

    A new approach to the detection and correction of transmission errors in DPCM images is proposed. Each transmission error (streak noise) is detected via a sequence of Wilcoxon-Mann-Whitney (WMW) rank-sum tests performed on a sliding window using four specially designed grouping schemes. Instead of the four possible patterns used by Kundu and Wu (1990), the proposed approach identifies 18 possible detection patterns, represented as a decision tree; the detection procedure is equivalent to tracing this tree. The estimated gray-level shift of a detected streak is used to locate the streak origin. Then, instead of using previous-line replacement, each streak can be almost completely removed by compensating the corrupted pixels with the estimated gray-level shift. Experimental results show the feasibility of the proposed approach.
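    The core WMW test is simple to state. A sketch of one grouping scheme only (ties, the four grouping schemes, and the 18-pattern decision tree are beyond this illustration):

        import numpy as np

        def rank_sum(left, right):
            # Rank-sum statistic of `left` within the pooled window
            # (ties ignored for brevity).
            pooled = np.concatenate([left, right])
            ranks = pooled.argsort().argsort() + 1   # 1-based ranks
            return ranks[:len(left)].sum()

        def streak_suspect(line, i, half=4, z_thresh=3.0):
            # Compare the pixels just left and just right of position i;
            # a streak starting at i shifts the right group's gray levels.
            left, right = line[i - half:i], line[i:i + half]
            W = rank_sum(left, right)
            m, n = len(left), len(right)
            mu = m * (m + n + 1) / 2.0                   # E[W] under H0
            sigma = np.sqrt(m * n * (m + n + 1) / 12.0)  # std of W under H0
            return abs(W - mu) / sigma > z_thresh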

  • An optimization approach for removing blocking effects in transform coding

    Publication Year: 1995, Page(s): 74-82
    Cited by:  Papers (81)  |  Patents (59)

    One drawback of the discrete cosine transform (DCT) is visible block boundaries due to coarse quantization of the coefficients. Most restoration techniques for removing this blocking effect are variations of low-pass filtering and, as such, result in unnecessary blurring. The authors propose a new approach for reducing the blocking effect which can be applied to conventional transform coding without introducing additional information or significant blurring. The method exploits the correlation between the intensity values of boundary pixels of two neighboring blocks. It is based on the theoretical and empirical observation that, under mild assumptions, quantization of the DCT coefficients of two neighboring blocks increases the expected value of the mean squared difference of slope (MSDS) between the slope across the two adjacent blocks and the average of the boundary slopes of each of the two blocks. The amount of this increase depends upon the width of the quantization intervals of the transform coefficients. Therefore, among all permissible inverse-quantized coefficients, the set which reduces the expected value of this MSDS by an appropriate amount is most likely to decrease the blocking effect. To estimate the set of unquantized coefficients, the authors solve a constrained quadratic programming problem using the gradient projection method. It is shown that, from a subjective viewpoint, the blocking effect is less noticeable in the authors' processed images than in those produced by existing filtering techniques.
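    The optimization structure, stripped of the MSDS algebra, is textbook gradient projection. A sketch where msds_grad stands in for the gradient of the paper's quadratic objective (an assumed placeholder, not reproduced from the paper):

        import numpy as np

        def deblock(y_quantized, q_steps, msds_grad, iters=50, lr=0.05):
            # Constraint set: each recovered coefficient must stay inside
            # the quantization cell the decoder received, [y - d/2, y + d/2].
            lo = y_quantized - q_steps / 2.0
            hi = y_quantized + q_steps / 2.0
            y = y_quantized.copy()
            for _ in range(iters):
                y = y - lr * msds_grad(y)   # gradient step on the objective
                y = np.clip(y, lo, hi)      # project onto the feasible cells
            return y

    The clip step is what guarantees the deblocked image is still consistent with the bitstream: every adjusted coefficient would quantize back to the value that was actually transmitted.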

  • A 100 MHz 2-D 8×8 DCT/IDCT processor for HDTV applications

    Publication Year: 1995, Page(s): 158-165
    Cited by:  Papers (71)  |  Patents (1)

    This paper discusses the design of a combined DCT/IDCT CMOS integrated circuit for real-time processing of HDTV signals. The processor operates on 8×8 blocks. Inputs include the blocked pixels, scanned one pixel at a time, and external control signals that select the forward or inverse mode of operation. Input samples have a precision of 9 b for the DCT and 12 b for the IDCT. The layout has been generated with a 0.8 μm CMOS library using the Mentor Graphics GDT tools and measures under 10 mm². Critical-path simulation indicates a maximum input sample rate of 100 MHz.
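    A quick software check of what those I/O precisions imply (a sketch using scipy's floating-point DCT; the chip's internal fixed-point datapath is not modeled):

        import numpy as np
        from scipy.fft import dctn, idctn

        def to_grid(x, bits, full_scale):
            # Round onto a signed fixed-point grid of the given word length.
            step = 2.0 * full_scale / 2 ** bits
            return step * np.round(x / step)

        rng = np.random.default_rng(0)
        X = rng.integers(0, 256, size=(8, 8)).astype(float)  # fits a 9-b input
        Y = to_grid(dctn(X, norm='ortho'), 12, 2048)         # 12-b coefficients
        err = np.max(np.abs(X - idctn(Y, norm='ortho')))
        print(err)   # small roundtrip error from coefficient rounding alone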

  • A parallel decoder of programmable Huffman codes

    Publication Year: 1995, Page(s): 175-178
    Cited by:  Papers (19)  |  Patents (1)

    Huffman coding, a variable-length entropy coding scheme, is an integral component of international standards on image and video compression, including high-definition television (HDTV). High-bandwidth HDTV systems, with data rates in excess of 100 Mpixels/s, present a challenge for designing fast and economical circuits for the intrinsically sequential Huffman decoding operations. This paper presents an algorithm and a circuit implementation for parallel decoding of programmable Huffman codes that exploit the numerical properties of Huffman codes. A 1.2 μm CMOS implementation for a single JPEG AC table of 256 codewords with codeword lengths of up to 16 b is estimated to run at 10 MHz with a chip area of 11 mm², decoding one codeword per cycle. The design can be pipelined to deliver a throughput of 80 MHz for decoding input streams of consecutive Huffman codes. Furthermore, the programmable scheme can be easily integrated into the data paths of video processors to support the different Huffman tables used in image/video applications.
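    The "numerical properties" that enable this are most familiar from canonical Huffman codes, where all codewords of one length form a contiguous integer range, so decoding reduces to magnitude comparisons that hardware can evaluate for every length at once. A software sketch of that idea (an assumption about the flavor of the technique, not the authors' circuit):

        def make_canonical(lengths):
            # Build a canonical Huffman code from per-symbol codeword
            # lengths (the form a JPEG table is transmitted in).
            max_len = max(lengths)
            count = [0] * (max_len + 1)
            for L in lengths:
                count[L] += 1
            first_code = [0] * (max_len + 1)
            first_index = [0] * (max_len + 1)
            code = idx = 0
            for L in range(1, max_len + 1):
                first_code[L], first_index[L] = code, idx
                code = (code + count[L]) << 1
                idx += count[L]
            symbols = sorted(range(len(lengths)), key=lambda s: (lengths[s], s))
            return count, first_code, first_index, symbols

        def decode_one(bits, pos, count, first_code, first_index, symbols):
            # The range test below is independent for each length L, so
            # hardware can evaluate all lengths at once and emit one
            # codeword per cycle; software just scans them in order.
            for L in range(1, len(count)):
                v = int(bits[pos:pos + L], 2)
                if count[L] and 0 <= v - first_code[L] < count[L]:
                    return symbols[first_index[L] + v - first_code[L]], pos + L
            raise ValueError("invalid codeword")

        tables = make_canonical([1, 2, 3, 3])   # toy 4-symbol table
        bits, pos, out = "010111", 0, []        # "0", "10", "111"
        while pos < len(bits):
            s, pos = decode_one(bits, pos, *tables)
            out.append(s)
        print(out)                              # [0, 1, 3]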

  • Delay-shared N-path structures for video-rate SC FIR filters

    Publication Year: 1995, Page(s): 109-118
    Cited by:  Papers (4)

    Switched-capacitor technology implemented in 2.4 μm CMOS offers considerable reliability and cost advantages. Its limited clock rate is the traditional obstacle to its introduction in high-frequency applications such as video. This obstacle can be removed by an architectural scheme that broadens the internal clock rate by an integer factor with respect to the input clock rate. The feasibility of this concept is demonstrated by the switched-capacitor realization of a color-difference prefilter complying with CCIR Recommendation 601.
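    A behavioral analogue of the N-path idea in discrete time (a sketch; the actual contribution is the switched-capacitor circuit that shares delays among the paths, which is not modeled here):

        import numpy as np

        def fir_direct(x, h):
            # Reference FIR: y[n] = sum_k h[k] * x[n-k].
            return np.convolve(x, h)[:len(x)]

        def fir_npath(x, h, N=4):
            # N time-interleaved paths: path p produces y[p], y[p+N], ...
            # Each path therefore runs its arithmetic at 1/N of the input
            # rate, which is what lets slow stages track a video-rate input.
            y = np.zeros(len(x))
            for p in range(N):
                for n in range(p, len(x), N):
                    y[n] = sum(h[k] * x[n - k]
                               for k in range(len(h)) if n - k >= 0)
            return y

        x = np.random.default_rng(1).standard_normal(32)
        h = np.array([0.25, 0.5, 0.25])
        print(np.allclose(fir_direct(x, h), fir_npath(x, h)))   # True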

  • Dynamic range analysis for the implementation of fast transform

    Publication Year: 1995, Page(s): 178-180
    Cited by:  Papers (4)

    An optimal shortest-word-length implementation of fast transforms, based upon mathematical analysis, is presented. The flow graph of any fast transform can be expressed as the product of several sparse matrices, where each matrix represents a single-pass butterfly operation (i.e., multiplication and accumulation). Each decomposed sparse matrix is analyzed to determine whether its butterfly operations could result in a bit overflow. Additional bits are allocated only to the matrices in which an overflow is likely to occur, so that the shortest bit-length implementation is maintained. This methodology is applicable to the shortest bit-length implementation of any fast transform. The application of the proposed method to an existing FDCT algorithm is demonstrated for fixed-point computation.
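    The analysis reduces to bounding signal growth stage by stage: the maximum absolute row sum of the accumulated matrix product bounds the worst-case output of that pass. A sketch using a 4-point Walsh-Hadamard factorization as a stand-in for an FDCT flow graph:

        import numpy as np

        def bits_per_stage(stages, in_bits):
            acc, alloc = np.eye(stages[0].shape[1]), []
            for M in stages:
                acc = M @ acc
                growth = np.abs(acc).sum(axis=1).max()  # worst-case gain so far
                alloc.append(in_bits + int(np.ceil(np.log2(growth))))
            return alloc

        S1 = np.array([[1, 1, 0, 0], [1, -1, 0, 0],
                       [0, 0, 1, 1], [0, 0, 1, -1]], float)   # first pass
        S2 = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                       [1, 0, -1, 0], [0, 1, 0, -1]], float)  # second pass
        print(bits_per_stage([S1, S2], 9))   # [10, 11]: one extra bit per pass

    Bits are then added only at the passes where the bound crosses a power of two, which is how the overall word length stays minimal.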

  • Dependent scalar quantization of color images

    Publication Year: 1995, Page(s): 124-139
    Cited by:  Papers (7)

    Many image display devices allow only a limited number of colors, called a color palette, to be displayed simultaneously. For a faithful color reproduction of an image, the associated color palette must be suitably designed. This paper presents a dependent scalar quantization algorithm for designing the color palette effectively. The algorithm consists of two procedures: bit allocation and recursive binary moment-preserving thresholding. Experimental results show that dependent scalar quantization reduces the computational complexity while producing output images whose quality is acceptable to the human eye. A rule for the quantization order is also deduced under the MSE criterion, yielding a dependent scalar quantizer whose performance is as good as that of several other algorithms. In addition, an adaptive neighborhood-clustering algorithm, which iteratively searches the neighboring color indices of input pixels, is proposed to further improve the performance of the dependent scalar quantization algorithm. Finally, we introduce a color mapping method to reduce the contouring effect when the palette generated by the dependent scalar quantizer is small.
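    The recursive binary moment-preserving split is compact to sketch for one color component (bit allocation across components and the neighborhood-clustering refinement are not shown; the two-level formula is the standard moment-preserving one):

        import numpy as np

        def mpt_levels(x, bits):
            # Split at the mean; at the leaves emit the two levels that
            # preserve the sample mean and variance:
            #   a = mu - sigma*sqrt(q/(m-q)),  b = mu + sigma*sqrt((m-q)/q),
            # where q of the m samples lie above the mean.
            mu, sigma = x.mean(), x.std()
            above = x > mu
            q, m = int(above.sum()), len(x)
            if q in (0, m):                   # constant region: one level
                return [mu]
            if bits == 1:
                return [mu - sigma * np.sqrt(q / (m - q)),
                        mu + sigma * np.sqrt((m - q) / q)]
            return (mpt_levels(x[~above], bits - 1)
                    + mpt_levels(x[above], bits - 1))

        # e.g. 3 bits for this component -> up to 8 palette levels
        rng = np.random.default_rng(0)
        print(mpt_levels(rng.integers(0, 256, 4096).astype(float), 3))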

  • A fast vector quantization encoding method for image compression

    Publication Year: 1995, Page(s): 119-123
    Cited by:  Papers (13)

    This paper presents a general search method to speed up the encoding process in vector quantization. The method exploits the topological structure of the codebook to dynamically eliminate candidate code vectors when encoding a particular input vector, thereby decreasing the number of computationally intensive distance calculations. The relations between the proposed method and several existing fast algorithms are discussed. Based on the proposed method, a new fast encoding algorithm for vector quantization is developed. Simulation results demonstrate that, with little preprocessing and memory cost, the encoding time of the new algorithm is reduced significantly while the encoding quality remains identical to that of exhaustive search.
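    One standard instance of such elimination, shown as a sketch (the paper's specific structure is richer; here the norm of each code vector supplies the lower bound |  ||x|| - ||c||  | <= ||x - c||):

        import numpy as np

        def fast_vq_encode(x, codebook, norms):
            # norms = np.linalg.norm(codebook, axis=1), precomputed once
            # per codebook as the cheap preprocessing step.
            xn = np.linalg.norm(x)
            best, best_i = np.inf, -1
            # Visit code vectors by increasing norm gap; the gap lower-
            # bounds the true distance, so once it reaches the best
            # distance found, every remaining candidate is eliminated.
            for i in np.argsort(np.abs(norms - xn)):
                if np.abs(norms[i] - xn) >= best:
                    break
                d = np.linalg.norm(x - codebook[i])
                if d < best:
                    best, best_i = d, int(i)
            return best_i

    A production version would sort the codebook by norm once and expand outward from ||x|| instead of re-sorting per input; the elimination logic, and the guarantee of exhaustive-search quality, are unchanged.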


Aims & Scope

The emphasis is on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multiprocessor Systems: Hardware and Software
6. VLSI Architecture and Implementation for Video Technology


Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it