
IEEE Transactions on Circuits and Systems for Video Technology

Issue 1 • March 1992


12 articles in this issue
  • Comments on "Interpolative multiresolution coding of advanced television with compatible subchannels" (and reply and additional comments)

    Publication Year: 1992 , Page(s): 95 - 100
    Cited by:  Papers (1)

    Recently, Uz et al. (ibid., vol. 1, no. 1, p. 86-99, 1991) analyzed the propagation of quantization noise in pyramid (with feedback) and subband decomposition schemes, in which each band was independently quantized by a scalar quantizer of equal step size. The resulting reconstruction error spectrum indicated that in both the pyramid (without feedback) and subband coding schemes, noise builds up at lower frequencies. The commenters show that a quantizer assignment method using mean-square-error (MSE) optimal bit allocation avoids the problem. The authors reply that the problem involves more than MSE-optimal bit allocation: their focus was compatible coding, together with guaranteed quality, the effects of numerical computation, and perceptual effects. The commenters close with additional remarks in support of their argument.
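The MSE-optimal bit allocation the commenters invoke is the classical rule in which each band's rate is offset from the average by half the log-ratio of its variance to the geometric mean of all band variances. A minimal sketch (the band variances and average rate are illustrative, and negative allocations are not clipped):

```python
import numpy as np

def optimal_bit_allocation(variances, avg_bits):
    """Classical MSE-optimal bit allocation across bands:
    b_k = avg_bits + 0.5 * log2(var_k / geometric_mean(vars)).
    Allocations always sum to N * avg_bits; negative values
    would need clipping in a practical coder."""
    v = np.asarray(variances, dtype=float)
    geo_mean = np.exp(np.mean(np.log(v)))
    return avg_bits + 0.5 * np.log2(v / geo_mean)

# Illustrative variances for four bands, 2 bits/sample on average.
bits = optimal_bit_allocation([16.0, 4.0, 1.0, 1.0], avg_bits=2.0)
```

Bands with above-average variance receive extra bits, which is exactly what counteracts the low-frequency noise build-up the comment describes.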

  • High-order entropy coding for images

    Publication Year: 1992 , Page(s): 87 - 89
    Cited by:  Papers (6)

    A preliminary study shows the effectiveness of high-order entropy coding for 2-D data. The incremental conditioning-tree extension method is the key element for reducing the complexity of high-order statistical coding. Determining the conditioning state in the non-full tree for a given sample is functionally similar to extracting a codeword from a variable-length-coded bit string. Therefore, the hardware structure used for decoding variable-length codes can also be applied to determine the conditioning state from data in the causal region.
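The conditioning-state idea can be illustrated with the smallest possible context: condition each sample on its single west neighbor and measure the resulting conditional entropy. A toy sketch (a real high-order coder extends this causal context node by node via the conditioning tree):

```python
import numpy as np
from collections import Counter

def conditional_entropy(img):
    """Entropy (bits/sample) of each sample conditioned on its west
    neighbor -- a one-node causal context. High-order coders shrink
    this further by growing the conditioning tree incrementally."""
    pairs = Counter(zip(img[:, :-1].ravel(), img[:, 1:].ravel()))
    total = sum(pairs.values())
    context_counts = Counter()
    for (west, _), n in pairs.items():
        context_counts[west] += n
    h = 0.0
    for (west, _), n in pairs.items():
        h -= (n / total) * np.log2(n / context_counts[west])
    return h
```

A constant image, or a checkerboard whose next pixel is fully determined by its west neighbor, both cost zero bits under this model.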

  • Statistical analysis and simulation study of video teleconference traffic in ATM networks

    Publication Year: 1992 , Page(s): 49 - 59
    Cited by:  Papers (193)

    Source modeling and performance issues are studied using a long (30 min) sequence of real video teleconference data. It is found that traffic periodicity can cause different sources with identical statistical characteristics to experience differing cell-loss rates. For a single-stage multiplexer model, some of this source-periodicity effect can be mitigated by appropriate buffer scheduling, and one effective scheduling policy is presented. For the sequence analyzed, the number of cells per frame follows a gamma (or negative binomial) distribution and is a stationary stochastic process. Neither an autoregressive model of order two nor a two-state Markov chain model is adequate for traffic studies, because neither correctly models the occurrence of frames with a large number of cells, which are a primary factor in determining cell-loss rates. The order-two autoregressive model does, however, fit the data well in a statistical sense. A multistate Markov chain model that can be derived from three traffic parameters is sufficiently accurate for use in traffic studies.
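The gamma fit reported above can be reproduced with a simple method-of-moments estimate (shape k = m^2/v and scale theta = v/m from the sample mean m and variance v); the synthetic data below merely stands in for real cells-per-frame counts:

```python
import numpy as np

def gamma_fit_moments(cells_per_frame):
    """Method-of-moments gamma fit to per-frame cell counts:
    shape k = m^2 / v, scale theta = v / m."""
    x = np.asarray(cells_per_frame, dtype=float)
    m, v = x.mean(), x.var()
    return m * m / v, v / m

# Synthetic stand-in for teleconference traffic: true shape 5, scale 20.
rng = np.random.default_rng(0)
sample = rng.gamma(shape=5.0, scale=20.0, size=10_000)
k_hat, theta_hat = gamma_fit_moments(sample)
```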

  • Design of 2-D FIR filter possessing purely imaginary frequency response by the transformation method

    Publication Year: 1992 , Page(s): 89 - 91

    A novel method is proposed for the design of a 2-D FIR filter possessing a purely imaginary frequency response, combining the McClellan transformation with the transformation method proposed by Y.L. Tai and T.P. Lin (1989). The McClellan transformation alone can be used only for the design of symmetric 2-D FIR filters. The proposed method serves not only for the design of 2-D FIR filters possessing a purely imaginary frequency response, but also, with some modifications, for the design of symmetric 2-D FIR filters (possessing a purely real frequency response) using the same structures proposed for the McClellan transformation method.
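The 1-D analogue of the target property is worth keeping in mind: a zero-centred antisymmetric impulse response h[-n] = -h[n] has a purely imaginary frequency response, because the cosine (real) terms cancel pairwise. A quick numerical check with arbitrary illustrative coefficients:

```python
import numpy as np

# Zero-centred antisymmetric impulse response, indices n = -2..2.
h = np.array([-0.3, -0.5, 0.0, 0.5, 0.3])
n = np.arange(-2, 3)

# Frequency response H(w) = sum_n h[n] * exp(-1j*w*n) on a grid.
w = np.linspace(-np.pi, np.pi, 101)
H = np.exp(-1j * np.outer(w, n)) @ h

# The real part vanishes identically:
# H(w) = -2j * sum_{n>0} h[n] * sin(w*n).
```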

  • Algorithms and systolic architectures for multidimensional adaptive filtering via McClellan transformations

    Publication Year: 1992 , Page(s): 60 - 71
    Cited by:  Papers (5)

    Algorithms are developed simultaneously with systolic architectures for multidimensional adaptive filtering. Because of the extremely high data rate required for real-time video processing, there is a strong motivation to limit the size of any adaptation problem. Combining the McClellan transformations with systolic arrays to adapt and implement the least-squares filter yields a novel solution to the problem of adapting a large zero-phase finite impulse response (FIR) multidimensional filter, having arbitrary directional biases, with only a few parameters. These filters can be adapted abruptly on a block-by-block basis without causing blocking effects. After a basic processing element is developed for a systolic array realization of the Chebyshev structure for the McClellan transformation, it is shown that for a given 2-D transformation function, the adaptation of the 1-D prototype filter becomes a small multichannel adaptation problem similar to adaptive array problems. A similar approach is taken in developing algorithms to adapt the 2-D transformation function itself.
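The Chebyshev structure mentioned above evaluates the 2-D zero-phase response by substituting the transformation function F(w1, w2) for cos(w) in the 1-D prototype sum a_n T_n(cos w), using the recursion T_n = 2*F*T_{n-1} - T_{n-2}. A scalar sketch using the standard nearly circular transformation function (the prototype coefficients are illustrative):

```python
import numpy as np

def mcclellan_response(a, w1, w2):
    """2-D zero-phase response of a McClellan-transformed filter:
    H(w1, w2) = sum_k a[k] * T_k(F(w1, w2)), with T_k evaluated by
    the Chebyshev recursion T_k = 2*F*T_{k-1} - T_{k-2}."""
    # Standard nearly circular transformation function; on the
    # w2 = 0 axis it reduces to cos(w1), recovering the 1-D prototype.
    F = -0.5 + 0.5 * (np.cos(w1) + np.cos(w2) + np.cos(w1) * np.cos(w2))
    t_prev, t_cur = np.ones_like(F), F
    H = a[0] * t_prev + a[1] * t_cur
    for k in range(2, len(a)):
        t_prev, t_cur = t_cur, 2.0 * F * t_cur - t_prev
        H = H + a[k] * t_cur
    return H
```

Adapting the few prototype coefficients a[k] (or the few coefficients inside F) is what reduces the large 2-D problem to a small multichannel one.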

  • Iterative procedures for reduction of blocking effects in transform image coding

    Publication Year: 1992 , Page(s): 91 - 95
    Cited by:  Papers (173)  |  Patents (152)

    The authors propose an iterative blocking-effect reduction technique based on the theory of projection onto convex sets. The idea is to impose a number of constraints on the coded image so as to restore it to its original artifact-free form. One such constraint exploits the fact that a transform-coded image suffering from blocking effects contains high-frequency vertical and horizontal artifacts corresponding to discontinuities across the boundaries of neighboring blocks. Another constraint has to do with the quantization intervals of the transform coefficients: the decision levels associated with the transform-coefficient quantizers can serve as lower and upper bounds on the coefficients, which in turn define the boundaries of the convex set for projection. A few examples of the proposed approach are presented.
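The quantization-interval constraint is a particularly simple projection: each restored coefficient is clipped back into the interval that produced its quantized value. A sketch assuming a uniform quantizer of known step size (an illustrative simplification; the paper works with the quantizers' actual decision levels):

```python
import numpy as np

def project_quantization_constraint(coeffs, quantized, step):
    """Project restored transform coefficients onto the convex set
    consistent with the received quantized values: each coefficient
    must lie in [q - step/2, q + step/2] for a uniform quantizer."""
    return np.clip(coeffs, quantized - step / 2.0, quantized + step / 2.0)

restored = np.array([1.40, -0.20])  # coefficients after a smoothing step
received = np.array([1.00, 0.00])   # dequantized values from the bitstream
projected = project_quantization_constraint(restored, received, step=0.5)
```

Alternating this projection with a smoothness constraint is what drives the iteration toward an artifact-free image consistent with the bitstream.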

  • Real-time parallel and fully pipelined two-dimensional DCT lattice structures with application to HDTV systems

    Publication Year: 1992 , Page(s): 25 - 37
    Cited by:  Papers (36)  |  Patents (3)

    The authors propose a fully pipelined architecture to compute the 2-D discrete cosine transform (DCT) from a frame-recursive point of view. Based on this approach, two real-time parallel lattice structures for successive frame and block 2-D DCT are developed. These structures are fully pipelined, with a throughput of N clock cycles for an N×N successive input data frame. Moreover, the resulting 2-D DCT architectures are modular, regular, and locally connected, and require only two 1-D DCT blocks that are extended directly from the 1-D DCT structure without transposition. The architecture is therefore suitable for VLSI implementation in high-speed HDTV systems. A parallel 2-D DCT architecture and a scanning pattern for HDTV systems that achieve higher performance are proposed, and a VLSI implementation of the 2-D DCT using distributed arithmetic to increase computational efficiency and reduce round-off error is discussed.
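The separable computation the lattice structures realize is the row-column 2-D DCT, C X C^T for an orthonormal DCT-II matrix C; the paper's contribution is computing it with two 1-D blocks and no explicit transposition stage. A plain reference sketch of the underlying transform:

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix of size N x N."""
    k = np.arange(N)[:, None]   # frequency index (rows)
    n = np.arange(N)[None, :]   # sample index (columns)
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    C[0] /= np.sqrt(2.0)
    return C

def dct2(X):
    """Row-column 2-D DCT of a square block: C @ X @ C.T."""
    C = dct_matrix(X.shape[0])
    return C @ X @ C.T
```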

  • Flexible architectures for morphological image processing and analysis

    Publication Year: 1992 , Page(s): 72 - 83
    Cited by:  Papers (12)

    An architecture for the efficient and high-speed realization of morphological filters is presented. Since morphological filtering can be described in terms of erosion and dilation, two basic building units performing these functions are required for the realization of any morphological filter. Dual architectures for erosion and dilation are proposed and their operations are described. Their structure, similar to the systolic array architecture used in the implementation of linear digital filters, is highly modular and suitable for efficient very-large-scale integration (VLSI) implementation. A decomposition scheme is proposed to facilitate the implementation of two-dimensional morphological filters based on one-dimensional structuring elements constructed using the dual architectures. The proposed architectures, which also allow the processing of gray-scale images, are appropriate for applications where speed, size, and cost are of critical significance.
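The decomposition scheme can be illustrated for a flat square structuring element: a k x k grey-scale erosion factors into a horizontal 1-D pass followed by a vertical 1-D pass, so 1-D units suffice to build the 2-D filter (dilation is the dual, with max in place of min). A sketch, with edge padding as an illustrative boundary choice:

```python
import numpy as np

def erode_1d(line, k):
    """Grey-scale erosion by a flat 1-D structuring element of length k."""
    pad = np.pad(line, (k // 2, k - 1 - k // 2), mode='edge')
    return np.array([pad[i:i + k].min() for i in range(len(line))])

def erode_square(img, k):
    """k x k flat square erosion via two 1-D passes (the decomposition
    that lets 1-D erosion/dilation units implement 2-D filters).
    Dilation is obtained by replacing min with max."""
    horizontal = np.apply_along_axis(erode_1d, 1, img, k)
    return np.apply_along_axis(erode_1d, 0, horizontal, k)
```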

  • Two-variable modularized fast polynomial transform algorithm for 2-D discrete Fourier transforms

    Publication Year: 1992 , Page(s): 84 - 87
    Cited by:  Papers (1)

    A novel two-variable modularized fast polynomial transform (FPT) algorithm is presented. In this method, only fast polynomial transforms and fast Fourier transforms of the same length are required. The modularity, regularity, and easy extensibility of the proposed algorithm make it of great practical value in computing multidimensional discrete Fourier transforms (DFTs).

  • A real-time column array processor architecture for images

    Publication Year: 1992 , Page(s): 38 - 48
    Cited by:  Papers (1)  |  Patents (3)

    A column array processor (CAP) architecture for real-time (video-rate) morphological image processing is proposed. The basic idea behind this structure is a serial-to-parallel data input format combined with circular data-stream processing. The column array processor appears more economical and flexible than a cellular array processor and more functional than a pipelined systolic array. The authors describe the concept and structure of the processor and compare its operation to that of other processors. The primary motivation behind such an array processor is to implement morphological image analysis and processing algorithms in real time for application to video signals, although its use is not limited to morphological operations.

  • Image sequence coding using adaptive finite-state vector quantization

    Publication Year: 1992 , Page(s): 15 - 24
    Cited by:  Papers (9)  |  Patents (3)

    A coding algorithm for image sequences must be able to adapt to changing image characteristics. An adaptive finite-state vector quantization (FSVQ) scheme in which both the bit rate and the encoding time can be reduced is described. To improve image quality and avoid producing a wrong state for an input vector, a threshold is used in the FSVQ to decide whether to switch to a full-search VQ. The codebook is conditionally replenished according to a distortion threshold to reflect the local statistics of the current frame. After the codebook is replenished, the state codebooks of the FSVQ can be quickly reconstructed using the state-codebook selection algorithm. In the experiments on the image sequence "Claire", the improvement over static SMVQ is up to 2.40 dB at nearly the same bit rate, with an encoding time only one-ninth of that required by static SMVQ; the improvement over static VQ is up to 2.91 dB, with an encoding time only three-fifths of that required by static VQ.
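The distortion-threshold test can be sketched as a two-stage search: try the small state codebook first, and fall back to a full search of the master codebook only when the best state match is too poor. The codebooks, vectors, and threshold below are illustrative:

```python
import numpy as np

def encode_vector(x, state_codebook, full_codebook, threshold):
    """FSVQ-style encoding sketch: nearest-neighbor search in the
    small state codebook; if its best squared-error distortion
    exceeds the threshold, switch to a full codebook search."""
    d_state = ((state_codebook - x) ** 2).sum(axis=1)
    best = int(d_state.argmin())
    if d_state[best] <= threshold:
        return ('state', best)
    d_full = ((full_codebook - x) ** 2).sum(axis=1)
    return ('full', int(d_full.argmin()))
```

Most vectors are served by the small state codebook (short search, short index); the fallback bounds the quality loss when the state prediction is wrong.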

  • A transform domain classified vector quantizer for image coding

    Publication Year: 1992 , Page(s): 3 - 14
    Cited by:  Papers (20)  |  Patents (6)

    An image-coding technique in which the discrete cosine transform (DCT) is combined with classified vector quantization (CVQ) is presented. A DCT-transformed input block is classified according to its perceptual features, partitioned into several smaller vectors, and then vector quantized. An efficient edge-oriented classifier employing the DCT coefficients as classification features is used to maintain edge integrity in the reconstructed image. A partition scheme in which the 2-D DCT coefficients are divided into several smaller vectors, chosen for a smaller geometric mean of the vector variances, is also investigated. Because the distortion-rate function (DRF) is essential for the bit-allocation algorithm to perform well, the asymptotic DRF is modified to estimate the performance of real VQs at low bit rates, and the modification is shown to be in good agreement with experimental results. Simulation results indicate good visual quality of the coded image in the range of 0.4 to 0.7 bit/pixel.
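An edge-oriented use of DCT coefficients can be sketched by comparing the AC energy in the first row (horizontal-frequency terms, which respond to vertical edges) with that in the first column; the two-way rule, the threshold, and the class labels here are illustrative assumptions, not the paper's actual classifier:

```python
import numpy as np

def classify_block(dct_block, shade_threshold=1e-3):
    """Toy edge-oriented classification from DCT coefficients:
    near-zero AC energy -> 'shade'; otherwise compare first-row
    (vertical-edge) against first-column (horizontal-edge) energy."""
    row_energy = float((dct_block[0, 1:] ** 2).sum())
    col_energy = float((dct_block[1:, 0] ** 2).sum())
    if row_energy + col_energy < shade_threshold:
        return 'shade'
    return 'vertical_edge' if row_energy > col_energy else 'horizontal_edge'
```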


Aims & Scope

The emphasis is on, but not limited to:
1. Video A/D and D/A
2. Video Compression Techniques and Signal Processing
3. Multi-Dimensional Filters and Transforms
4. High-Speed Real-Time Circuits
5. Multi-Processor Systems: Hardware and Software
6. VLSI Architecture and Implementation for Video Technology 

 


Meet Our Editors

Editor-in-Chief
Dan Schonfeld
Multimedia Communications Laboratory
ECE Dept. (M/C 154)
University of Illinois at Chicago (UIC)
Chicago, IL 60607-7053
tcsvt-eic@tcad.polito.it

Managing Editor
Jaqueline Zelkowitz
tcsvt@tcad.polito.it