Image Processing, IEEE Transactions on

Issue 2 • Date Feb 1995

Displaying Results 1 - 10 of 10
  • Three-dimensional subband coding of video

    Publication Year: 1995 , Page(s): 125 - 139
    Cited by:  Papers (74)  |  Patents (2)

    We describe and show the results of video coding based on a three-dimensional (3-D) spatio-temporal subband decomposition. The results include a 1-Mbps coder based on a new adaptive differential pulse code modulation (ADPCM) scheme and adaptive bit allocation. This rate is useful for video storage on CD-ROM. Coding results are also shown for a 384-kbps rate, based on ADPCM for the lowest frequency band and a new form of vector quantization, geometric vector quantization (GVQ), for the data in the higher frequency bands. GVQ takes advantage of the inherent structure and sparseness of the data in the higher bands. Results are also shown for a 128-kbps coder that is based on an unbalanced tree-structured vector quantizer (UTSVQ) for the lowest frequency band and GVQ for the higher frequency bands. The results are competitive with traditional video coding techniques and provide the motivation for investigating the 3-D subband framework for different coding schemes and various applications.
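    As background, the temporal (interframe) stage of such a 3-D decomposition can be sketched with a 2-tap Haar filter pair applied along the frame axis; the function names below are illustrative, and the abstract does not specify the authors' actual filters.

```python
import numpy as np

def temporal_haar_split(frames):
    """Split a stack of frames (T, H, W), T even, into temporal
    low- and high-frequency bands with a 2-tap Haar pair."""
    f = np.asarray(frames, dtype=float)
    even, odd = f[0::2], f[1::2]
    low = (even + odd) / np.sqrt(2.0)   # temporal average band
    high = (even - odd) / np.sqrt(2.0)  # temporal difference band
    return low, high

def temporal_haar_merge(low, high):
    """Invert the split exactly (perfect reconstruction)."""
    even = (low + high) / np.sqrt(2.0)
    odd = (low - high) / np.sqrt(2.0)
    frames = np.empty((2 * low.shape[0],) + low.shape[1:])
    frames[0::2], frames[1::2] = even, odd
    return frames
```

    Each output band would then be decomposed spatially (the 2-D part of the 3-D scheme) before quantization.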

  • Halftone to continuous-tone conversion of error-diffusion coded images

    Publication Year: 1995 , Page(s): 208 - 216
    Cited by:  Papers (44)  |  Patents (2)

    Considers the problem of reconstructing a continuous-tone (contone) image from its halftoned version, where the halftoning process is done by error diffusion. The authors present an iterative nonlinear decoding algorithm for halftone-to-contone conversion and show simulation results that compare the performance of the algorithm to that of conventional linear low-pass filtering. They find that the new technique results in subjectively superior reconstruction. As there is a natural relationship between error diffusion and ΣΔ modulation, the reconstruction algorithm can also be applied to the decoding problem for ΣΔ modulators.
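    For context, the forward halftoning step can be sketched with the classical Floyd-Steinberg error-diffusion weights, together with the conventional linear low-pass baseline the decoder is compared against. This is not the authors' iterative nonlinear decoder, and the kernel size below is an arbitrary choice.

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] by
    Floyd-Steinberg error diffusion (weights 7, 3, 5, 1 over 16)."""
    f = np.asarray(img, dtype=float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - out[y, x]
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

def lowpass_decode(halftone, k=5):
    """Conventional linear baseline: k-by-k box-filter averaging."""
    pad = k // 2
    p = np.pad(halftone, pad, mode='edge')
    return np.mean(
        [p[dy:dy + halftone.shape[0], dx:dx + halftone.shape[1]]
         for dy in range(k) for dx in range(k)], axis=0)
```

    Error diffusion approximately preserves local gray level, which is what any decoder, linear or nonlinear, exploits.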

  • Image coding with uniform and piecewise-uniform vector quantizers

    Publication Year: 1995 , Page(s): 140 - 146
    Cited by:  Papers (10)

    New lattice vector quantizer design procedures for nonuniform sources that yield excellent performance while retaining the structure required for fast quantization are described. Analytical methods for truncating and scaling lattices to be used in vector quantization are given, and an analytical technique for piecewise-linear multidimensional companding is presented. The uniform and piecewise-uniform lattice vector quantizers are then used to quantize the discrete cosine transform coefficients of images, and their objective and subjective performance and complexity are contrasted with other lattice vector quantizers and with LBG training-mode designs.
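    As a minimal illustration of the fast quantization that lattice structure permits, here is Conway and Sloane's nearest-point rule for the checkerboard lattice D_n (integer points with even coordinate sum), shown as background; the paper's truncation, scaling, and companding steps are not reproduced here.

```python
import numpy as np

def nearest_Dn(x):
    """Nearest point of the lattice D_n to x: round every coordinate;
    if the rounded coordinates have odd sum, re-round the coordinate
    with the largest rounding error the other way."""
    x = np.asarray(x, dtype=float)
    f = np.round(x)
    if int(f.sum()) % 2 != 0:
        i = np.argmax(np.abs(x - f))          # worst-rounded coordinate
        f[i] += 1.0 if x[i] > f[i] else -1.0  # flip its rounding direction
    return f
```

    The whole operation is a handful of arithmetic steps per vector, which is why structured lattice quantizers avoid the exhaustive codebook search of trained designs.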

  • Likelihood calculation for a class of multiscale stochastic models, with application to texture discrimination

    Publication Year: 1995 , Page(s): 194 - 207
    Cited by:  Papers (18)

    A class of multiscale stochastic models based on scale-recursive dynamics on trees has previously been introduced. Theoretical and experimental results have shown that these models provide an extremely rich framework for representing both processes that are intrinsically multiscale, e.g., 1/f processes, and 1D Markov processes and 2D Markov random fields. Moreover, efficient optimal estimation algorithms have been developed for these models by exploiting their scale-recursive structure. The authors exploit this structure to develop a computationally efficient and parallelizable algorithm for likelihood calculation. They illustrate one possible application to texture discrimination and demonstrate that likelihood-based methods using the algorithm achieve performance comparable to that of Gaussian Markov random field based techniques, which are in general prohibitively complex computationally.

  • Color image coding using morphological pyramid decomposition

    Publication Year: 1995 , Page(s): 177 - 185
    Cited by:  Papers (4)  |  Patents (1)

    Presents a new algorithm that utilizes mathematical morphology for pyramidal coding of color images. The authors obtain lossy color image compression by using block truncation coding at the pyramid levels to attain reduced data rates. The pyramid approach is attractive due to low computational complexity, simple parallel implementation, and the ability to produce acceptable color images at moderate data rates. In many applications, the progressive transmission capability of the algorithm is very useful. The authors show experimental results for color images at a data rate of 1.89 bits/pixel.
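    The block truncation coding step can be sketched in its classical single-channel form: each block is reduced to a bitmap plus two levels chosen so that the block's mean and variance are preserved. This is textbook BTC, not the paper's color handling or morphological pyramid.

```python
import numpy as np

def btc_block(block):
    """Classical block truncation coding of one block: threshold at the
    block mean, then pick two output levels that preserve the block's
    mean and standard deviation exactly."""
    b = np.asarray(block, dtype=float)
    m, s = b.mean(), b.std()
    bitmap = b >= m
    n, q = b.size, int(bitmap.sum())
    if q in (0, n):                      # flat block: one level suffices
        return np.full_like(b, m)
    lo = m - s * np.sqrt(q / (n - q))    # level for below-mean pixels
    hi = m + s * np.sqrt((n - q) / q)    # level for above-mean pixels
    return np.where(bitmap, hi, lo)
```

    Storing only the bitmap (1 bit/pixel) and the two levels per block is what yields the low, fixed computational cost the abstract cites.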

  • A multidimensional nonlinear edge-preserving filter for magnetic resonance image restoration

    Publication Year: 1995 , Page(s): 147 - 161
    Cited by:  Papers (14)  |  Patents (1)

    The paper presents a multidimensional nonlinear edge-preserving filter for restoration and enhancement of magnetic resonance images (MRI). The filter uses both interframe (parametric or temporal) and intraframe (spatial) information to filter the additive noise from an MRI scene sequence. It combines the approximate maximum likelihood (equivalently, least squares) estimate of the interframe pixels, using MRI signal models, with a trimmed spatial smoothing algorithm that uses a Euclidean distance discriminator to preserve partial volume and edge information. (Partial volume information is generated from voxels containing a mixture of different tissues.) Since the filter's structure is parallel, its implementation on a parallel processing computer is straightforward. Details of the filter implementation for a sequence of four multiple spin-echo images are explained, and the effects of the filter parameters (neighborhood size and threshold value) on computation time and performance are discussed. The filter is applied to MRI simulation and brain studies, serving as a preprocessing procedure for the eigenimage filter. (The eigenimage filter generates a composite image in which a feature of interest is segmented from the surrounding interfering features.) It outperforms conventional pre- and postprocessing filters, including spatial smoothing, low-pass filtering with a Gaussian kernel, median filtering, and combined vector median with average filtering.
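    A minimal sketch of the trimmed-smoothing idea, assuming a 3x3 neighborhood and treating each pixel of the echo sequence as a vector: neighbors whose Euclidean distance from the center vector exceeds a threshold are excluded from the average, which is what lets edges and partial-volume boundaries survive. The paper's maximum likelihood interframe stage is not reproduced, and the threshold here is a free parameter.

```python
import numpy as np

def trimmed_smooth(seq, thresh):
    """seq: (C, H, W) stack, e.g., C spin-echo frames. Average each
    interior pixel with only those 3x3 neighbors whose C-dimensional
    vector lies within `thresh` (Euclidean) of the center vector."""
    c, h, w = seq.shape
    out = seq.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = seq[:, y, x].astype(float)
            nbrs = seq[:, y - 1:y + 2, x - 1:x + 2].reshape(c, 9).T
            keep = np.linalg.norm(nbrs - centre, axis=1) < thresh
            out[:, y, x] = nbrs[keep].mean(axis=0)  # center always kept
    return out
```

    Because each pixel is processed independently, the same structure maps directly onto the parallel implementation the abstract mentions.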

  • A new parallel binary image shrinking algorithm

    Publication Year: 1995 , Page(s): 224 - 226
    Cited by:  Papers (2)

    A new parallel binary image shrinking algorithm is presented. It can be considered an improvement of Levialdi's (1972) parallel shrinking algorithm, and it can be used in many image labeling algorithms as a basic operation to reduce the storage requirements for local memory and to speed up the labeling process. The new algorithm shrinks an n×n binary image to an image with no black pixels in O(n) parallel steps with a multiplicative constant of 1.5, preserving 8-connectivity in the shrinking process.
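    The improved algorithm itself is not given in the abstract, but the classical operator it builds on can be sketched. Below is one parallel step of Levialdi's operator in the formulation commonly quoted in the literature, new = h(h(b + W + N - 1) + h(b + NW - 1)) with h(t) = 1 for t >= 1; this is a reconstruction from that literature, not the authors' new rule.

```python
import numpy as np

def levialdi_step(b):
    """One parallel step of Levialdi's (1972) shrinking operator on a
    0/1 image. Isolated pixels vanish, components shrink step by step,
    and connected components never split or merge."""
    h = lambda t: (t >= 1).astype(int)
    p = np.pad(b, 1)          # zero border so every pixel has neighbors
    B = p[1:-1, 1:-1]
    W = p[1:-1, :-2]          # west neighbor
    N = p[:-2, 1:-1]          # north neighbor
    NW = p[:-2, :-2]          # north-west neighbor
    return h(h(B + W + N - 1) + h(B + NW - 1))
```

    Iterating the step until the image is empty is the basic operation that labeling algorithms build on: each component shrinks to a single pixel before disappearing, so components can be counted and tracked.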

  • On the modeling of DCT and subband image data for compression

    Publication Year: 1995 , Page(s): 186 - 193
    Cited by:  Papers (70)  |  Patents (7)

    Image subband and discrete cosine transform coefficients are modeled for efficient quantization and noiseless coding. Quantizers and codes are selected based on Laplacian, fixed generalized Gaussian, and adaptive generalized Gaussian models. The quantizers and codes based on the adaptive generalized Gaussian models are always superior in mean-squared error distortion performance but, generally, by no more than 0.08 bit/pixel, compared with the much simpler Laplacian model-based quantizers and noiseless codes. This provides strong motivation for the selection of pyramid codes for transform and subband image coding.
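    To make the modeling concrete, here is one standard way (moment-ratio matching, after Mallat) to fit a generalized Gaussian shape parameter to a band of coefficients. The abstract does not specify the authors' adaptive fitting procedure, so this is background, not their method; beta = 1 recovers the Laplacian model and beta = 2 the Gaussian.

```python
import math

def gg_ratio(beta):
    """Moment ratio E|x| / sqrt(E x^2) of a zero-mean generalized
    Gaussian with shape parameter beta; increases monotonically."""
    return math.gamma(2 / beta) / math.sqrt(
        math.gamma(1 / beta) * math.gamma(3 / beta))

def estimate_shape(samples, lo=0.1, hi=5.0, iters=60):
    """Estimate beta by matching the empirical moment ratio via
    bisection on gg_ratio."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    r = m1 / math.sqrt(m2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gg_ratio(mid) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    A band whose estimate lands near beta = 1 would justify the simpler Laplacian-based quantizer and code that the paper finds nearly as good.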

  • Parallel and local feature extraction: a real-time approach to road boundary detection

    Publication Year: 1995 , Page(s): 217 - 223
    Cited by:  Papers (21)  |  Patents (1)

    Presents a system for the extraction of road boundaries from an image taken in an out-of-town environment. In this application, computational speed and performance play a fundamental role in the selection of the hardware platform and the design of algorithms. The algorithm has been designed to be implemented on a special-purpose mesh-connected SIMD architecture, PAPRICA, which will be fitted to the vehicle. This presentation focuses on the algorithms and, in particular, on processing speed.

  • On cosine-modulated wavelet orthonormal bases

    Publication Year: 1995 , Page(s): 162 - 176
    Cited by:  Papers (17)

    Multiplicity M, K-regular, orthonormal wavelet bases (which have implications in transform coding applications) have previously been constructed by several authors. The paper describes and parameterizes the cosine-modulated class of multiplicity M wavelet tight frames (WTFs). In these WTFs, the scaling function uniquely determines the wavelets. This is in contrast to the general multiplicity M case, where one has to design the scaling function and the wavelets for any given application. Several techniques for the design of K-regular cosine-modulated WTFs are described and their relative merits discussed. Wavelets in K-regular WTFs may or may not be smooth. Since coding applications use WTFs with short scaling and wavelet vectors (long filters produce ringing artifacts, which are undesirable in, say, image coding), many smooth designs of K-regular WTFs of short length are presented. In some cases, analytical formulas for the scaling and wavelet vectors are also given. In many applications, smoothness of the wavelets is more important than K-regularity. The authors define smoothness of filter banks and WTFs using the concept of total variation and give several useful designs based on this smoothness criterion. Optimal design of cosine-modulated WTFs for signal representation is also described. All WTFs constructed in the paper are orthonormal bases.

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003