
IEEE Transactions on Image Processing

Issue 1 • January 1992


Contents: 14 articles
  • New studies on adaptive predictive coding of images using multiplicative autoregressive models

    Page(s): 106 - 111

The authors introduce two new one-dimensional multiplicative autoregressive (MAR) models for adaptive predictive coding of digitized images. The proposed scheme offers a number of advantages, including ease of implementation, a high signal-to-noise ratio at a moderate bit rate, and guaranteed stability of the predictive coder. Results of extensive experimental studies are presented.

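The prediction-coding loop behind schemes like this can be sketched with a plain first-order linear predictor. This is only an illustrative simplification: the paper's MAR models replace the linear predictor with a multiplicative form, and the coefficient `a` and unit-step quantizer here are assumptions.

```python
import numpy as np

def dpcm_encode(x, a=0.95):
    """Predictive coding of a 1-D signal: transmit quantized residuals.

    Predictor x_hat[n] = a * x_rec[n-1]; the decoder tracks the same
    reconstruction, so encoder and decoder never drift apart.
    """
    residuals = np.empty(len(x), dtype=float)
    reconstruction = np.empty(len(x), dtype=float)
    prev = 0.0
    for n, sample in enumerate(x):
        pred = a * prev
        residuals[n] = np.round(sample - pred)      # coarse "quantizer"
        reconstruction[n] = pred + residuals[n]     # decoder's view
        prev = reconstruction[n]
    return residuals, reconstruction
```

Because the encoder predicts from its own reconstruction, the per-sample error is bounded by the quantizer step (here 0.5).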
  • A unified approach to lapped orthogonal transforms

    Page(s): 111 - 116

The general conditions for exact reconstruction and a recursive design procedure for lapped orthogonal transforms (LOTs) with arbitrary overlap length are presented. It is shown that an LOT can be realized with any standard block transform, for example the discrete cosine transform (DCT), plus additional processing. This processing must also satisfy the conditions for exact reconstruction, and it may be pre-transform processing in the time domain or post-transform processing in the transform domain. Examples show that the LOT has a higher coding gain and smaller blocking effects than the DCT. With the proposed LOT design procedure, two optimizations, coding-gain maximization and blocking-effect minimization, are presented and compared.

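An LOT is built on top of an ordinary orthonormal block transform. A minimal sketch of that building block, an orthonormal DCT-II matrix whose orthogonality gives exact reconstruction within a block, might look like the following (the overlap pre-/post-processing stage of the LOT itself is not shown):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II analysis matrix of size N x N.

    LOT designs start from a standard block transform like this one
    and add pre-/post-transform processing to handle the overlap.
    """
    k = np.arange(N)[:, None]   # frequency index (rows)
    n = np.arange(N)[None, :]   # time index (columns)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
    C[0] /= np.sqrt(2.0)        # DC row scaling for orthonormality
    return C
```

Orthonormality (`C @ C.T == I`) is exactly the per-block "exact reconstruction" condition: `C.T @ (C @ block)` returns the block unchanged.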
  • On the convergence of the generalized maximum likelihood algorithm for nonuniform image motion estimation

    Page(s): 116 - 119

The generalized maximum likelihood algorithm is a powerful iterative scheme for waveform estimation. The algorithm seeks the maximum likelihood estimates of the Karhunen-Loeve expansion coefficients of the waveform, with the search for the maximum performed by a steepest-ascent routine. The objective of the paper is to obtain conditions that assure stability in the mean for frame-to-frame image motion estimation. Sufficient conditions are established for the convergence of the algorithm in the absence of noise, and experimental results illustrate the behavior of the algorithm in the presence of various noise levels.

  • Multiplication free vector quantization using L1 distortion measure and its variants

    Page(s): 11 - 17

The author considers vector quantization that uses the L1 distortion measure for its implementation. A gradient-based approach for codebook design that does not require any multiplications or median computation is proposed, and its convergence is proved rigorously under very mild conditions. Simulation examples comparing this technique with the LBG algorithm show that the gradient-based method, despite its simplicity, produces codebooks with average distortions comparable to those of the LBG algorithm. The codebook design algorithm is then extended to a distortion measure with piecewise-linear characteristics. Once again, by appropriate selection of the parameters of the distortion measure, both the encoding and the codebook design can be implemented with zero multiplications. The author applies these techniques to predictive vector quantization of images and demonstrates the viability of multiplication-free predictive vector quantization of image data.

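The multiplication-free property rests on the fact that L1 distortion needs only subtractions and absolute values. A sketch of the encoder's nearest-codeword search under that metric (the codebook and inputs are illustrative; the paper's gradient-based codebook design is not shown):

```python
import numpy as np

def l1_encode(vectors, codebook):
    """Map each input vector to its nearest codeword under the L1 metric.

    The distortion sum uses only subtraction and absolute value, so the
    encoder itself requires no multiplications.
    """
    # |v - c| summed over components, for every (vector, codeword) pair
    dists = np.abs(vectors[:, None, :] - codebook[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)
```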
  • Identification and restoration of spatially variant motion blurs in sequential images

    Page(s): 123 - 126

Sequential imaging cameras are designed to record objects in motion. When the speed of the objects exceeds the temporal resolution of the shutter, the image is blurred. Because objects in a scene are often moving in different directions at different speeds, the degradation of a recorded image may be characterized by a space-variant point spread function (PSF). The sequential nature of such images can be used to determine the relative motion of various parts of the image. This information can be used to estimate the space-variant PSF. A modification of the Landweber iteration is developed to utilize the space-variant PSF to produce an estimate of the original image.

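The classical Landweber iteration that the paper modifies can be sketched for a generic linear blur operator in matrix form. Representing the blur as a dense matrix `H` and the fixed step size are simplifying assumptions; the paper's contribution is adapting the iteration to a space-variant PSF.

```python
import numpy as np

def landweber(blurred, H, n_iter=500, tau=None):
    """Landweber iteration: x <- x + tau * H^T (y - H x).

    Converges to a least-squares solution of y = H x when the step
    tau is below 2 / ||H||^2.
    """
    if tau is None:
        tau = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        x = x + tau * H.T @ (blurred - H @ x)
    return x
```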
  • Image coding based on a fractal theory of iterated contractive image transformations

    Page(s): 18 - 30

The author proposes an independent and novel approach to image coding, based on a fractal theory of iterated transformations. The main characteristics of this approach are that (i) it relies on the assumption that image redundancy can be efficiently exploited through self-transformability on a block-wise basis, and (ii) it approximates an original image by a fractal image. The author refers to the approach as fractal block coding. The coding-decoding system is based on the construction, for an original image to encode, of a specific image transformation, a fractal code, which, when iterated on any initial image, produces a sequence of images that converges to a fractal approximation of the original. It is shown how to design such a system for the coding of monochrome digital images at rates in the range of 0.5-1.0 b/pixel. The fractal block coder has performance comparable to state-of-the-art vector quantizers.

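The decoder side of such a scheme is just fixed-point iteration of the transmitted contractive map: by the contraction mapping principle the iterates converge to the map's unique fixed point regardless of the starting image. A toy sketch, where a simple contractive affine map stands in for a real block-wise fractal code:

```python
import numpy as np

def fractal_decode(transform, shape, n_iter=30):
    """Iterate a contractive image transformation from an arbitrary start.

    The fixed point reached is the decoded (fractal) image; the choice
    of initial image does not matter.
    """
    img = np.zeros(shape)       # any starting image works
    for _ in range(n_iter):
        img = transform(img)
    return img
```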
  • High resolution coherent source location using transmit/receive arrays

    Page(s): 88 - 100

A general approach to super-resolution imaging of point sources using active arrays of transmit/receive elements is presented. The usual techniques of high-resolution imaging using single transmitters and passive receive arrays fail in the presence of sets of coherent point sources, which often arise due to coherent multipath. However, data obtained from transmit/receive arrays may be arranged into matrices to which eigenspace direction-of-arrival estimation may be successfully applied, even in the presence of coherent sources. Each such matrix may be thought of as corresponding to a different transmit/receive array; this may be either the actual transmit/receive array or a virtual transmit/receive array whose effect is synthesized. This approach provides great flexibility, since a large number of different synthetic or virtual arrays may be available for a given transmit/receive array, and each can provide a different tradeoff between the total number of resolvable targets and the largest number of mutually coherent targets that can be resolved.

  • Adaptive entropy coded subband coding of images

    Page(s): 31 - 48

The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer-instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels, which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system.

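The rate of the variable-rate stage can be gauged by the empirical entropy of the quantizer indices, which is the bits-per-sample figure a noiseless Huffman or arithmetic coder can approach. A small sketch (not the paper's coder, just the standard rate bound):

```python
import numpy as np

def empirical_entropy(indices):
    """Empirical entropy (bits/sample) of a stream of quantizer indices.

    This is the rate a noiseless entropy coder (Huffman, arithmetic)
    can approach when compressing the index stream.
    """
    _, counts = np.unique(indices, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```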
  • A comparison of deconvolution techniques for the ultrasonic nondestructive evaluation of materials

    Page(s): 3 - 10

Several major deconvolution techniques commonly used for seismic applications are studied and adapted for ultrasonic NDE (nondestructive evaluation) applications. Comparisons of the relative merits of these techniques are presented based on a complete set of simulations on some real ultrasonic pulse echoes. Methods that rely largely on a reflection seismic model, such as one-at-a-time L1 spike extraction and MVD (minimum variance deconvolution), are not suitable for the NDE applications discussed here because they are limited by their underlying model. L2 and Wiener filtering, on the other hand, do not assume such a model and are, therefore, more flexible and suitable for these applications. The L2 solutions, however, are often noisy due to numerical ill-conditioning. This problem is partially solved in Wiener filtering, simply by adding a constant desensitizing factor q. The computational complexities of these Wiener filtering-based techniques are relatively moderate and are, therefore, more suitable for potential real-time implementations.

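The desensitized Wiener-style solution described above can be sketched in the frequency domain. Circular convolution and the particular value of `q` are simplifying assumptions; the abstract's point is visible in the denominator, where `q > 0` tames the nearly ill-conditioned L2 (plain inverse-filter) solution.

```python
import numpy as np

def wiener_deconvolve(y, h, q=1e-3):
    """Frequency-domain deconvolution with a constant desensitizing factor.

    X(f) = conj(H(f)) Y(f) / (|H(f)|^2 + q); with q = 0 this is the
    noisy L2 inverse filter, and q > 0 stabilizes it.
    """
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + q)
    return np.real(np.fft.ifft(X))
```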
  • Automatic assessment of constraint sets in image restoration

    Page(s): 119 - 123

Constraints provide an important means of incorporating a priori information into the image restoration process. However, much of the information available for constructing constraints is of a tentative nature. If the validity of this tentative information can be assessed before it is incorporated into the solution, helpful constraints can be retained while harmful ones can be discarded. Cross validation is introduced as a technique for assessing the validity of such constraint sets. Because the full cross validation procedure is computationally burdensome, a modification is suggested that allows a more feasible implementation without substantially sacrificing the performance of the full procedure. Experimental results demonstrate the excellent performance of both the full and modified procedures.

  • Backprojection by upsampled Fourier series expansion and interpolated FFT

    Page(s): 77 - 87

A fast backprojection method based on the interpolated fast Fourier transform (FFT) is presented. Computerized tomography (CT) reconstruction by the convolution backprojection (CBP) method produces precise images, but the backprojection part of the conventional CBP method is not very efficient. The authors propose an alternative approach to interpolating and backprojecting the convolved projections onto the image frame. First, the upsampled Fourier series expansion of the convolved projection is calculated. Then, using a Gaussian function, it is projected by aliasing-free interpolation of FFT bins onto a rectangular grid in the frequency domain. The total amount of computation in this procedure for a 512×512 image is 1/5 that of the conventional backprojection method with linear interpolation. This technique also allows arbitrary control of the frequency characteristics.

  • A stabilization algorithm for multichannel multidimensional linear prediction of imagery

    Page(s): 101 - 106

The authors have investigated the stability problems observed in multichannel multidimensional linear predictive modeling of images. It is known that, given a positive definite autocorrelation matrix, the singular values of the matrix H_{i+1} × Herm(δ_{i+1}) must lie inside the unit circle for a stable solution, where δ_{i+1} is the normalized partial correlation matrix and Herm denotes the Hermitian operator. The authors have developed a two-step stabilization method to obtain stabilized linear prediction coefficients for short-term analysis windows formed from digitized images, and have modified the multichannel Levinson recursion algorithm to include this stability procedure. They have tested the algorithm on numerous images commonly used in image coding, and the results are very impressive.

  • A system model and inversion for synthetic aperture radar imaging

    Page(s): 64 - 76

A system model and its corresponding inversion for synthetic aperture radar (SAR) imaging are presented. The system model incorporates the spherical nature of a radar's radiation pattern at far field. The inverse method based on this model performs a spatial Fourier transform (Doppler processing) on the recorded signals with respect to the available coordinates of a translational radar (SAR) or target (inverse SAR). It is shown that the transformed data provide samples of the spatial Fourier transform of the target's reflectivity function. The inverse method can be modified to incorporate deviations of the radar's motion from its prescribed straight line path. The effects of finite aperture on resolution, reconstruction, and sampling constraints for the imaging problem are discussed.

  • Image restoration using a modified Hopfield network

    Page(s): 49 - 63

A modified Hopfield neural network model for regularized image restoration is presented. The proposed network allows negative autoconnections for each neuron. A set of algorithms using the proposed neural network model is presented, with various updating modes: sequential updates; n-simultaneous updates; and partially asynchronous updates. The sequential algorithm is shown to converge to a local minimum of the energy function after a finite number of iterations. Since an algorithm which updates all n neurons simultaneously is not guaranteed to converge, a modified algorithm is presented, which is called a greedy algorithm. Although the greedy algorithm is not guaranteed to converge to a local minimum, the l1 norm of the residual at a fixed point is bounded. A partially asynchronous algorithm is presented, which allows a neuron to have a bounded time delay to communicate with other neurons. Such an algorithm can eliminate the synchronization overhead of synchronous algorithms.

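The convergent sequential mode can be illustrated with one-component-at-a-time unit updates that are accepted only when they lower a quadratic restoration energy, so the energy decreases monotonically until a local minimum is reached. The energy E(x) = 0.5 xᵀAx − bᵀx and the ±1 step size are simplifying assumptions, not the paper's exact network.

```python
import numpy as np

def sequential_restore(A, b, x0, n_sweeps=100):
    """Sequential (one neuron at a time) descent on E(x) = 0.5 x'Ax - b'x.

    A unit change of one component is kept only if it strictly lowers
    the energy, so the sweep terminates at a local minimum.
    """
    x = x0.astype(float).copy()
    for _ in range(n_sweeps):
        changed = False
        for i in range(len(x)):
            g = A[i] @ x - b[i]          # dE/dx_i at the current state
            for step in (-1.0, 1.0):
                # exact energy change for the move x_i -> x_i + step
                if step * g + 0.5 * A[i, i] < 0.0:
                    x[i] += step
                    g += A[i, i] * step
                    changed = True
        if not changed:                  # no update lowered E: local minimum
            break
    return x
```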

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

