IEEE Transactions on Image Processing

Issue 3 • July 1992

Displaying Results 1 - 17 of 17
  • Combination median filter

    Page(s): 422 - 429

    A detail- and structure-preserving smoothing filter is introduced. It is called the combination median filter, as it uses directional median, multilevel median, and median filters to smooth different regions of the image. The decision about the region is made by the robust Dixon's r-test, which is well known in statistics for outlier detection. The threshold value of Dixon's test can be kept constant. As a result, the filtering algorithm operates like a quasi-nonadaptive filter, and no computation of local statistics is involved. Some properties of the filter, as well as detailed experimental results that demonstrate its superior performance, are presented.

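    A minimal sketch of the region-switching idea (our own toy version, not the paper's exact decision rule; the 3x3 window, the directional-median branch, and the fixed threshold of 0.6 are assumptions):

    ```python
    import numpy as np

    def dixon_r10(window):
        """Dixon's r10 ratio for the most extreme value of a small sorted sample."""
        s = np.sort(window.ravel())
        rng = s[-1] - s[0]
        if rng == 0:
            return 0.0
        low = (s[1] - s[0]) / rng       # gap below the smallest value
        high = (s[-1] - s[-2]) / rng    # gap above the largest value
        return max(low, high)

    def combination_median(img, threshold=0.6):
        """Toy region-switched median: a constant-threshold Dixon test decides whether
        the 3x3 window contains an outlier (plain median) or looks like detail
        (median of four directional medians, to preserve thin structures)."""
        img = np.asarray(img, dtype=float)
        out = img.copy()
        H, W = img.shape
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                win = img[i - 1:i + 2, j - 1:j + 2]
                if dixon_r10(win) > threshold:      # outlier-dominated window
                    out[i, j] = np.median(win)
                else:                               # detailed region
                    d = [np.median(win[1, :]), np.median(win[:, 1]),
                         np.median(np.diag(win)), np.median(np.diag(np.fliplr(win)))]
                    out[i, j] = np.median(d)
        return out
    ```
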
  • Unique tomographic reconstruction of vector fields using boundary data

    Page(s): 406 - 412

    The problem of reconstructing a vector field v(r) from its line integrals (through some domain D) is generally underdetermined, since v(r) is defined by two component functions. When v(r) is decomposed into its irrotational and solenoidal components, it is shown that the solenoidal part is uniquely determined by the line integrals of v(r). This is demonstrated in a particularly simple manner in the Fourier domain using a vector analog of the well-known projection slice theorem. In addition, under the constraint that v(r) is divergenceless in D, a formula for the scalar potential φ(r) is given in terms of the normal component of v(r) on the boundary of D. An important application of vector tomography, namely the reconstruction of a fluid velocity field from reciprocal acoustic travel-time measurements or Doppler backscattering measurements, is considered.

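    A short Fourier-domain sketch of the uniqueness argument in the 2-D case (our notation, not the paper's):

    ```latex
    % Helmholtz decomposition and longitudinal line-integral data along direction \hat\theta:
    \[
      v(r) = \nabla\phi(r) + \nabla\times\psi(r), \qquad
      p(s,\hat\theta) = \int_{-\infty}^{\infty}
          \hat\theta\cdot v\!\bigl(s\,\hat\theta^{\perp} + t\,\hat\theta\bigr)\,dt .
    \]
    % A vector form of the projection slice theorem (V = 2-D Fourier transform of v):
    \[
      P(k,\hat\theta) = \int p(s,\hat\theta)\,e^{-jks}\,ds
                      = \hat\theta\cdot V\!\bigl(k\,\hat\theta^{\perp}\bigr).
    \]
    % In the Fourier domain the irrotational part of V is parallel to the frequency
    % vector k\hat\theta^{\perp} and the solenoidal part is perpendicular to it; since
    % \hat\theta \perp \hat\theta^{\perp}, the data sample only the solenoidal component,
    % which is therefore the part uniquely determined by the line integrals.
    ```
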
  • Array imaging with beam-steered data

    Page(s): 379 - 390

    The author presents a system model and inversion for beam-steered data obtained by linearly varying the relative phase among the elements of an array, also known as phased-array scan data. The system model and inversion incorporate the radiation pattern of the array's elements. The inversion method utilizes the time samples of the echoed signals for each scan angle instead of range focusing. It is shown that the temporal Fourier transform of the phased-array scan data provides the distribution of the spatial Fourier transform of the reflectivity function of the medium to be imaged. The extent of this coverage is related to the array's length and the temporal frequency bandwidth of the transmitted pulsed signal. Sampling constraints and the reconstruction procedure for the imaging system are discussed. It is shown that the imaging information obtained by the inversion of phased-array scan data is equivalent to the image reconstructed from its synthesized-array counterpart.

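    The Fourier coverage statement can be illustrated with a far-field, monostatic idealization (a simplification of the paper's model, which also accounts for the element radiation pattern and does not require the far-field assumption):

    ```latex
    % f(x,y): reflectivity, c: propagation speed, s(t,\theta): echo with the beam
    % steered to angle \theta. Taking the temporal Fourier transform,
    \[
      S(\omega,\theta) \;\propto\;
      \iint f(x,y)\, e^{-j\frac{2\omega}{c}(x\sin\theta + y\cos\theta)}\,dx\,dy
      \;=\; F\!\Bigl(\tfrac{2\omega}{c}\sin\theta,\ \tfrac{2\omega}{c}\cos\theta\Bigr),
    \]
    % so each (temporal frequency, steering angle) pair supplies one sample of the
    % spatial Fourier transform F of the reflectivity on a polar grid of radius
    % 2\omega/c. The angular span of the scan and the temporal bandwidth of the
    % pulse therefore set the extent of the Fourier-domain coverage.
    ```
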
  • Predictive classified vector quantization

    Page(s): 269 - 280

    A vector quantization scheme based on the classified vector quantization (CVQ) concept, called predictive classified vector quantization (PCVQ), is presented. Unlike CVQ, where the classification information has to be transmitted, PCVQ predicts it, thus saving valuable bit rate. Two classifiers, one operating in the Hadamard domain and the other in the spatial domain, were designed and tested. The classification information was predicted in the spatial domain. The PCVQ schemes achieved bit-rate reductions over CVQ ranging from 20% to 32% for two commonly used color test images while maintaining the same acceptable image quality. Bit rates of 0.70-0.93 bits per pixel (bpp) were obtained, depending on the image and the PCVQ scheme used.

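    A toy sketch of the prediction step (the classifier, block size, and predict-from-the-left rule below are our assumptions, not the paper's classifiers): the decoder can form the same prediction from already decoded neighbours, so no class bits are sent.

    ```python
    import numpy as np

    def block_class(block, edge_thresh=10.0):
        """Toy spatial-domain classifier: shade / horizontal edge / vertical edge."""
        gy, gx = np.gradient(block.astype(float))
        if np.abs(gx).mean() < edge_thresh and np.abs(gy).mean() < edge_thresh:
            return 0                                    # shade (low activity)
        return 1 if np.abs(gx).mean() >= np.abs(gy).mean() else 2

    def encode_pcvq(image, codebooks, block=4):
        """PCVQ sketch: the class of each block is predicted from an already coded
        neighbouring block (the original pixels stand in for the decoded ones here),
        then the block is quantized with that class's codebook."""
        H, W = image.shape
        rows, cols = H // block, W // block
        indices = np.zeros((rows, cols), dtype=int)
        for r in range(rows):
            for c in range(cols):
                x = image[r*block:(r+1)*block, c*block:(c+1)*block].astype(float)
                if c > 0:       # predict from the left neighbour block
                    pred = block_class(image[r*block:(r+1)*block, (c-1)*block:c*block])
                elif r > 0:     # first column: predict from the block above
                    pred = block_class(image[(r-1)*block:r*block, c*block:(c+1)*block])
                else:
                    pred = 0
                cb = np.asarray(codebooks[pred])        # (K, block*block) codebook
                d = ((cb - x.ravel()) ** 2).sum(axis=1) # full search within the class
                indices[r, c] = int(np.argmin(d))
        return indices
    ```
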
  • Two-dimensional phase unwrapping using a minimum spanning tree algorithm

    Page(s): 355 - 365

    Phase unwrapping refers to the determination of phase from modulo-2π data, some of which may not be reliable. In 2D, this is equivalent to confining the support of the phase function to one or more arbitrarily shaped regions. A phase unwrapping algorithm is presented which works for 2D data known only within a set of nonconnected regions with possibly nonconvex boundaries. The algorithm includes the following steps: segmentation to identify connectivity, phase unwrapping within each segment using a Taylor series expansion, phase unwrapping between disconnected segments along an optimum path, and filling of phase-information voids. The optimum path for intersegment unwrapping is determined by a minimum spanning tree algorithm. Although the algorithm is applicable to any 2D data, the main application addressed is magnetic resonance imaging (MRI), where phase maps are useful.

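    A sketch of the inter-segment step only (our simplification: one representative phase per segment, centroid distance as the edge cost, and Prim's algorithm for the minimum spanning tree):

    ```python
    import numpy as np

    def intersegment_unwrap(seg_phase, seg_xy):
        """Merge internally unwrapped segments along a minimum spanning tree: each
        child segment is shifted by the multiple of 2*pi that best matches its
        parent across the tree edge."""
        seg_phase = np.array(seg_phase, dtype=float)    # representative phase per segment
        seg_xy = np.array(seg_xy, dtype=float)          # segment centroids, shape (N, 2)
        N = len(seg_phase)
        cost = np.linalg.norm(seg_xy[:, None, :] - seg_xy[None, :, :], axis=-1)

        in_tree = np.zeros(N, dtype=bool)
        in_tree[0] = True                               # arbitrary root segment
        offsets = np.zeros(N)
        for _ in range(N - 1):                          # Prim's algorithm
            best, parent, child = np.inf, -1, -1
            for i in np.where(in_tree)[0]:
                for j in np.where(~in_tree)[0]:
                    if cost[i, j] < best:
                        best, parent, child = cost[i, j], i, j
            # pick the 2*pi multiple that brings the child closest to its parent
            diff = (seg_phase[parent] + offsets[parent]) - seg_phase[child]
            offsets[child] = 2 * np.pi * np.round(diff / (2 * np.pi))
            in_tree[child] = True
        return seg_phase + offsets
    ```
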
  • Morphological autocorrelation transform: A new representation and classification scheme for two-dimensional images

    Page(s): 337 - 354

    A methodology based on mathematical morphology is proposed for efficient recognition of two-dimensional (2D) objects or shapes. It is based on the introduction of a shape descriptor called the morphological autocorrelation transform (MAT). The MAT of an image is composed of a family of geometrical correlation functions (GCFs), each of which defines its morphological covariance in a specific direction. The MAT is translation-, scale-, and rotation-invariant. It is shown that in most situations a small subset of the MAT suffices for image representation. The characteristics and performance of a shape recognition system based on the MAT are investigated and analyzed. The computational complexity of the proposed morphology-based recognition system is examined. It is shown that shape properties, such as area, perimeter, and orientation, are readily derived from the MAT representation, and that the proposed system is well suited for shape representation and classification.

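    A small sketch of a directional morphological covariance curve, the building block behind the GCFs (binary images only; the direction set, lag range, and example blob are our choices):

    ```python
    import numpy as np

    def pair_count(X, sy, sx):
        """Pixels p with X[p] and X[p + (sy, sx)] both in the set (no wrap-around)."""
        H, W = X.shape
        y0, y1 = max(0, -sy), min(H, H - sy)
        x0, x1 = max(0, -sx), min(W, W - sx)
        if y0 >= y1 or x0 >= x1:
            return 0
        return int(np.logical_and(X[y0:y1, x0:x1],
                                  X[y0 + sy:y1 + sy, x0 + sx:x1 + sx]).sum())

    def directional_covariance(binary_img, direction, max_lag):
        """Morphological covariance along one direction: K(h) is the area of the
        erosion of X by the two-point structuring element {0, h*direction}."""
        X = np.asarray(binary_img, dtype=bool)
        dy, dx = direction
        return np.array([pair_count(X, h * dy, h * dx) for h in range(max_lag + 1)])

    # example: covariance curves of a rectangular blob along four directions
    img = np.zeros((64, 64), dtype=bool)
    img[20:44, 24:40] = True
    gcfs = {d: directional_covariance(img, d, 20)
            for d in [(0, 1), (1, 0), (1, 1), (1, -1)]}
    ```
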
  • An adaptive recursive 2-D filter for removal of Gaussian noise in images

    Page(s): 431 - 436

    A 2D recursive low-pass filter with adaptive coefficients for restoring images degraded by Gaussian noise is proposed. Some of the ideas developed are also applicable to non-Gaussian noise. The adaptation is performed with respect to three local image features (edges, spots, and flat regions), for which detectors are developed by extending some existing methods. It is demonstrated that the filter can easily be extended so that simultaneous noise removal and edge enhancement are possible. A comparison with other approaches is made, and some examples illustrate the performance of the filter.

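    A toy version of the idea (the activity measure, the coefficient law, and the causal recursion below are our assumptions, not the paper's detectors or filter structure):

    ```python
    import numpy as np

    def adaptive_recursive_smooth(img, noise_var, k=2.0):
        """Causal recursive (IIR) smoother whose feedback coefficient shrinks where
        local activity is large relative to the noise variance, so flat regions are
        smoothed heavily while edges and spots are left mostly untouched."""
        x = np.asarray(img, dtype=float)
        H, W = x.shape
        # crude feature measure: variance over a 3x3 neighbourhood
        pad = np.pad(x, 1, mode='edge')
        win = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
        activity = win.var(axis=(-2, -1))
        # feedback coefficient in [0, 0.9]: large in flat areas, small near detail
        a = 0.9 * noise_var / (noise_var + k * activity + 1e-12)

        y = x.copy()
        for m in range(H):
            for n in range(W):
                prev, cnt = 0.0, 0
                if m > 0:
                    prev += y[m - 1, n]; cnt += 1
                if n > 0:
                    prev += y[m, n - 1]; cnt += 1
                if cnt:
                    y[m, n] = (1 - a[m, n]) * x[m, n] + a[m, n] * prev / cnt
        return y
    ```
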
  • Segmentation of polarimetric synthetic aperture radar data

    Page(s): 281 - 300

    A statistical image model is proposed for segmenting polarimetric synthetic aperture radar (SAR) data into regions of homogeneous and similar polarimetric backscatter characteristics. A model for the conditional distribution of the polarimetric complex data is combined with a Markov random field representation for the distribution of the region labels to obtain the posterior distribution. Optimal region labeling of the data is then defined as maximizing the posterior distribution of the region labels given the polarimetric SAR complex data (the maximum a posteriori (MAP) estimate). Two procedures for selecting the characteristics of the regions are then discussed. Results using real multilook polarimetric SAR complex data are given to illustrate the potential of the two selection procedures and to evaluate the performance of the MAP segmentation technique. It is also shown that dual-polarization SAR data can yield segmentation results similar to those obtained with fully polarimetric SAR data.

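    A heavily simplified MAP-labelling sketch: a per-class Gaussian likelihood on a scalar image (the paper uses the complex multivariate density of the polarimetric measurement vector) combined with a Potts-type MRF prior, optimized here by ICM rather than the paper's procedure:

    ```python
    import numpy as np

    def icm_segment(img, means, variances, beta=1.0, n_iter=5):
        """Each sweep moves every pixel to the label minimizing the local negative
        log-posterior: data term + beta * (number of disagreeing 4-neighbours)."""
        x = np.asarray(img, dtype=float)
        K = len(means)
        nll = np.stack([0.5 * np.log(2 * np.pi * v) + (x - m) ** 2 / (2 * v)
                        for m, v in zip(means, variances)], axis=-1)   # (H, W, K)
        labels = nll.argmin(axis=-1)                                   # ML initialization
        H, W = x.shape
        for _ in range(n_iter):
            for i in range(H):
                for j in range(W):
                    cost = nll[i, j].copy()
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            cost += beta * (np.arange(K) != labels[ni, nj])
                    labels[i, j] = int(cost.argmin())
        return labels
    ```
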
  • Multidimensional chirp algorithms for computing Fourier transforms

    Page(s): 429 - 431

    Continuous versions of the multidimensional chirp algorithms compute the function G(y)=F(My), where F(y) is the Fourier transform of a function f(x) of a vector variable x and M is an invertible matrix. Discrete versions of the algorithms compute values of F over the lattice L2=ML1 from values of f over a lattice L1, where L2 need not contain the lattice reciprocal to L1. If M is symmetric, the algorithms are multidimensional versions of the Bluestein chirp algorithm, which employs two pointwise multiplication operations (PMOs) and one convolution operation (CO). The discrete version may be efficiently implemented using fast algorithms to compute the convolutions. If M is not symmetric, three modifications are required. First, the Fourier transform is factored as the product of two Fresnel transforms. Second, the matrix M is factored as M=AB, where A and B are symmetric matrices. Third, the Fresnel transforms are modified by the matrices A and B, and each modified transform is factored into a product of two PMOs and one CO.

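    For the symmetric case, the building block is the 1-D Bluestein chirp algorithm quoted in the abstract: two pointwise multiplications by chirps around one (FFT-computed) convolution. A self-contained sketch:

    ```python
    import numpy as np

    def bluestein_dft(x):
        """Length-N DFT via Bluestein's identity nk = (n^2 + k^2 - (k-n)^2)/2:
        premultiply by a chirp, convolve with the conjugate chirp, postmultiply."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        chirp = np.exp(-1j * np.pi * n**2 / N)
        a = x * chirp                                  # first pointwise multiplication
        M = 1 << int(np.ceil(np.log2(2 * N - 1)))      # FFT-friendly length
        b = np.zeros(M, dtype=complex)                 # kernel b_m = exp(+j*pi*m^2/N)
        b[:N] = np.conj(chirp)                         # m = 0 .. N-1
        b[M - N + 1:] = np.conj(chirp[1:])[::-1]       # m = -(N-1) .. -1, wrapped
        conv = np.fft.ifft(np.fft.fft(a, M) * np.fft.fft(b))[:N]   # one convolution
        return chirp * conv                            # second pointwise multiplication

    x = np.random.randn(12) + 1j * np.random.randn(12)
    assert np.allclose(bluestein_dft(x), np.fft.fft(x))
    ```
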
  • Fast full search equivalent encoding algorithms for image compression using vector quantization

    Page(s): 413 - 416

    Three fast search routines to be used in the encoding phase of vector quantization (VQ) image compression systems are presented. These routines, which are based on geometric considerations, provide the same results as an exhaustive (or full) search. Examples show that the proposed algorithms need only 3-20% of the number of mathematical operations required by a full search and fewer than 50% of the operations required by recently proposed alternatives.

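    One simple geometric routine of this kind (our own variant, not necessarily one of the paper's three): visit codewords in order of how close their norms are to the input's norm and stop early using the reverse triangle inequality, which never changes the winner relative to a full search.

    ```python
    import numpy as np

    def fast_full_search(x, codebook):
        """Full-search-equivalent nearest-codeword search with a norm-ordered scan
        and the bound ||x - c|| >= | ||x|| - ||c|| | used for early termination."""
        cb = np.asarray(codebook, dtype=float)
        norms = np.linalg.norm(cb, axis=1)
        xn = np.linalg.norm(x)
        order = np.argsort(np.abs(norms - xn))      # most promising codewords first
        best_i, best_d = -1, np.inf
        for i in order:
            lower = abs(norms[i] - xn)
            if lower * lower >= best_d:             # all later codewords are worse too
                break
            d = float(((cb[i] - x) ** 2).sum())
            if d < best_d:
                best_d, best_i = d, i
        return best_i, best_d

    # sanity check against an exhaustive search
    rng = np.random.default_rng(0)
    cb = rng.normal(size=(256, 16))
    v = rng.normal(size=16)
    assert fast_full_search(v, cb)[0] == int(((cb - v) ** 2).sum(axis=1).argmin())
    ```
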
  • On 2-D recursive LMS algorithms using ARMA prediction for ADPCM encoding of images

    Page(s): 416 - 422

    A two-dimensional (2D) linear predictor which has an autoregressive moving average (ARMA) representation as well as a bias term is adapted for adaptive differential pulse code modulation (ADPCM) encoding of nonnegative images. The predictor coefficients are updated by using a 2D recursive LMS (TRLMS) algorithm. A constraint on the optimum values of the convergence factors and an updating algorithm based on the constraint are developed. The coefficient updating algorithm can be modified with a stability control factor. This realization can operate in real time and in the spatial domain. A comparison of three different types of predictors is made for real images. ARMA predictors show improved performance relative to an AR algorithm.

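    A much reduced toy of the coding loop (three causal AR taps, three MA taps on past quantized errors, a bias weight, a uniform quantizer, and a plain LMS update; the paper's TRLMS recursion, convergence-factor constraint, and stability control are not reproduced):

    ```python
    import numpy as np

    def adpcm_2d_lms(img, mu=1e-6, step=8):
        """Toy 2-D ADPCM with an LMS-adapted ARMA-plus-bias predictor. The decoder
        can run the same recursion, since only quantized errors are fed back."""
        x = np.asarray(img, dtype=float)
        H, W = x.shape
        rec = np.zeros((H, W))          # reconstructed image (decoder's copy)
        err = np.zeros((H, W))          # quantized prediction errors
        w = np.zeros(7)                 # 3 AR + 3 MA + bias weights
        codes = np.zeros((H, W))
        for m in range(H):
            for n in range(W):
                ar = [rec[m, n-1] if n else 0.0,
                      rec[m-1, n] if m else 0.0,
                      rec[m-1, n-1] if m and n else 0.0]
                ma = [err[m, n-1] if n else 0.0,
                      err[m-1, n] if m else 0.0,
                      err[m-1, n-1] if m and n else 0.0]
                u = np.array(ar + ma + [1.0])
                pred = float(w @ u)
                q = step * np.round((x[m, n] - pred) / step)   # uniform quantizer
                codes[m, n] = q / step
                err[m, n] = q
                rec[m, n] = pred + q
                w += mu * q * u                                # LMS update
        return codes, rec
    ```
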
  • Coarray synthesis with circular and elliptical boundary arrays

    Page(s): 391 - 405

    An elliptical boundary aperture is a collection of points lying on an ellipse from which energy is transmitted and/or received. An important special case is the circular boundary aperture. When these apertures are used with beamforming to produce a narrowband image of a far-field source, the corresponding point spread function (PSF) is characterized by high sidelobes. The concept of the coarray of an imaging system is used here to develop techniques which synthesize the effect of a more desirable PSF with an elliptical boundary aperture. Techniques are given for use in active imaging of spatially coherent sources, as well as passive imaging of spatially incoherent sources. Discrete arrays and continuous apertures are considered separately. The approach shows that the PSF synthesis problem can be solved in many more ways than previously recognized, and this fact is exploited to develop procedures which have a least-squares optimality property.

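    The central object is the coarray; a small sketch of the difference coarray (the pairwise lags and their multiplicities) of a discrete circular boundary array, which is what the synthesis procedures re-weight (the grid resolution and element count are arbitrary choices):

    ```python
    import numpy as np

    def circular_boundary_array(n_elem, radius):
        """Element positions of an n-element circular boundary array."""
        ang = 2 * np.pi * np.arange(n_elem) / n_elem
        return radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)

    def difference_coarray(positions, resolution=0.25):
        """All pairwise lags r_i - r_j, binned to a grid, with their multiplicities."""
        diffs = (positions[:, None, :] - positions[None, :, :]).reshape(-1, 2)
        keys = np.round(diffs / resolution).astype(int)
        lags, counts = np.unique(keys, axis=0, return_counts=True)
        return lags * resolution, counts

    pos = circular_boundary_array(16, radius=5.0)
    lags, weights = difference_coarray(pos)     # the lag weighting shapes the attainable PSFs
    ```
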
  • Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation

    Page(s): 322 - 336

    The application of regularization to ill-conditioned problems necessitates the choice of a regularization parameter, which trades off fidelity to the data against smoothness of the solution. The value of the regularization parameter depends on the variance of the noise in the data. The problem of choosing the regularization parameter and estimating the noise variance in image restoration is examined. An error analysis based on an objective mean-square-error (MSE) criterion is used to motivate regularization. Two approaches for choosing the regularization parameter and estimating the noise variance are proposed. The proposed and existing methods are compared, and their relationship to linear minimum-mean-square-error filtering is examined. Experiments are presented that verify the theoretical results.

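    For orientation, one classical coupling between the two quantities (notation and the discrepancy-type condition are ours; the paper proposes and compares its own approaches):

    ```latex
    % Regularized restoration of y = Hx + n, noise variance \sigma^2:
    \[
      \hat{x}_{\alpha} = \arg\min_{x}\; \|y - Hx\|^{2} + \alpha\|Cx\|^{2}
                       = \bigl(H^{T}H + \alpha C^{T}C\bigr)^{-1}H^{T}y ,
    \]
    % so \alpha trades data fidelity against smoothness. A discrepancy-type condition
    \[
      \|\,y - H\hat{x}_{\alpha}\,\|^{2} \;=\; N\sigma^{2}
    \]
    % picks \alpha once \sigma^2 is known or, read the other way, yields a noise-variance
    % estimate for a given \alpha; with circulant H and C both sides are cheap to
    % evaluate in the DFT domain.
    ```
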
  • Two-dimensional joint process lattice for adaptive restoration of images

    Page(s): 366 - 378

    The two-dimensional (2D) joint process lattice (TDJPL) and its implementations for image restoration applications are examined. A 2D adaptive lattice algorithm (TDAL) is first developed, and convergence properties are given for the 2D adaptive lattice least-mean-squares (TDAL-LMS) case. The complexity of the normalized algorithm is slightly higher than that of the TDAL-LMS, but it converges faster. Implementations of the proposed TDJPL estimator as a 2D adaptive lattice noise canceler and as a 2D adaptive lattice line enhancer are then considered. The performance of both schemes is evaluated using artificially degraded image data at different signal-to-noise ratios (SNRs). The results show that substantial noise reduction is achieved and that a large improvement in mean square error is maintained even at very low input SNR. The results consistently demonstrate the efficacy of the proposed TDJPL implementations and illustrate their success for adaptive restoration of images.

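    The joint-process (noise-cancelling) configuration can be illustrated with a plain 1-D transversal LMS stand-in (the paper's 2-D lattice structure, which converges faster, is not reproduced; the filter order, step size, and test signals are arbitrary):

    ```python
    import numpy as np

    def lms_noise_canceller(primary, reference, order=8, mu=0.01):
        """An LMS filter shapes the noise-only reference to match the noise in the
        primary channel; the cancellation residual e is the enhanced signal."""
        N = len(primary)
        w = np.zeros(order)
        out = np.zeros(N)
        for n in range(order, N):
            u = reference[n - order:n][::-1]    # most recent reference samples
            y = w @ u                           # estimate of the noise in 'primary'
            e = primary[n] - y                  # signal estimate = residual
            w += mu * e * u                     # LMS weight update
            out[n] = e
        return out

    # example: sinusoid buried in correlated noise, with a noise reference available
    rng = np.random.default_rng(1)
    n0 = rng.normal(size=4000)
    signal = np.sin(2 * np.pi * 0.01 * np.arange(4000))
    primary = signal + np.convolve(n0, [0.6, 0.3, 0.1], mode='same')
    cleaned = lms_noise_canceller(primary, n0)
    ```
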
  • A complexity reduction technique for image vector quantization

    Page(s): 312 - 321

    A technique for reducing the complexity of spatial-domain image vector quantization (VQ) is proposed. The conventional spatial-domain distortion measure is replaced by a transform-domain subspace distortion measure. Due to the energy compaction properties of image transforms, the dimensionality of the subspace distortion measure can be reduced drastically without significantly affecting the performance of the new quantizer. A modified LBG algorithm incorporating the new distortion measure is proposed. Unlike conventional transform-domain VQ, the codevector dimension is not reduced and a better image quality is guaranteed. The performance and design considerations of a real-time image encoder using the technique are investigated. Compared with spatial-domain VQ, a speedup in both codebook design time and search time is obtained for mean-residual VQ, and the size of the fast RAM is reduced by a factor of four. The degradation in image quality is less than 0.4 dB in PSNR.

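    A sketch of the encoding side of the idea (DCT as the transform, a low-frequency sub-block as the subspace; both are our choices for illustration):

    ```python
    import numpy as np
    from scipy.fft import dctn

    def subspace_encode(blocks, codebook, keep=4):
        """Nearest-codevector search carried out on only the first keep x keep DCT
        coefficients of each block (energy compaction), while the transmitted index
        still selects a full-dimension spatial-domain codevector."""
        cb_dct = np.stack([dctn(c, norm='ortho') for c in codebook])
        cb_sub = cb_dct[:, :keep, :keep].reshape(len(codebook), -1)
        indices = np.empty(len(blocks), dtype=int)
        for i, blk in enumerate(blocks):
            f = dctn(blk.astype(float), norm='ortho')[:keep, :keep].ravel()
            d = ((cb_sub - f) ** 2).sum(axis=1)     # subspace distortion measure
            indices[i] = int(d.argmin())
        return indices                              # decoder uses codebook[indices] in full

    # toy usage: random 8x8 codebook and blocks
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(64, 8, 8))
    blocks = rng.normal(size=(10, 8, 8))
    idx = subspace_encode(blocks, codebook, keep=4)
    ```
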
  • A filter based bit allocation scheme for subband compression of HDTV

    Page(s): 436 - 440

    The authors compare the subband compression capabilities of eight filter sets (consisting of linear-phase quadrature mirror filters (QMFs), perfect reconstruction filters, and nonlinear-phase wavelets) at different bit rates, using a filter-based bit allocation procedure. Using DPCM and PCM in HDTV subband coding, it is found that QMFs have an edge over the rest.

  • Blur identification by the method of generalized cross-validation

    Page(s): 301 - 311

    The point spread function (PSF) of a blurred image is often unknown a priori; the blur must first be identified from the degraded image data before restoring the image. Generalized cross-validation (GCV) is introduced to address the blur identification problem. The GCV criterion identifies model parameters for the blur, the image, and the regularization parameter, providing all the information necessary to restore the image. Experiments are presented which show that GCV is capable of yielding good identification results. A comparison of the GCV criterion with maximum-likelihood (ML) estimation shows that GCV often outperforms ML in identifying the blur and image model parameters.

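    The GCV criterion, written in our notation for a candidate blur H_theta and regularized restoration operator A:

    ```latex
    % A_{\theta,\alpha} = H_{\theta}(H_{\theta}^{T}H_{\theta} + \alpha C^{T}C)^{-1}H_{\theta}^{T}
    \[
      \mathrm{GCV}(\theta,\alpha) \;=\;
      \frac{\bigl\|(I - A_{\theta,\alpha})\,y\bigr\|^{2}}
           {\bigl[\operatorname{tr}(I - A_{\theta,\alpha})\bigr]^{2}} ,
    \]
    % minimized jointly over the blur, image-model, and regularization parameters.
    % No prior knowledge of the noise variance is needed, which is what makes the
    % criterion attractive for blind blur identification.
    ```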

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003