
Image Processing, IEEE Transactions on

Issue 2 • Feb 1997


13 articles in this issue
  • Methods for reconstruction of 2-D sequences from Fourier transform magnitude

    Page(s): 222 - 233

    The Gerchberg-Saxton (GS) algorithm and its generalizations have been the main tools for phase retrieval. Unfortunately, it has been observed that reconstruction using these algorithms does not always converge to the correct result, even if the desired solution satisfies the uniqueness condition. In this paper, we propose a new deautocorrelation algorithm and a few auxiliary techniques. We show that combining the iterative Fourier transform (IFT) algorithm with our new algorithm and techniques can improve the probability of success of phase retrieval. A pragmatic procedure is illustrated. Several reconstruction examples that are difficult to reconstruct with the IFT algorithm alone are used to show the robustness and effectiveness of the new combination of algorithms. If the given Fourier modulus data contain no noise, a perfect reconstruction is sometimes possible. Even when the signal-to-noise ratio (SNR) of the Fourier modulus data is only 10 dB, a meaningful result remains reachable for our examples. A concept concerning the intrinsic ambiguity of phase retrieval is suggested. We emphasize the necessity of verifying the solution, since the available phase retrieval algorithms cannot distinguish between an intrinsically ambiguous solution and the true solution.

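The alternating-projection core that the IFT family builds on can be sketched in a few lines. This is a plain error-reduction loop, not the paper's deautocorrelation method; the image, support mask, and iteration count are illustrative:

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Plain Gerchberg-Saxton / error-reduction iteration: alternately
    impose the measured Fourier transform magnitude and the object-domain
    constraints (support and nonnegativity)."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support   # random start inside the support
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))   # keep phase, fix magnitude
        g = np.real(np.fft.ifft2(G))
        g = np.clip(g * support, 0.0, None)        # support + nonnegativity
    return g

# Toy 2-D sequence and its Fourier transform magnitude:
true = np.zeros((16, 16))
true[4:8, 5:9] = 1.0
support = np.zeros((16, 16))
support[2:10, 3:11] = 1.0                          # loose mask around the object
rec = error_reduction(np.abs(np.fft.fft2(true)), support)
```

As the abstract notes, such a loop can stagnate far from the true image; this sketch only shows the constraint-alternation structure the paper's techniques are combined with.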
  • Joint thresholding and quantizer selection for transform image coding: entropy-constrained analysis and applications to baseline JPEG

    Page(s): 285 - 297

    Striving to maximize baseline JPEG (Joint Photographic Experts Group) image quality without compromising compatibility with current JPEG decoders, we develop an image-adaptive JPEG encoding algorithm that jointly optimizes quantizer selection, coefficient “thresholding”, and Huffman coding within a rate-distortion (R-D) framework. Practically speaking, our algorithm unifies two previous approaches to image-adaptive JPEG encoding: R-D optimized quantizer selection and R-D optimal thresholding. Conceptually speaking, our algorithm is a logical consequence of entropy-constrained vector quantization (ECVQ) design principles in the severely constrained instance of JPEG-compatible encoding. We explore both viewpoints: the practical, to concretely derive our algorithm, and the conceptual, to justify the claim that our algorithm approaches the best performance that a JPEG encoder can achieve. This performance includes significant objective peak signal-to-noise ratio (PSNR) improvement over previous work and, at high rates, gives results comparable to state-of-the-art image coders. For example, coding the Lena image at 1.0 b/pixel, our JPEG encoder achieves a PSNR of 39.6 dB, slightly exceeding the quoted PSNR results of Shapiro's wavelet-based zero-tree coder. Using a visually based distortion metric, we can achieve noticeable subjective improvement as well. Furthermore, our algorithm may be applied to other systems that use run-length encoding, including intraframe MPEG and subband or wavelet coding.

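The thresholding decision inside such an R-D framework amounts to dropping a quantized coefficient whenever the distortion it saves is worth less than the rate it costs. The per-coefficient bit costs and the Lagrange multiplier below are made-up stand-ins for the real Huffman/run-length rate model:

```python
def threshold_block(coeffs, bits, lam):
    """Toy R-D thresholding for one block of quantized DCT coefficients:
    zero a coefficient when its squared-error cost of removal does not
    exceed lam times the bits saved, i.e. keep it only if dD > lam * dR.
    bits[i] is an assumed coding cost, standing in for the true
    Huffman/run-length rate of coeffs[i]."""
    kept = []
    for c, b in zip(coeffs, bits):
        delta_d = c * c              # distortion added by dropping c
        delta_r = b                  # rate saved by dropping c
        kept.append(c if delta_d > lam * delta_r else 0)
    return kept

# Large coefficients survive; small ones are cheaper to drop than to code:
out = threshold_block([12, -3, 1, 0, 2], [8, 5, 3, 0, 4], lam=2.0)
```

Sweeping the multiplier lam traces out the rate-distortion trade-off; the paper jointly optimizes this decision with quantizer selection and Huffman table design.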
  • Mathematical methods for the design of color scanning filters

    Page(s): 312 - 320

    The problem of the design of color scanning filters is addressed in this paper. The problem is posed within the framework of the vector space approach to color systems. The measure of the goodness of a set of color scanning filters presented in earlier work is used as an optimization criterion to design color scanning filters modeled in terms of known, smooth, nonnegative functions. The best filters are then trimmed using the gradient of the mean square ΔEab error to obtain filters with a lower value of perceptual error. The results obtained demonstrate the utility of the method.

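The gradient-trimming step can be sketched as projected gradient descent on a squared error, with a nonnegativity projection keeping the filter physically realizable. The matrix A and target t below are toy stand-ins for the paper's vector-space goodness measure and ΔEab gradient:

```python
import numpy as np

def trim_filter(f0, A, t, step=0.01, n_steps=500):
    """Gradient 'trimming' of an initial filter: descend the squared
    error ||A f - t||^2 while clipping the transmittance to stay
    nonnegative. A maps a filter to predicted responses; both A and t
    are illustrative stand-ins for the paper's actual criterion."""
    f = f0.copy()
    for _ in range(n_steps):
        grad = 2 * A.T @ (A @ f - t)
        f = np.clip(f - step * grad, 0.0, None)   # keep the filter realizable
    return f

rng = np.random.default_rng(1)
A = rng.random((5, 8))            # toy response model over 8 wavelength samples
f_true = rng.random(8)            # an achievable nonnegative filter
t = A @ f_true                    # target responses
f0 = np.ones(8) * 0.5
f = trim_filter(f0, A, t)
err0 = np.linalg.norm(A @ f0 - t)
err1 = np.linalg.norm(A @ f - t)
```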
  • Deinterlacing by successive approximation

    Page(s): 339 - 344

    We propose an algorithm for deinterlacing interlaced video sequences. It successively builds approximations to the deinterlaced sequence by weighting various interpolation methods. A particular example given here uses four interpolation methods, weighted according to the errors each one introduces. This weighting makes the algorithm adaptive. It is also time-recursive, since the motion-compensated part uses the previously interpolated frame. Furthermore, bidirectional motion estimation and compensation allow for better performance in the case of scene changes and covering/uncovering of objects. Experiments are run on both “real-world” and computer-generated sequences. Finally, subjective testing is performed to evaluate the quality of the algorithm.

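The error-based weighting idea can be sketched for a single missing pixel; the inverse-error weights and the four estimates below are illustrative, not the paper's exact scheme:

```python
def blend(estimates, errors, eps=1e-6):
    """Combine several interpolation estimates of a missing pixel,
    weighting each inversely to the error its method recently produced,
    so more reliable interpolators dominate. The inverse-error rule is
    one simple choice, not necessarily the paper's."""
    weights = [1.0 / (e + eps) for e in errors]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, estimates)) / total

# Four hypothetical interpolators (e.g. line average, field insertion,
# edge-directed, motion-compensated) proposing values for one pixel,
# with their assumed recent errors:
value = blend([100.0, 104.0, 98.0, 102.0], [4.0, 1.0, 8.0, 2.0])
```

The low-error second and fourth estimates dominate, pulling the blend toward them.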
  • A fast recursive shortest spanning tree for image segmentation and edge detection

    Page(s): 328 - 332

    This correspondence presents a fast recursive shortest spanning tree algorithm for image segmentation and edge detection. The conventional algorithm has a complexity of O(n²) for an image of n pixels, while the complexity of our approach is bounded by O(n), which is a new lower bound for algorithms of this kind. The total memory requirement of our fast algorithm is also 20% smaller.

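The merging process behind an RSST-style segmentation can be sketched on a 1-D signal: repeatedly fuse the pair of neighbouring regions joined by the smallest link weight (difference of region means). This naive version rescans the links on every pass; the paper's contribution is a link-weight structure that brings the total cost down to O(n):

```python
def rsst_segment(pixels, n_regions):
    """Greedy region merging in the spirit of the recursive shortest
    spanning tree, on a 1-D image: each pixel starts as its own region,
    and the cheapest link between adjacent regions is merged until
    n_regions remain. Quadratic here; the paper's algorithm avoids the
    rescans."""
    regions = [[p] for p in pixels]
    mean = lambda r: sum(r) / len(r)
    while len(regions) > n_regions:
        # cheapest link between neighbouring regions
        i = min(range(len(regions) - 1),
                key=lambda k: abs(mean(regions[k]) - mean(regions[k + 1])))
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions

# Three flat patches with slight noise segment cleanly:
segs = rsst_segment([10, 11, 10, 50, 52, 51, 90, 91], 3)
```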
  • EM algorithm for image segmentation initialized by a tree structure scheme

    Page(s): 349 - 352

    In this correspondence, the objective is to segment vector images, which are modeled as multivariate finite mixtures. The underlying images are characterized by Markov random fields (MRFs), and the applied segmentation procedure is based on the expectation-maximization (EM) technique. We propose an initialization procedure that does not require any prior information and yet provides excellent initial estimates for the EM method. The performance of the overall segmentation is demonstrated by segmentation of simulated one-dimensional (1D) and multidimensional magnetic resonance (MR) brain images.

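The EM loop for a finite mixture is standard; what the paper contributes is the initialization. The sketch below runs EM on a two-component 1-D Gaussian mixture, with a crude median split standing in for the tree-structured initialization scheme:

```python
import math

def em_gmm2(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture. A median split of
    the sorted data provides the initial estimates here, as a simple
    stand-in for the paper's tree-structured scheme."""
    data = sorted(data)
    half = len(data) // 2
    mu = [sum(data[:half]) / half, sum(data[half:]) / (len(data) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return mu

# Two well-separated clusters of intensities:
means = em_gmm2([0.1, -0.2, 0.0, 0.3, -0.1, 5.0, 5.2, 4.9, 5.1, 4.8])
```

With a reasonable initialization the component means land on the two clusters; a poor initialization can leave EM in a bad local maximum, which is exactly why the paper's scheme matters.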
  • Deterministic edge-preserving regularization in computed imaging

    Page(s): 298 - 311

    Many image processing problems are ill-posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic, so the optimization becomes easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed, and experimental results illustrate the behavior of the algorithm. These results are shown in the field of 2-D single photon emission tomography, but the method can be applied to a large number of image processing problems.

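The half-quadratic alternation can be sketched on a 1-D denoising problem; the edge-preserving weighting function chosen here is one common option, not necessarily the paper's:

```python
import math

def half_quadratic_denoise(y, lam=2.0, delta=0.5, n_iter=100):
    """Alternate minimization in the spirit of ARTUR: the auxiliary
    variable b marks discontinuities (b -> 0 across a large jump, so the
    edge escapes smoothing); with b fixed, the criterion is quadratic in
    the image and one Gauss-Seidel sweep updates it."""
    x = list(y)
    n = len(x)
    for _ in range(n_iter):
        # auxiliary step: small weight across large differences
        b = [1.0 / math.sqrt(1.0 + ((x[i + 1] - x[i]) / delta) ** 2)
             for i in range(n - 1)]
        # image step: Gauss-Seidel sweep of the now-quadratic criterion
        for i in range(n):
            num, den = y[i], 1.0
            if i > 0:
                num += lam * b[i - 1] * x[i - 1]
                den += lam * b[i - 1]
            if i < n - 1:
                num += lam * b[i] * x[i + 1]
                den += lam * b[i]
            x[i] = num / den
    return x

noisy = [0.1, -0.1, 0.2, 0.0, 5.1, 4.9, 5.2, 5.0]   # a noisy step edge
clean = half_quadratic_denoise(noisy)
```

The two plateaus are smoothed while the step between samples 3 and 4 survives, which is the behaviour a pure quadratic penalty cannot give.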
  • Color image retrieval based on hidden Markov models

    Page(s): 332 - 339

    In this correspondence, a new approach to retrieving images from a color image database is proposed. Each image in the database is represented by a two-dimensional pseudo-hidden Markov model (2-D PHMM), which characterizes the chromatic and spatial information about the image. In addition, a flexible pictorial querying method is used, by which users can paint the rough content of the desired images in a query picture. Image matching is achieved by comparing the query picture with each 2-D PHMM in the database. Experimental results show that the proposed approach is indeed effective.

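A drastically simplified stand-in for the 2-D PHMM idea, keeping only the point that both chromatic content and spatial layout enter the match; the grid signature and the toy "images" below are illustrative, not the paper's model:

```python
def grid_signature(img):
    """Dominant colour in each cell of a 2x2 grid - a crude spatial-plus-
    chromatic summary standing in for the 2-D PHMM representation."""
    h, w = len(img), len(img[0])
    sig = []
    for r in range(2):
        for c in range(2):
            cell = [img[y][x]
                    for y in range(r * h // 2, (r + 1) * h // 2)
                    for x in range(c * w // 2, (c + 1) * w // 2)]
            sig.append(max(set(cell), key=cell.count))
    return sig

def retrieve(query, database):
    """Return the database image whose signature best matches a roughly
    painted query picture, cell by cell."""
    qsig = grid_signature(query)
    score = lambda img: sum(a == b for a, b in zip(grid_signature(img), qsig))
    return max(database, key=score)

sky_sea = [["blue", "blue"], ["green", "green"]]
desert = [["yellow", "yellow"], ["yellow", "brown"]]
query = [["blue", "blue"], ["green", "blue"]]       # a rough painting
best = retrieve(query, [sky_sea, desert])
```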
  • Mathematical methods for the analysis of color scanning filters

    Page(s): 321 - 327

    The problem of the sensitivity analysis of color scanning filters is addressed in this paper. The second differential of the mean square ΔEab error provides a means of calculating the sensitivity of the mean square ΔEab error to filter fabrication errors. Tolerances on the allowable change in the mean square ΔEab error are used to define bounds on the filter fabrication errors at all wavelengths and at single wavelengths.

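The curvature information that the second differential supplies can be illustrated with a finite-difference sketch on a toy error function (the real analysis differentiates the mean square ΔEab error with respect to the filter transmittance):

```python
def sensitivity(err_fn, f, i, h=1e-4):
    """Second-difference estimate of d2E/df_i^2: how sharply the error
    grows when filter sample i deviates from its design value. Large
    curvature means a tight fabrication tolerance at that wavelength."""
    fp = list(f); fp[i] += h
    fm = list(f); fm[i] -= h
    return (err_fn(fp) - 2 * err_fn(f) + err_fn(fm)) / h ** 2

# Toy quadratic error: curvature along f_0 is 6, along f_1 is 1, so
# fabrication errors in f_0 are six times more damaging.
E = lambda f: 3.0 * f[0] ** 2 + 0.5 * f[1] ** 2
s0 = sensitivity(E, [1.0, 1.0], 0)
s1 = sensitivity(E, [1.0, 1.0], 1)
```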
  • Robust detection of skew in document images

    Page(s): 344 - 349

    We describe a robust yet fast algorithm for skew detection in binary document images. The method is based on interline cross-correlation in the scanned image. Instead of finding the correlation for the entire image, it is calculated over small regions selected randomly. The proposed method does not require prior segmentation of the document into text and graphics regions. The maximum median of cross-correlation is used as the criterion to obtain the skew, and a Monte Carlo sampling technique is chosen to determine the number of regions over which the correlations have to be calculated. Experimental results on detecting skews in various types of documents containing different linguistic scripts are presented here.

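The shift-search at the core of the method can be sketched on a synthetic sheared image: cross-correlation between two pixel columns a distance d apart peaks at roughly d·tan(skew). Only positive shifts are searched here, and the shear and column spacing are illustrative; the paper adds random region selection and the maximum-median criterion for robustness:

```python
import math

def best_shift(col_a, col_b, max_shift):
    """Vertical shift of col_b that best correlates with col_a (positive
    skew assumed in this sketch)."""
    def corr(s):
        return sum(a * b for a, b in zip(col_a, col_b[s:]))
    return max(range(max_shift + 1), key=corr)

# Synthetic text image sheared by 1 pixel of rise per 4 pixels of run:
width, height = 32, 32
img = [[1 if (y - x // 4) % 8 == 0 else 0 for x in range(width)]
       for y in range(height)]
col_a = [img[y][4] for y in range(height)]
col_b = [img[y][20] for y in range(height)]   # d = 16 columns apart
shift = best_shift(col_a, col_b, 8)
skew = math.degrees(math.atan(shift / 16))    # recovered skew angle
```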
  • Multiresolution Gauss-Markov random field models for texture segmentation

    Page(s): 251 - 267

    This paper presents multiresolution models for Gauss-Markov random fields (GMRFs) with applications to texture segmentation. Coarser resolution sample fields are obtained by subsampling the sample field at fine resolution. Although the Markov property is lost under such resolution transformation, coarse resolution non-Markov random fields can be effectively approximated by Markov fields. We present two techniques to estimate the GMRF parameters at coarser resolutions from the fine resolution parameters, one by minimizing the Kullback-Leibler distance and another based on local conditional distribution invariance. We also note that different GMRF parameters at the fine resolution can result in the same probability measure after subsampling, and present the results for first- and second-order cases. We apply this multiresolution model to texture segmentation. Different texture regions in an image are modeled by GMRFs and the associated parameters are assumed to be known. Parameters at lower resolutions are estimated from the fine resolution parameters. The coarsest resolution data is first segmented and the segmentation results are propagated upward to the finer resolution. We use iterated conditional mode (ICM) minimization at all resolutions. Our experiments with synthetic, Brodatz texture, and real satellite images show that the multiresolution technique results in a better segmentation and requires less computation than the single-resolution algorithm.

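The ICM step used at each resolution can be sketched on a 1-D signal; this single-resolution toy omits the coarse-to-fine propagation, and the class means and smoothness weight are assumed known, as the parameters are in the paper:

```python
def icm(obs, means, beta=1.0, n_iter=5):
    """Iterated conditional modes: each site takes the label minimizing
    a squared-error data term plus a Potts smoothness penalty beta for
    every disagreeing neighbour. 1-D here for brevity."""
    labels = [min(range(len(means)), key=lambda k: (y - means[k]) ** 2)
              for y in obs]                          # ML initialization
    n = len(obs)
    for _ in range(n_iter):
        for i in range(n):
            def energy(k):
                e = (obs[i] - means[k]) ** 2
                for j in (i - 1, i + 1):             # neighbour disagreement
                    if 0 <= j < n and labels[j] != k:
                        e += beta
                return e
            labels[i] = min(range(len(means)), key=energy)
    return labels

# An ambiguous pixel at index 2 is pulled to its neighbours' class:
obs = [0.1, 0.0, 1.6, 0.2, 0.1, 3.0, 3.1, 2.8]
seg = icm(obs, means=[0.0, 3.0], beta=1.5)
```

Plain maximum-likelihood labelling would assign index 2 to the bright class; the smoothness term corrects it, which is the behaviour ICM contributes at every level of the pyramid.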
  • Object-based estimation of dense motion fields

    Page(s): 234 - 250

    Motion estimation is one of the key techniques in image sequence processing. Segmentation of the motion fields such that, ideally, each independently moving object uniquely corresponds to one region, is one of the essential elements in object-based image processing. This paper is concerned with unsupervised simultaneous estimation of dense motion fields and their segmentations. It is based on a stochastic model relating image intensities to motion information. Based on the analysis of natural images, a region-based model of motion-compensated prediction error is proposed. In each region the error is modeled by a white stationary generalized Gaussian random process. The motion field and its segmentation are themselves modeled by a compound Gibbs/Markov random field accounting for statistical bindings in the spatial direction and along the direction of motion trajectories. The a posteriori distribution of the motion field for a given image sequence is formulated as an objective function, such that its maximization results in the MAP estimate. A deterministic multiscale relaxation technique with regular structure is employed for optimization of the objective function. Simulation results are in good agreement with human perception for both the motion fields and their segmentations.

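A toy per-pixel motion estimate by minimum prediction error shows what a dense motion field assigns to each site; the paper's Gibbs/Markov smoothness prior and multiscale MAP relaxation are omitted here, and the 1-D signals are illustrative:

```python
def estimate_motion(prev, cur, max_disp):
    """Per-pixel 1-D displacement d with cur[i] ~ prev[i + d], chosen by
    minimum absolute prediction error, preferring the smallest motion on
    ties. d = -1 means the content moved one pixel to the right."""
    n = len(cur)
    field = []
    for i in range(n):
        candidates = [d for d in range(-max_disp, max_disp + 1)
                      if 0 <= i + d < n]
        field.append(min(candidates,
                         key=lambda d: (abs(cur[i] - prev[i + d]), abs(d))))
    return field

prev = [0, 0, 5, 7, 0, 0, 0, 0]
cur = [0, 0, 0, 5, 7, 0, 0, 0]   # the bright object moved one pixel right
mv = estimate_motion(prev, cur, 2)
```

Without a smoothness prior such a field is noisy and ambiguous wherever the data term is flat, which is precisely what the paper's compound Gibbs/Markov model regularizes.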
  • Cluster-based probability model and its application to image and texture processing

    Page(s): 268 - 284

    We develop, analyze, and apply a specific form of mixture modeling for density estimation within the context of image and texture processing. The technique captures much of the higher order, nonlinear statistical relationships present among vector elements by combining aspects of kernel estimation and cluster analysis. Experimental results are presented in the following applications: image restoration, image and texture compression, and texture classification.

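One minimal way to blend kernel estimation with cluster analysis, in the spirit of (though far simpler than) the paper's model: place one kernel per cluster centre, weighted by the fraction of points assigned to it. The centres and bandwidth below are assumed given:

```python
import math

def cluster_kernel_density(data, centers, bandwidth):
    """Crude cluster-based density estimate for 1-D data: hard-assign
    each point to its nearest centre, then return a Gaussian mixture
    with one kernel per centre, weighted by the assignment fractions."""
    counts = [0] * len(centers)
    for x in data:
        counts[min(range(len(centers)),
                   key=lambda k: abs(x - centers[k]))] += 1
    weights = [c / len(data) for c in counts]

    def density(x):
        return sum(w / (bandwidth * math.sqrt(2 * math.pi))
                   * math.exp(-(x - m) ** 2 / (2 * bandwidth ** 2))
                   for w, m in zip(weights, centers))
    return density

p = cluster_kernel_density([0.0, 0.2, -0.1, 4.0, 4.1],
                           centers=[0.0, 4.0], bandwidth=0.5)
```

The estimate is high near both clusters (higher near the heavier one) and low in between, with far fewer kernels than plain per-sample kernel estimation would need.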

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003