IEEE Transactions on Image Processing

Issue 12 • December 1995

  • Focused image recovery from two defocused images recorded with different camera settings

    Publication Year: 1995, Page(s): 1613-1628
    Cited by: Papers (24)

    Two new methods are presented for recovering the focused image of an object from only two blurred images recorded with different camera parameter settings. The camera parameters include lens position, focal length, and aperture diameter. First, a blur parameter σ is estimated using one of our proposed depth-from-defocus methods. Then one of the two blurred images is deconvolved to recover the focused image. The first method is based on a spatial-domain convolution/deconvolution transform. It requires only knowledge of the blur parameter σ of the camera's point spread function (PSF), not the actual form of the PSF. The second method, in contrast, requires full knowledge of the form of the PSF. As part of the second method, we present a calibration procedure for estimating the camera's PSF for different values of the blur parameter σ. In the second method, the focused image is obtained through deconvolution in the Fourier domain using a Wiener filter. For both methods, results of experiments on actual defocused images recorded by a CCD camera are given. The first method requires much less computation than the second; it gives satisfactory results up to medium levels of blur, while the second gives good results up to relatively high levels of blur.

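As a companion to the abstract above, here is a minimal sketch of the second method's Fourier-domain step: Wiener deconvolution driven by the blur parameter σ. It assumes a Gaussian PSF and a hand-chosen constant noise-to-signal ratio, whereas the paper calibrates the actual PSF rather than assuming its form; all names and constants below are illustrative.

```python
import numpy as np

def gaussian_otf(shape, sigma):
    """Frequency response of an assumed Gaussian PSF with spread sigma
    (pixels); the paper's second method calibrates the real PSF instead."""
    fy = np.fft.fftfreq(shape[0])
    fx = np.fft.fftfreq(shape[1])
    fx2, fy2 = np.meshgrid(fx**2, fy**2)
    # Fourier transform of a unit-volume Gaussian: exp(-2 pi^2 sigma^2 f^2)
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx2 + fy2))

def wiener_deconvolve(blurred, sigma, nsr=1e-2):
    """Estimate the focused image from one blurred image given sigma;
    nsr is an assumed constant noise-to-signal ratio (regularization)."""
    H = gaussian_otf(blurred.shape, sigma)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft2(F_hat))
```
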
  • Circular-Mellin features for texture segmentation

    Publication Year: 1995, Page(s): 1629-1640
    Cited by: Papers (7)

    Texture is an important cue in region-based segmentation of images. We describe the development of a new set of distortion-invariant texture operators. These “circular-Mellin” operators are invariant to both the scale and the orientation of the target and represent the spectral decomposition of the image scene in the polar-log coordinate system. Coupled with the shift-invariance property of the correlator architecture, we show that these circular-Mellin operators can be used for rotation- and scale-invariant feature extraction. While these feature extractors have a functional form similar to that of the Gabor operators, unlike the Gabor functions they have distortion-invariant characteristics that make them more suitable for texture segmentation. A detailed analytical description of these operators is presented, together with segmentation results that highlight their salient properties.

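An illustrative sketch of the polar-log idea behind the circular-Mellin features: scale and rotation of a patch become shifts along ln r and θ, so 2-D spectral magnitudes on that grid are invariant to both. The nearest-neighbor resampling below is an assumption made for brevity; the paper works with analytic operators and a correlator architecture.

```python
import numpy as np

def polar_log_features(patch, n_r=32, n_theta=64):
    """Scale/rotation-invariant features via a spectral decomposition on a
    polar-log grid: scaling shifts ln(r), rotation shifts theta (cyclically),
    and FFT magnitudes are insensitive to those shifts."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_r))   # log-spaced radii
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    # nearest-neighbor resample of the patch onto the polar-log grid
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    grid = patch[ys, xs]
    return np.abs(np.fft.fft2(grid))    # shift-invariant spectrum
```
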
  • A methodology for quantitative performance evaluation of detection algorithms

    Publication Year: 1995, Page(s): 1667-1674
    Cited by: Papers (32)

    We present a methodology for the quantitative performance evaluation of detection algorithms in computer vision. A common method is to generate a variety of input images by varying the image parameters and to evaluate the performance of the algorithm as its parameters vary. Operating curves relating the probabilities of misdetection and false alarm are generated for each parameter setting; such an analysis, however, does not integrate the numerous operating curves into a unified picture of performance. We outline a methodology for summarizing many operating curves into a few performance curves. The methodology is adapted from the human psychophysics literature and applies to any detection algorithm. The central concept is to measure the effect of each variable in terms of the equivalent effect of a critical signal variable, which in turn facilitates determination of the breakdown point of the algorithm. We demonstrate the methodology by comparing the performance of two line-detection algorithms.

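A hedged sketch of the two ingredients the methodology combines: generating one operating curve from detector scores, and mapping performance back to an equivalent level of a critical signal variable. The score interface and the interpolation scheme are assumptions for illustration, not the paper's protocol.

```python
import numpy as np

def operating_curve(scores_signal, scores_noise, thresholds):
    """Misdetection vs. false-alarm probabilities for a score-based
    detector, swept over decision thresholds (one operating curve)."""
    p_miss = np.array([(scores_signal < t).mean() for t in thresholds])
    p_fa = np.array([(scores_noise >= t).mean() for t in thresholds])
    return p_miss, p_fa

def equivalent_signal_level(target_p_miss, signal_levels, p_miss_at_level):
    """Map an observed misdetection rate to the critical signal variable:
    the level at which a reference condition reaches the same performance.
    Assumes p_miss_at_level decreases monotonically with signal level."""
    return np.interp(target_p_miss,
                     p_miss_at_level[::-1], signal_levels[::-1])
```
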
  • On the optimality of nonunitary filter banks in subband coders

    Publication Year: 1995, Page(s): 1585-1591
    Cited by: Papers (31)

    This paper investigates the energy compaction capabilities of nonunitary filter banks in subband coding. It is shown that nonunitary filter banks have a larger coding gain than unitary filter banks because of the possibility of performing half-whitening in each channel. For long filter unit pulse responses, optimization of the subband coding gain for stationary input signals results in a filter bank decomposition in which each channel works as an optimal open-loop DPCM system. We derive a formula giving the optimal filter response for each channel as a function of the input power spectral density (PSD). For shorter filter responses, good gain is obtained with suboptimal half-whitening responses, and the impact on the theoretical coding gain remains highly significant. Image coding examples demonstrate that better performance is achieved with nonunitary filter banks when the input images correspond to the signal model.

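The half-whitening idea can be illustrated numerically. The sketch below assumes the per-channel magnitude response is proportional to the inverse fourth root of the input PSD (the usual half-whitening shape; the paper derives the exact per-channel formula) and uses an AR(1) PSD as a crude stand-in for image statistics.

```python
import numpy as np

def half_whitening_response(psd):
    """Assumed half-whitening shape: magnitude response proportional to
    psd**(-1/4), so the filtered output PSD is proportional to sqrt(psd).
    The normalization here is arbitrary."""
    H = psd ** -0.25
    return H / H.max()

# AR(1) input PSD with correlation 0.95, evaluated on (0, pi]
w = np.linspace(1e-3, np.pi, 512)
rho = 0.95
psd = (1 - rho**2) / (1 - 2 * rho * np.cos(w) + rho**2)
H = half_whitening_response(psd)
# H boosts the weak high-frequency band and attenuates the strong low band,
# i.e., it "half-whitens" the channel input rather than fully whitening it.
```
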
  • A multiscale stochastic image model for automated inspection

    Publication Year: 1995, Page(s): 1641-1654
    Cited by: Papers (7)

    We develop a novel multiscale stochastic image model to describe the appearance of a complex three-dimensional object in a two-dimensional monochrome image. This formal image model is used in conjunction with Bayesian estimation techniques to perform automated inspection. The model is based on a stochastic tree structure in which each node is an important subassembly of the three-dimensional object. The data associated with each node or subassembly are modeled in a wavelet domain. We use a fast multiscale search technique to compute the sequential MAP (SMAP) estimate of the unknown position, scale factor, and 2-D rotation of each subassembly. The search is carried out in a manner similar to a sequential likelihood ratio test, where the process advances in scale rather than time. The results of this search determine whether or not the object passes inspection. A similar search is used in conjunction with the EM algorithm to estimate the model parameters for a given object from a set of training images. The performance of the algorithm is demonstrated on two different real assemblies.

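A schematic of a coarse-to-fine search in the spirit of the SMAP estimate: candidate poses are scored level by level on an image pyramid and pruned like a sequential likelihood ratio test that advances in scale. The match_score interface, thresholds, and pose representation are all assumptions, not the paper's wavelet-domain model.

```python
def smap_search(pyramid, match_score, candidates, accept_thr, reject_thr):
    """Coarse-to-fine pose search. pyramid[k] is the image at scale k
    (coarsest last); match_score(image, pose) is an assumed user-supplied
    log-likelihood-ratio score of a (position, scale, rotation) pose."""
    best = None
    for level in range(len(pyramid) - 1, -1, -1):             # coarse -> fine
        scored = [(match_score(pyramid[level], p), p) for p in candidates]
        scored = [sp for sp in scored if sp[0] > reject_thr]  # SLRT-style pruning
        if not scored:
            return None                    # no surviving pose: fails inspection
        best = max(scored, key=lambda sp: sp[0])
        if best[0] > accept_thr:           # evidence already conclusive
            break
        candidates = [p for _, p in scored]
    return best[1]                         # estimated pose of the subassembly
```
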
  • Maximum likelihood parameter estimation of textures using a Wold-decomposition based model

    Publication Year: 1995, Page(s): 1655-1666
    Cited by: Papers (25)

    We present a solution to the problem of modeling, parameter estimation, and synthesis of natural textures. The texture field is assumed to be a realization of a regular homogeneous random field, which can have a mixed spectral distribution. On the basis of a 2-D Wold-like decomposition, the field is represented as a sum of a purely indeterministic component, a harmonic component, and a countable number of evanescent fields. We present a maximum-likelihood solution to the joint parameter estimation problem for these components from a single observed realization of the texture field. The proposed solution is a two-stage algorithm. In the first stage, we obtain an estimate of the number of harmonic and evanescent components in the field and a suboptimal initial estimate of the parameters of their spectral supports. In the second stage, we refine these initial estimates by iterative maximization of the likelihood function of the observed data. By introducing appropriate parameter transformations, the highly nonlinear least-squares problem that results from maximizing the likelihood function is transformed into a separable least-squares problem. Solving for the unknown spectral supports of the harmonic and evanescent components reduces the problem of solving for the transformed parameters of the field to linear least squares. Solution of the transformation equations then provides a complete solution to the field-model parameter estimation problem. The Wold-based model and the resulting analysis and synthesis algorithms are applicable to a wide variety of texture types found in natural images.

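A small sketch of the two-stage structure for the harmonic component alone: initial spectral supports from periodogram peaks, then amplitudes by linear least squares (the "separable" step). The evanescent components and the iterative likelihood refinement are omitted, and all names here are illustrative.

```python
import numpy as np

def harmonic_initial_estimate(field, n_peaks):
    """Stage-one-style initialization: candidate spectral supports of the
    harmonic component from the largest periodogram peaks (conjugate-
    symmetric peaks appear in pairs). The paper refines these iteratively."""
    P = np.abs(np.fft.fft2(field)) ** 2
    P[0, 0] = 0.0                                   # ignore the DC term
    idx = np.argsort(P, axis=None)[::-1][:n_peaks]
    return np.array(np.unravel_index(idx, P.shape)).T   # (ky, kx) bins

def harmonic_amplitudes(field, freqs):
    """With the spectral supports fixed, the amplitudes enter the model
    linearly, so they solve an ordinary linear least-squares problem."""
    ny, nx = field.shape
    y, x = np.mgrid[0:ny, 0:nx]
    cols = [np.exp(2j * np.pi * (ky * y / ny + kx * x / nx)).ravel()
            for ky, kx in freqs]
    A = np.stack(cols, axis=1)
    amps, *_ = np.linalg.lstsq(A, field.ravel().astype(complex), rcond=None)
    return amps
```
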
  • Next-state functions for finite-state vector quantization

    Publication Year: 1995, Page(s): 1592-1601
    Cited by: Papers (7)

    The finite-state vector quantization scheme called dynamic finite-state vector quantization (DFSVQ) is investigated with regard to its subcodebook construction. In DFSVQ, each input block is encoded with a small codebook, called the subcodebook, which is created from a much larger codebook called the supercodebook. Each subcodebook is constructed by selecting, through a reordering procedure, a set of appropriate code-vectors from the supercodebook. The performance of DFSVQ depends on this reordering procedure; therefore, several reordering procedures are introduced and evaluated. The procedures investigated are based on the conditional histogram of the code-vectors, index prediction, vector prediction, nearest-neighbor design, and the frequency of usage of the code-vectors. Their performance is evaluated by comparing their hit ratios (the number of blocks encoded by the subcodebook) and their computational complexity. Experimental results are presented, and the reordering procedure based on vector prediction is found to perform best.

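A minimal sketch of one of the reordering procedures, the conditional-histogram rule, together with the hit-ratio metric used for comparison. The training-index-stream interface is an assumption made for illustration.

```python
import numpy as np

def build_subcodebooks(index_stream, super_size, sub_size):
    """Conditional-histogram reordering: for each supercodebook index i,
    the subcodebook holds the sub_size code-vectors that most often
    followed i in a training index stream (one of several procedures
    compared in the paper)."""
    hist = np.zeros((super_size, super_size))
    for prev, cur in zip(index_stream[:-1], index_stream[1:]):
        hist[prev, cur] += 1
    # each row sorted by descending co-occurrence count
    return np.argsort(-hist, axis=1)[:, :sub_size]

def hit_ratio(index_stream, subcodebooks):
    """Fraction of blocks whose best supercodebook match already lies in
    the current subcodebook (the comparison metric described above)."""
    hits = sum(cur in subcodebooks[prev]
               for prev, cur in zip(index_stream[:-1], index_stream[1:]))
    return hits / (len(index_stream) - 1)
```
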
  • Point-source localization in blurred images by a frequency-domain eigenvector-based method

    Publication Year: 1995, Page(s): 1602-1612
    Cited by: Papers (6)

    We address the problem of resolving and localizing blurred point sources in intensity images. Telescopic star-field images blurred by atmospheric turbulence or optical aberrations are typical examples of this class of images. A new approach to image restoration is introduced, which is a generalization of 2-D sensor array processing techniques originating from the field of direction-of-arrival (DOA) estimation. It is shown that, in the frequency domain, blurred point-source images can be modeled with a structure analogous to the response of linear sensor arrays to coherent signal sources. Thus, the problem may be cast into the form of DOA estimation, and eigenvector-based subspace decomposition algorithms, such as MUSIC, may be adapted to search for these point sources. For deterministic point images, the signal subspace is degenerate, with rank one, so rank enhancement techniques are required before MUSIC or related algorithms can be used. The presence of blur prohibits the use of existing rank enhancement methods. A generalized array smoothing method is introduced for rank enhancement in the presence of blur and to regularize the ill-posed image restoration problem. The new algorithm achieves interpixel super-resolution and is computationally efficient. Examples of star image deblurring using the algorithm are presented.

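A 1-D illustration of the frequency-domain MUSIC idea with subarray (spatial) smoothing for rank enhancement. Blur compensation, the 2-D geometry, and the paper's generalized smoothing are omitted; this shows only the subspace mechanics, and all names are illustrative.

```python
import numpy as np

def music_spectrum(freq_data, n_sources, subarray_len, grid):
    """Samples of the image spectrum play the role of an array snapshot;
    smoothing over sliding subarrays restores the rank of the (coherent)
    signal subspace before eigendecomposition. grid holds candidate
    source locations in pixels."""
    m = len(freq_data)
    # smoothed covariance from overlapping subarrays (rank enhancement)
    R = np.zeros((subarray_len, subarray_len), dtype=complex)
    for i in range(m - subarray_len + 1):
        x = freq_data[i:i + subarray_len]
        R += np.outer(x, x.conj())
    _, V = np.linalg.eigh(R)                       # ascending eigenvalues
    En = V[:, :subarray_len - n_sources]           # noise subspace
    k = np.arange(subarray_len)
    spec = []
    for loc in grid:
        a = np.exp(-2j * np.pi * k * loc / m)      # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)                          # peaks at source locations
```
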
  • Variable duration hidden Markov model and morphological segmentation for handwritten word recognition

    Publication Year: 1995, Page(s): 1675-1688
    Cited by: Papers (37)

    This paper describes a complete system for the recognition of unconstrained handwritten words using a continuous-density variable-duration hidden Markov model (CD-VDHMM). First, a new segmentation algorithm based on mathematical morphology is developed to translate the 2-D image into a 1-D sequence of subcharacter symbols. This sequence of symbols is modeled by the CD-VDHMM. Thirty-five features are selected to represent the character symbols in the feature space. Generally, two information sources are associated with written text: the shape information and the linguistic knowledge. While the shape information of each character symbol is modeled as a mixture Gaussian distribution, the linguistic knowledge, i.e., the constraint, is modeled as a Markov chain. The variable-duration state is used to handle the segmentation ambiguity among consecutive characters. A modified Viterbi algorithm, which provides the l globally best paths, is adapted to the VDHMM by incorporating the duration probabilities for the variable-duration state sequence. A general string-editing method is used at the postprocessing stage. Detailed experiments are carried out for two postal applications, and successful recognition results are reported.

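A compact sketch of a duration-modified Viterbi recursion of the kind described here: each state is held for an explicit duration d, paying a duration log-probability plus the accumulated observation scores. This single-best-path version (the paper's keeps the l best) folds initial-state scores into the transition matrix for brevity; all interfaces are assumptions.

```python
import numpy as np

def viterbi_variable_duration(log_obs, log_trans, log_dur, max_dur):
    """log_obs[t, j]: per-symbol observation log-likelihood in state j;
    log_trans[i, j]: transition log-probability; log_dur[j, d-1]: duration
    log-probability of holding state j for d symbols. Returns the best
    state sequence over the T observed symbols."""
    T, N = log_obs.shape
    delta = np.full((T + 1, N), -np.inf)
    delta[0] = 0.0                                  # uniform start (simplified)
    back = {}
    # prefix sums so a duration-d segment score is one subtraction
    cum = np.vstack([np.zeros((1, N)), np.cumsum(log_obs, axis=0)])
    for t in range(1, T + 1):
        for j in range(N):
            for d in range(1, min(max_dur, t) + 1):
                seg = cum[t, j] - cum[t - d, j] + log_dur[j, d - 1]
                prev = delta[t - d] + log_trans[:, j]
                i = int(np.argmax(prev))
                score = prev[i] + seg
                if score > delta[t, j]:
                    delta[t, j] = score
                    back[(t, j)] = (t - d, i)
    # trace back the single best path, expanding each state by its duration
    t, j = T, int(np.argmax(delta[T]))
    path = []
    while t > 0:
        t_prev, i = back[(t, j)]
        path[:0] = [j] * (t - t_prev)
        t, j = t_prev, i
    return path
```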

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003