
IEEE Transactions on Image Processing

Issue 10 • October 1999


Displaying Results 1 - 21 of 21
  • Correction to "Lossless, near-lossless, and refinement coding of bilevel images"

    Publication Year: 1999, Page(s): 1456

  • Simulated annealing, acceleration techniques, and image restoration

    Publication Year: 1999, Page(s): 1374-1387
    Cited by: Papers (13)

    Typically, the linear image restoration problem is an ill-conditioned, underdetermined inverse problem. Here, stabilization is achieved by introducing a first-order smoothness constraint which allows the preservation of edges and leads to the minimization of a nonconvex functional. To carry out this optimization task, we use stochastic relaxation with annealing. We prefer the Metropolis dynamics to the popular, but computationally much more expensive, Gibbs sampler. Still, Metropolis-type annealing algorithms are also widely reported to exhibit a low convergence rate. We outline their finite-time behavior and investigate some inexpensive acceleration techniques that do not alter their theoretical convergence properties, namely, restriction of the state space to a locally bounded image space and an increasing concave transform of the cost functional. Successful experiments on space-variant restoration of simulated synthetic aperture imaging data illustrate the performance of the resulting class of algorithms and show significant gains in convergence speed.
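
    A minimal sketch of the Metropolis-type annealing loop described above, assuming a toy quadratic data term (identity blur) with an edge-preserving first-order prior. The energy, potential, and cooling parameters are illustrative stand-ins, not the paper's; the accelerations it studies would modify the proposal set and transform the cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(t, delta=10.0):
    # Edge-preserving first-order potential: quadratic near zero, linear tails.
    return np.minimum(t * t, delta * np.abs(t))

def energy(x, y, lam=0.1):
    # Toy restoration energy: data fidelity (identity blur) + smoothness prior.
    dx = x[:, 1:] - x[:, :-1]
    dy = x[1:, :] - x[:-1, :]
    return np.sum((x - y) ** 2) + lam * (phi(dx).sum() + phi(dy).sum())

def metropolis_anneal(y, n_iter=20000, T0=10.0, alpha=0.9995, step=16.0):
    x = y.astype(float).copy()
    E, T = energy(x, y), T0
    for _ in range(n_iter):
        i, j = rng.integers(x.shape[0]), rng.integers(x.shape[1])
        old = x[i, j]
        x[i, j] = old + rng.uniform(-step, step)   # single-site proposal
        E_new = energy(x, y)                       # (a real code updates locally)
        if E_new <= E or rng.random() < np.exp((E - E_new) / T):
            E = E_new                              # accept the move
        else:
            x[i, j] = old                          # reject: restore old value
        T *= alpha                                 # geometric cooling schedule
    return x

noisy = rng.normal(loc=100.0, scale=20.0, size=(16, 16))
restored = metropolis_anneal(noisy)
```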

  • Cosine transform based preconditioners for total variation deblurring

    Publication Year: 1999, Page(s): 1472-1478
    Cited by: Papers (8)

    In PDE-based image restoration problems, one has to invert an operator that is the sum of a blurring operator and an elliptic operator with highly varying coefficients. We present a preconditioner for such operators, which can be used with the conjugate gradient (CG) method, and compare it with Vogel and Oman's product preconditioner (see SIAM J. Sci. Stat. Comput., vol. 17, pp. 227-238, 1996, and IEEE Trans. Image Processing, vol. 7, pp. 813-824, 1998).
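
    A hedged sketch of the idea, assuming the operator is the sum of a symmetric (Gaussian) blur normal-equations term and a discrete Laplacian: a cosine-transform preconditioner approximates the operator by a matrix the 2-D DCT diagonalizes, so applying its inverse costs two fast transforms. All names and parameter values are illustrative, and the eigenvalue estimate below is a generic trick for DCT-diagonalizable approximations, not the paper's construction.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter, laplace

beta = 0.05

def A(x):
    # Normal-equations operator: the Gaussian blur is symmetric, so H^T H x
    # is blurring twice; `laplace` supplies the elliptic regularization term.
    return gaussian_filter(gaussian_filter(x, 2.0), 2.0) - beta * laplace(x)

# Eigenvalue estimate for the DCT-diagonalizable approximation of A:
# if A = C^T D C (C = orthonormal 2-D DCT), then d = dctn(A e1) / dctn(e1).
n = 64
e1 = np.zeros((n, n)); e1[0, 0] = 1.0
d = dctn(A(e1), norm='ortho') / dctn(e1, norm='ortho')

def M_inv(r):
    # Apply the preconditioner M^{-1} via two fast DCTs.
    return idctn(dctn(r, norm='ortho') / d, norm='ortho')

def pcg(b, n_iter=50, tol=1e-8):
    # Preconditioned conjugate gradient for A x = b.
    x = np.zeros_like(b)
    r = b - A(x)
    z = M_inv(r); p = z.copy()
    rz = np.vdot(r, z)
    for _ in range(n_iter):
        Ap = A(p)
        step = rz / np.vdot(p, Ap)
        x += step * p
        r -= step * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

b = np.random.default_rng(1).normal(size=(n, n))
x = pcg(b)
```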

  • Annular filters for binary images

    Publication Year: 1999, Page(s): 1330-1340
    Cited by: Papers (2)

    A binary annular filter removes isolated points in the foreground and the background of an image. Here, the adjective “isolated” refers to an underlying adjacency relation between pixels, which may be different for foreground and background pixels. In this paper, annular filters are represented in terms of switch pairs. A switch pair consists of two operators which govern the removal of points from foreground and background, respectively. In the case of annular filters, switch pairs are completely determined by foreground and background adjacency. It is shown that a specific triangular condition in terms of both adjacencies is required to establish idempotence of the resulting annular filter. In the case of translation-invariant operators, an annular filter takes the form X→((X⊕A)∩X)∪(X⊖B), where A and B are structuring elements satisfying some further conditions: when A∩B∩(A⊕B)≠Ø, it is an (idempotent) morphological filter; when A∪B⊂A⊕B, it is a strong filter, and in this case it can be obtained by composing, in either order, the annular opening X→(X⊕A)∩X and the annular closing X→X∪(X⊖B).
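
    The translation-invariant form lends itself to a very short sketch with standard morphology. The ring-shaped structuring element below (the 8-neighbour ring with the centre excluded) is an illustrative choice, not one of the conditions analysed in the paper.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

# 8-neighbour ring, centre excluded: under A, a foreground pixel survives
# only if at least one of its neighbours is also foreground.
A = np.array([[1, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=bool)
B = A  # same adjacency for the background side, for simplicity

def annular_filter(X):
    keep = binary_dilation(X, structure=A) & X  # annular opening: drop isolated 1s
    fill = binary_erosion(X, structure=B)       # union fills isolated 0s
    return keep | fill                          # ((X (+) A) ∩ X) ∪ (X (-) B)

X = np.zeros((7, 7), dtype=bool)
X[1:4, 1:4] = True      # a small blob survives
X[5, 5] = True          # an isolated pixel is removed
print(annular_filter(X).astype(int))
```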

  • Classification of binary textures using the 1-D Boolean model

    Publication Year: 1999, Page(s): 1457-1462
    Cited by: Papers (7)

    The one-dimensional (1-D) Boolean model is used to calculate features for the description of binary textures. Each two-dimensional (2-D) texture is converted into several 1-D strings by scanning it in vertical raster, horizontal raster, or Hilbert scan order. Several different probability distributions are used to model the lengths of the segments created this way, so each texture is described by a set of Boolean models. Classification is performed by calculating the overlapping probability between corresponding models. The method is evaluated with the help of 32 different binary textures, and the pros and cons of the approach are discussed.
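
    A sketch of the string-extraction step under simple assumptions: scan the texture in two raster orders (a Hilbert scan would be added the same way), collect the lengths of the runs of 1s, and fit a length distribution. The geometric fit at the end is an illustrative choice, not necessarily one of the paper's models.

```python
import numpy as np

def run_lengths(bits):
    # Lengths of maximal runs of 1s in a 1-D binary sequence.
    b = np.asarray(bits, dtype=int)
    edges = np.diff(np.concatenate(([0], b, [0])))
    return np.flatnonzero(edges == -1) - np.flatnonzero(edges == 1)

def scans(img):
    yield img.reshape(-1)    # horizontal raster
    yield img.T.reshape(-1)  # vertical raster

texture = np.random.default_rng(1).random((64, 64)) < 0.3
lengths = np.concatenate([run_lengths(s) for s in scans(texture)])
p_hat = 1.0 / lengths.mean()   # geometric segment-length model: P(L=k) = p(1-p)^(k-1)
print(len(lengths), p_hat)
```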

  • Subband domain coding of binary textual images for document archiving

    Publication Year: 1999, Page(s): 1438-1446
    Cited by: Papers (5)

    In this work, a subband domain textual image compression method is developed. The document image is first decomposed into subimages using binary subband decompositions. Next, the character locations in the subbands and the symbol library consisting of the character images are encoded. The method is suitable for keyword search in the compressed data. It is observed that very high compression ratios are obtained with this method. Simulation studies are presented.

  • Compression of complex-valued SAR images

    Publication Year: 1999, Page(s): 1483-1487
    Cited by: Papers (11) | Patents (8)

    Synthetic aperture radars (SAR) are coherent imaging systems that produce complex-valued images of the ground. Because modern systems can generate large amounts of data, there is substantial interest in applying image compression techniques to these products. We examine the properties of complex-valued SAR images relevant to the task of data compression. We advocate the use of transform-based compression methods, but employ radically different quantization strategies from those commonly used for incoherent optical images. Theory, methodology, and examples are presented.

  • Restoration of lossy compressed noisy images

    Publication Year: 1999, Page(s): 1348-1360
    Cited by: Papers (1) | Patents (16)

    Noise degrades the performance of any image compression algorithm. However, at very low bit rates, image coders effectively filter noise that may be present in the image, thus enabling the coder to operate closer to the noise-free case. Unfortunately, at these low bit rates the quality of the compressed image is reduced and very distinctive coding artifacts occur. This paper proposes a combined restoration of the compressed image from both the artifacts introduced by the coder and the additive noise. The proposed approach is applied to images corrupted by data-dependent Poisson noise and to images corrupted by film-grain noise when compressed using a block transform coder such as JPEG. The approach has proved to be effective in terms of visual quality and peak signal-to-noise ratio (PSNR) when tested on simulated and real images.

  • Nonlinear operator for oriented texture

    Publication Year: 1999, Page(s): 1395-1407
    Cited by: Papers (40) | Patents (1)

    Texture is an important part of the visual world of animals and humans, and their visual systems successfully detect, discriminate, and segment texture. Relatively recently, progress has been made concerning structures in the brain that are presumably responsible for texture processing. Neurophysiologists have reported the discovery of a new type of orientation-selective neuron in areas V1 and V2 of the visual cortex of monkeys, which they called grating cells. Such cells respond vigorously to a grating of bars of appropriate orientation, position, and periodicity. In contrast to other orientation-selective cells, grating cells respond very weakly or not at all to single bars which are not part of a grating. Elsewhere we proposed a nonlinear model of this type of cell and demonstrated the advantages of grating cells with respect to the separation of texture and form information. In this paper, we use grating cell operators to obtain features and compare these operators in texture analysis tasks with commonly used feature-extracting operators, such as Gabor-energy and co-occurrence matrix operators. For a quantitative comparison of the discrimination properties of the operators concerned, a new method is proposed which is based on the Fisher (1923) linear discriminant and the Fisher criterion. The operators are also qualitatively compared with respect to their ability to separate texture from form information and their suitability for texture segmentation.
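
    The Gabor-energy baseline that the grating-cell operators are compared against can be sketched in a few lines: filter the image with a quadrature pair of Gabor kernels and take the pointwise energy, over a small bank of orientations and frequencies. The bank parameters below are example values; the grating-cell operator itself adds a nonlinear stage not shown here.

```python
import numpy as np
from skimage.filters import gabor

def gabor_energy_features(img, frequencies=(0.1, 0.2), n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            even, odd = gabor(img, frequency=f, theta=theta)  # quadrature pair
            feats.append(np.hypot(even, odd))                 # Gabor energy
    return np.stack(feats, axis=-1)  # one feature plane per filter

img = np.random.default_rng(2).random((64, 64))
print(gabor_energy_features(img).shape)  # (64, 64, 8)
```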

  • Optimally isotropic Laplacian operator

    Publication Year: 1999, Page(s): 1467-1472
    Cited by: Patents (1)

    Laplacian operators used in the literature for digital image processing are not rotationally invariant. We examine the anisotropy of 3×3 Laplacian operators for images quantized in square pixels, and find the operator which has the minimum overall anisotropy.
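
    For illustration: a 3×3 Laplacian can be written as a blend of the 5-point “cross” stencil and the diagonal stencil, and the blend weight controls the anisotropy. The widely quoted low-anisotropy 9-point stencil below corresponds to γ = 1/3; the paper's optimal weighting is derived from a full anisotropy criterion and may differ.

```python
import numpy as np

cross = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
diag = np.array([[1, 0, 1],
                 [0, -4, 0],
                 [1, 0, 1]], dtype=float) / 2.0   # diagonal spacing is sqrt(2)

def laplacian_kernel(gamma):
    # Both stencils sum to zero, so any blend is a valid Laplacian estimate.
    return (1 - gamma) * cross + gamma * diag

print(6 * laplacian_kernel(1 / 3))   # -> [[1, 4, 1], [4, -20, 4], [1, 4, 1]]
```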

  • Wavelet-based Rician noise removal for magnetic resonance imaging

    Publication Year: 1999, Page(s): 1408-1419
    Cited by: Papers (81) | Patents (3)

    It is well known that magnetic resonance magnitude image data obey a Rician distribution. Unlike additive Gaussian noise, Rician “noise” is signal-dependent, and separating signal from noise is a difficult task. Rician noise is especially problematic in low signal-to-noise ratio (SNR) regimes where it not only causes random fluctuations, but also introduces a signal-dependent bias to the data that reduces image contrast. This paper studies wavelet-domain filtering methods for Rician noise removal. We present a novel wavelet-domain filter that adapts to variations in both the signal and the noise.
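
    A hedged sketch of the general recipe, assuming a known noise level σ: work on the squared magnitude, whose Rician bias is the constant 2σ², then shrink wavelet detail coefficients. The soft threshold used here is a generic choice; the paper's filter adapts the amount of shrinkage to the local signal and noise.

```python
import numpy as np
import pywt

def denoise_rician(mag, sigma, wavelet='db4', level=3):
    sq = mag.astype(float) ** 2 - 2.0 * sigma ** 2       # unbiased squared magnitude
    coeffs = pywt.wavedec2(sq, wavelet, level=level)
    thr = 3.0 * sigma ** 2                               # generic threshold scale
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode='soft') for d in detail)
        for detail in coeffs[1:]
    ]
    sq_hat = pywt.waverec2(coeffs, wavelet)[:mag.shape[0], :mag.shape[1]]
    return np.sqrt(np.clip(sq_hat, 0.0, None))           # back to a magnitude image

rng = np.random.default_rng(3)
truth = np.zeros((64, 64)); truth[16:48, 16:48] = 100.0
noisy = np.hypot(truth + rng.normal(0, 10, truth.shape),  # Rician magnitude data
                 rng.normal(0, 10, truth.shape))
clean = denoise_rician(noisy, sigma=10.0)
```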

  • Bayesian image reconstruction from partial image and aliased spectral intensity data

    Publication Year: 1999, Page(s): 1420-1434

    An image reconstruction problem motivated by X-ray fiber diffraction analysis is considered. The experimental data are sums of the squares of the amplitudes of particular sets of Fourier coefficients of the electron density, and a part of the electron density is known. The image reconstruction problem is to estimate the unknown part of the electron density, the “image.” A Bayesian approach is taken in which a prior model for the image is based on the fact that it consists of atoms, i.e., the unknown electron density consists of separated, sharp peaks. Currently used heuristic methods are shown to correspond to certain maximum a posteriori estimates of the Fourier coefficients. An analytical solution for the Bayesian minimum mean-square-error estimate is derived. Simulations show that the minimum mean-square-error estimate gives good results, even when there is considerable data loss, and outperforms the maximum a posteriori estimates.

  • Theory of projection onto the narrow quantization constraint set and its application

    Publication Year: 1999, Page(s): 1361-1373
    Cited by: Papers (25)

    Since the postprocessing of coded images using a priori information depends on the constraints imposed on the coded images, it is important to utilize constraints that are best suited to postprocessing techniques. Among the constraint sets, the quantization constraint set (QCS) is commonly used in iterative algorithms, especially those based on the theory of projections onto convex sets (POCS). The converged image of such an iteration is usually a boundary point of the QCS, yet the original image can be expected to lie inside the QCS. In order to obtain an image inside the QCS, we propose a new convex constraint set, a subset of the QCS called the narrow QCS (NQCS), as a substitute for the QCS. To demonstrate that the NQCS works better than the QCS on natural images, we present a mathematical analysis with examples and simulations, reformulating the iterative algorithm of the constrained minimization problem and of the POCS using probability theory. Since the initial image of the iteration is the centroid of the QCS, we conclude that the first iteration is enough to recover the coded image, so no convergence theory is needed.
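
    For concreteness, a minimal sketch of the projection itself for one block of transform coefficients, under the usual uniform-quantizer assumption: each coefficient is clamped to the interval it was decoded from, and an NQCS-style projection simply shrinks those intervals about their centroids by a factor s < 1. Starting the iteration from the centroids (the dequantized image itself) is what makes the first projection step decisive in the analysis above.

```python
import numpy as np

def project_qcs(coeffs, levels, q_step, s=1.0):
    """Project coefficients onto the (narrow) quantization constraint set.

    coeffs:  current coefficient estimate for one block
    levels:  transmitted quantization indices for that block
    q_step:  quantizer step size(s)
    s:       interval width factor (s = 1.0 -> QCS; s < 1.0 -> NQCS)
    """
    centers = levels * q_step              # centroids = dequantized values
    half = 0.5 * s * q_step
    return np.clip(coeffs, centers - half, centers + half)

q = 16.0
levels = np.array([[4, -2], [0, 1]])
estimate = np.array([[70.0, -40.0], [5.0, 17.0]])
print(project_qcs(estimate, levels, q, s=0.5))
```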

  • On independent color space transformations for the compression of CMYK images

    Publication Year: 1999, Page(s): 1446-1451
    Cited by: Patents (1)

    Device- and image-independent color space transformations for the compression of CMYK images were studied. A new transformation (to a YYCC color space) was developed and compared to known ones. Several tests were conducted, leading to two notable conclusions: color transformations are not always advantageous over independent compression of the CMYK color planes, and chrominance subsampling is rarely advantageous in this context. It is also shown that the transformation to YYCC consistently outperforms the transformation to YCbCrK, while being competitive with the image-dependent KLT-based approach.

  • Semi-fixed-length motion vector coding for H.263-based low bit rate video compression

    Publication Year: 1999, Page(s): 1451-1455
    Cited by: Papers (1)

    We present a semi-fixed-length motion vector coding method for H.263-based low bit rate video compression. The method exploits structural constraints within the motion field. The motion vectors are encoded using semi-fixed-length codes, yielding essentially the same levels of rate-distortion performance and subjective quality achieved by H.263's Huffman-based variable length codes in a noiseless environment. However, such codes provide substantially higher error resilience in a noisy environment.

  • Real-time DSP implementation for MRF-based video motion detection

    Publication Year: 1999, Page(s): 1341-1347
    Cited by: Papers (11) | Patents (19)

    This paper describes the real-time implementation of a simple and robust motion detection algorithm based on Markov random field (MRF) modeling. MRF-based algorithms often require a significant amount of computation; the intrinsic parallelism of MRF modeling has led most implementations toward parallel machines and neural networks, but none of these approaches offers an efficient solution for real-world (i.e., industrial) applications. Here, an alternative implementation for the problem at hand is presented, yielding a complete, efficient, and autonomous real-time system for motion detection. The system is based on a hybrid architecture that associates pipeline modules with one asynchronous module to perform the whole process, from video acquisition to the visualization of moving-object masks. A board prototype is presented, and a processing rate of 15 images/s is achieved, showing the validity of the approach.
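
    A toy sketch of the modeling side (not the DSP pipeline): threshold the frame difference for an initial label field, then run a few synchronous relaxation sweeps that trade data fidelity against Ising-style spatial smoothness. Thresholds, weights, and the update rule are illustrative simplifications of MRF-based detection.

```python
import numpy as np

def detect_motion(prev, curr, tau=15.0, beta=2.0, n_sweeps=3):
    obs = np.abs(curr.astype(float) - prev.astype(float))
    data1 = np.maximum(tau - obs, 0.0)   # cost of labelling a pixel "moving"
    data0 = np.maximum(obs - tau, 0.0)   # cost of labelling it "static"
    labels = (obs > tau).astype(float)   # initial label field
    for _ in range(n_sweeps):
        p = np.pad(labels, 1)            # count "moving" 4-neighbours
        nbr = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        cost1 = data1 + beta * (4 - nbr) # smoothness: disagreeing neighbours
        cost0 = data0 + beta * nbr
        labels = (cost1 < cost0).astype(float)
    return labels.astype(bool)           # moving-object mask

rng = np.random.default_rng(4)
f0 = rng.normal(128, 5, (32, 32)); f1 = f0.copy(); f1[10:20, 10:20] += 60
mask = detect_motion(f0, f1)
```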

  • Finding shape axes using magnetic fields

    Publication Year: 1999, Page(s): 1388-1394
    Cited by: Papers (5)

    This paper presents a novel method, based on magnetic field principles, for obtaining the axes of shapes. The method is based on directional information of the shape's boundary. By simulating a parallel algorithm, we are able to generate the inner as well as the outer axes (axes of concavities) of the shape. The preprocessing phase for this algorithm involves obtaining the shape's gradient. Each point of the gradient is substituted by a minute magnetic dipole. The cumulative magnetic field due to these dipoles is computed at all points in the image in a one-pass algorithm. The magnitude of the final magnetic vector field has valleys that are created by mutual, directionally balanced cancellations of opposing boundary segments. These valleys signify the axes of the shape, and the axes are obtained by performing a valley search. The magnetic field modeling (MFM) method has an advantage over previous approaches since it utilizes not only the location information of the boundary, but also its directional information. As demonstrated, experimental results of the MFM method are much improved compared to other skeletonization algorithms, which tend to generate spurious and noisy axes.
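
    A toy sketch of the field computation under simplifying assumptions: each strong gradient pixel becomes a small 2-D dipole oriented along the gradient, the dipole fields are summed at every pixel, and shape axes would then be read off as valleys of the field magnitude. The 2-D dipole formula and normalization here are illustrative, not the paper's.

```python
import numpy as np

def mfm_field_magnitude(img, grad_thresh=0.1):
    gy, gx = np.gradient(img.astype(float))
    ys, xs = np.nonzero(np.hypot(gx, gy) > grad_thresh)  # dipole sites
    H, W = img.shape
    Y, X = np.mgrid[0:H, 0:W]
    field = np.zeros((H, W, 2))
    for x0, y0 in zip(xs, ys):
        mx, my = gx[y0, x0], gy[y0, x0]        # dipole moment ~ local gradient
        rx, ry = X - x0, Y - y0
        r2 = np.where(rx * rx + ry * ry == 0, 1, rx * rx + ry * ry)
        rhx, rhy = rx / np.sqrt(r2), ry / np.sqrt(r2)
        mdot = mx * rhx + my * rhy
        field[..., 0] += (2 * mdot * rhx - mx) / r2    # 2-D dipole field
        field[..., 1] += (2 * mdot * rhy - my) / r2
    return np.hypot(field[..., 0], field[..., 1])      # valleys ~ shape axes

shape = np.zeros((48, 48)); shape[12:36, 18:30] = 1.0  # a simple rectangle
magnitude = mfm_field_magnitude(shape)
```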

  • Optimization of MPEG-2 SNR scaleable codecs

    Publication Year: 1999, Page(s): 1435-1438
    Cited by: Papers (4)

    It is shown how the signal-to-noise ratio (SNR) scaleable coder can benefit from optimizing quantized discrete cosine transform (DCT) coefficients, in a rate-distortion context, in order to reduce bit-rate overheads. The technique is based on adjusting the quantized coefficients rather than dropping them, since the former gives finer control over rate-distortion trade-offs. The widely used Lagrangian optimization technique is then applied to arrive at the optimally adjusted coefficient block. We show that such an optimization is very efficient for the second layer, but has little effect on the base layer.
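
    An illustrative sketch of per-coefficient Lagrangian adjustment, with a toy rate model standing in for the real variable-length-code table that an actual MPEG-2 implementation would consult: each quantized level may move to a neighbouring level when that lowers D + λR.

```python
import numpy as np

def rate_bits(level):
    # Toy rate model: zero is cheapest, larger levels cost more bits.
    return 0.0 if level == 0 else 2.0 + np.log2(1 + abs(level))

def adjust_block(orig, levels, q, lam):
    # orig: unquantized coefficients; levels: quantized levels; q: step size.
    out = levels.copy()
    for idx, (c, l) in enumerate(zip(orig.flat, levels.flat)):
        best, best_cost = l, None
        for cand in (l - 1, l, l + 1):          # adjust rather than drop
            cost = (c - cand * q) ** 2 + lam * rate_bits(cand)
            if best_cost is None or cost < best_cost:
                best, best_cost = cand, cost
        out.flat[idx] = best
    return out

rng = np.random.default_rng(5)
c = rng.normal(0, 30, (8, 8)); q = 16.0
lv = np.round(c / q).astype(int)
adjusted = adjust_block(c, lv, q, lam=40.0)
```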

  • Regularization of optic flow estimates by means of weighted vector median filtering

    Publication Year: 1999, Page(s): 1462-1467
    Cited by: Papers (6)

    Vector median filtering has recently been proposed as an effective method to refine estimated velocity fields. Here, the use of weighted vector median filtering is suggested to improve the regularization of the optic flow field across motion boundaries. Information about the confidence of the estimated pixel velocities is exploited in the choice of the filter weights. Experimental results, on both synthetic and real-world sequences, show the effectiveness of the proposed procedure.
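
    The filter itself is compact enough to sketch directly: over each 3×3 neighbourhood, output the flow vector that minimizes the confidence-weighted sum of distances to its neighbours. The confidence measure is taken as given here; how it is computed is the part this sketch omits.

```python
import numpy as np

def weighted_vector_median(vectors, weights):
    # vectors: (n, 2) candidate flow vectors; weights: (n,) confidences.
    d = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    return vectors[np.argmin(d @ weights)]   # cost[k] = sum_i w_i ||v_k - v_i||

def filter_flow(flow, conf):
    # Apply the filter at every interior pixel of an (H, W, 2) flow field.
    out = flow.copy()
    H, W, _ = flow.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            win = flow[y-1:y+2, x-1:x+2].reshape(-1, 2)
            w = conf[y-1:y+2, x-1:x+2].reshape(-1)
            out[y, x] = weighted_vector_median(win, w)
    return out

rng = np.random.default_rng(6)
flow = rng.normal(size=(16, 16, 2)); conf = rng.random((16, 16))
smoothed = filter_flow(flow, conf)
```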

  • Weighted universal image compression

    Publication Year: 1999, Page(s): 1317-1329
    Cited by: Papers (11)

    We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
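
    The two-stage structure can be sketched generically: for each block, every code in the family is tried, and the index of the best one is sent ahead of the block coded with it. The random codebooks below are stand-ins for the designed collections (of vector quantizers, bit allocations, or transforms) studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
family = [rng.normal(size=(16, 4)) for _ in range(8)]  # 8 codebooks, 16 codewords of dim 4

def encode_block(block):
    # First stage: pick the codebook; second stage: pick the codeword within it.
    best = None
    for cb_idx, cb in enumerate(family):
        d = np.sum((cb - block) ** 2, axis=1)
        j = int(np.argmin(d))
        if best is None or d[j] < best[0]:
            best = (d[j], cb_idx, j)
    _, cb_idx, code_idx = best
    return cb_idx, code_idx        # first-stage index + second-stage codeword

def decode_block(cb_idx, code_idx):
    return family[cb_idx][code_idx]

block = np.array([0.5, -1.0, 0.2, 0.0])
approx = decode_block(*encode_block(block))
```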

  • Inverse halftoning using wavelets

    Publication Year: 1999, Page(s): 1479-1483
    Cited by: Papers (22) | Patents (1)

    This work introduces a new approach to inverse halftoning using nonorthogonal wavelets. The distinct features of this wavelet-based approach are: (1) edge information in the highpass wavelet images of a halftone image is extracted and used to assist inverse halftoning; (2) cross-scale correlations in the multiscale wavelet decomposition are used for removing background halftoning noise while preserving important edges in the wavelet lowpass image; and (3) experiments show that our simple wavelet-based approach outperforms the best results obtained from inverse halftoning methods published in the literature, which are iterative in nature.


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

