
IEEE Transactions on Image Processing

Issue 7 • July 2000


Displaying Results 1 - 15 of 15
  • Fast and accurate edge-based segmentation with no contour smoothing in 2-D real images

    Page(s): 1232 - 1237

    We propose an edge-based segmentation algorithm built on a new type of active contour which is fast, has low computational complexity, and does not introduce unwanted smoothing on the retrieved contours. The contours are always returned as closed chains of points, resulting in a very useful base for subsequent shape representation techniques.

  • A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering

    Page(s): 1238 - 1248

    In this paper, an unsupervised image segmentation technique is presented which combines pyramidal image segmentation with the fuzzy c-means clustering algorithm. Each layer of the pyramid is split into a number of regions by a root labeling technique, and fuzzy c-means is then used to merge the regions of the layer with the highest image resolution. A cluster validity functional is used to find the optimal number of objects automatically. Segmentation of a number of synthetic as well as clinical images is illustrated, and two fully automatic segmentation approaches that determine the left ventricular (LV) volume in 140 cardiovascular magnetic resonance (MR) images are evaluated. In the first approach, fuzzy c-means is applied without pyramids; in the second, the regions generated by pyramidal segmentation are merged by fuzzy c-means. When images were segmented with fuzzy c-means alone, the correlation coefficients between manually and automatically defined LV lumen were 0.86 for all 140 images and 0.79 for the 20 end-diastolic images. These coefficients increased to 0.90 and 0.93 when pyramidal segmentation was combined with fuzzy c-means. The method can be applied to any dimensional representation and at any resolution level of an image series. The evaluation study shows good performance in detecting the LV lumen in MR images.

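    The merging step above relies on standard fuzzy c-means. A minimal NumPy sketch of that clustering step, assuming region features are packed into an array (function name and parameters are illustrative, not the authors' code):

        import numpy as np

        def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
            """Fuzzy c-means on feature vectors x of shape (n_samples, n_features)."""
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), c))
            u /= u.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
            for _ in range(iters):
                w = u ** m                             # fuzzified memberships
                centers = w.T @ x / w.sum(axis=0)[:, None]
                d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
                inv = d ** (-2.0 / (m - 1))            # standard membership update
                u = inv / inv.sum(axis=1, keepdims=True)
            return u, centers

    In the paper's pipeline, the samples would be the regions produced by the pyramid's root labeling step (e.g. one mean grey value per region), and the validity functional would then choose c.
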
  • Memory efficient propagation-based watershed and influence zone algorithms for large images

    Page(s): 1185 - 1199

    Propagation front, or grassfire, methods are very popular in image processing because of their efficiency and their inherent geodesic nature. However, because of their random-access nature, they are inefficient on large images that cannot fit in available random-access memory. We explore ways to increase the memory efficiency of two algorithms that use propagation fronts: the skeletonization by influence zones and the watershed transform. Two algorithms are presented for the skeletonization by influence zones. The first computes the skeletonization on surfaces without storing the enclosing volume. The second performs the skeletonization without any region reference, using only the propagation fronts. The watershed transform algorithm that was developed keeps in memory the propagation fronts and only one grey level of the image at a time. All three algorithms use much less memory than those presented in the literature so far. Several techniques were developed in this work to minimize the cost of the set operations on which the propagation fronts rely, including fast search methods, double propagation fronts, and directional propagation.

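    As a reference point for the memory-optimized variants described above, the sketch below shows the basic in-core propagation-front (grassfire) computation of influence zones, growing each labelled seed until the fronts meet (a hypothetical minimal version, not the paper's algorithm):

        from collections import deque
        import numpy as np

        def influence_zones(labels):
            """Multi-source breadth-first propagation: pixels labelled 0 receive
            the label of the nearest seed region (label > 0)."""
            out = labels.copy()
            front = deque(zip(*np.nonzero(labels)))     # initial propagation front
            h, w = out.shape
            while front:
                y, x = front.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and out[ny, nx] == 0:
                        out[ny, nx] = out[y, x]         # front advances one pixel
                        front.append((ny, nx))
            return out

    The paper's contribution is precisely to avoid holding the full output array at once, keeping only the fronts (and, for the watershed, one grey level at a time).
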
  • Texture segmentation using modulated wavelet transform

    Page(s): 1299 - 1302

    The wavelet (packet) transform has been widely used for texture analysis; however, the features it extracts from similar textures with symmetric orientations are indistinguishable. Motivated by the AM-FM representation, the so-called modulated wavelet (packet) transform, which can be implemented efficiently by conventional pyramid (tree) structured algorithms, is developed. The performance of this new transform is demonstrated on the segmentation of Brodatz (1966) textures and an aerial image of San Francisco.

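    One way to realize a modulated transform, sketched in 1-D below, is to demodulate the input by a complex carrier before a standard wavelet step, which is equivalent up to phase to modulating the filters; the carrier frequency and the Haar choice are illustrative assumptions, not the authors' exact construction:

        import numpy as np

        def haar_step(x):
            """One level of the Haar wavelet transform: (approximation, detail)."""
            p = x.reshape(-1, 2)
            return (p[:, 0] + p[:, 1]) / np.sqrt(2), (p[:, 0] - p[:, 1]) / np.sqrt(2)

        def modulated_haar_step(x, w0):
            """Demodulate by exp(-j*w0*n) first, so energy near +w0 and -w0
            (symmetric orientations in the 2-D case) lands in different subbands."""
            carrier = np.exp(-1j * w0 * np.arange(len(x)))
            return haar_step(x * carrier)
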
  • Region-based fractal image compression

    Page(s): 1171 - 1184

    A fractal coder partitions an image into blocks that are coded via self-references to other parts of the image itself. We present a fractal coder that derives highly image-adaptive partitions and corresponding fractal codes in a time-efficient manner using a region-merging approach. The proposed merging strategy leads to improved rate-distortion performance compared to previously reported pure fractal coders, and it is faster than other state-of-the-art fractal coding methods.

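    At the heart of any fractal coder is the search, for each range block, over a pool of (downsampled) domain blocks together with an affine grey-level map. A hedged sketch of that inner step (names and the contractivity bound are illustrative):

        import numpy as np

        def best_fractal_match(range_block, domain_pool):
            """Return (error, domain index, contrast s, brightness o) for the
            domain block minimizing ||s*d + o - r||^2 over the pool."""
            r = range_block.ravel()
            best = (np.inf, -1, 0.0, 0.0)
            for k, dom in enumerate(domain_pool):
                d = dom.ravel()
                s = np.clip(np.polyfit(d, r, 1)[0], -0.9, 0.9)  # keep map contractive
                o = r.mean() - s * d.mean()                     # least-squares offset
                err = float(np.sum((s * d + o - r) ** 2))
                if err < best[0]:
                    best = (err, k, s, o)
            return best

    The region-merging coder replaces the fixed block partition with merged irregular regions, but the self-referential matching idea is the same.
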
  • A JPEG variable quantization method for compound documents

    Page(s): 1282 - 1287

    We present a JPEG-compliant method for the efficient compression of compound documents using variable quantization. Based on the DCT activity of each 8×8 block, our scheme automatically adjusts the quantization scaling factors so that text blocks are compressed at higher quality than image blocks. Results from three different quantization mappings are also reported.

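    A sketch of the block classification this abstract describes: measure each 8×8 block's AC activity in the DCT domain and map high-activity (text-like) blocks to a finer quantizer scale. The threshold and the two scale values below are made-up placeholders, not the paper's mappings:

        import numpy as np

        def dct_matrix(n=8):
            """Orthonormal DCT-II basis matrix."""
            k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
            c[0] /= np.sqrt(2)
            return c

        def block_scale_factor(block, threshold=100.0):
            """Smaller scaling factor = finer quantization for busy text blocks."""
            C = dct_matrix()
            coef = C @ (block - block.mean()) @ C.T    # 2-D DCT of the 8x8 block
            activity = np.abs(coef).sum() - np.abs(coef[0, 0])
            return 0.5 if activity > threshold else 1.0
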
  • Reconstruction of baseline JPEG coded images in error prone environments

    Page(s): 1292 - 1299

    A two-stage method for the reconstruction of missing data in the transmission of baseline JPEG coded images in error-prone environments is proposed. In the first stage, we estimate the values of the missing DC coefficients. As errors in estimating the missing DC values appear as a number of stripes across the image, a technique for removing such stripes is also developed. In the second stage, the data of missing blocks are reconstructed by exploiting the correlation between adjacent blocks. Simulation results indicate that our reconstruction method performs very well. The two key contributions of our method are that it does not assume nondifferential encoding of the DC coefficients, and that it performs well in the reconstruction of diagonal edges.

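    The first-stage DC estimation exploits the smoothness of the DC (block-mean) field. A deliberately simple stand-in for the paper's estimator, averaging whichever 4-neighbours survived transmission (missing entries marked NaN):

        import numpy as np

        def estimate_missing_dc(dc_grid, i, j):
            """Estimate a lost DC coefficient at block (i, j) from its available
            neighbours; the stripe-removal stage would refine this further."""
            h, w = dc_grid.shape
            vals = [dc_grid[y, x]
                    for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= y < h and 0 <= x < w and not np.isnan(dc_grid[y, x])]
            return float(np.mean(vals)) if vals else 0.0
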
  • A generalized interpolative vector quantization method for jointly optimal quantization, interpolation, and binarization of text images

    Page(s): 1272 - 1281

    This paper presents an approach for effectively combining interpolation with binarization of gray-level text images to reconstruct a high-resolution binary image from a lower-resolution gray-level one. We study two nonlinear interpolative techniques for text image interpolation. These nonlinear interpolation methods map quantized, low-dimensional 2×2 image blocks to higher-dimensional 4×4 (possibly binary) blocks using a table lookup operation. The first method performs interpolation of text images using context-based, nonlinear, interpolative vector quantization (NLIVQ). This system has a simple training procedure, and its performance (for gray-level high-resolution images) is comparable to that of our more sophisticated generalized interpolative VQ (GIVQ) approach, which is the second method. In GIVQ, we jointly optimize the quantizer and interpolator to find matched codebooks for the low- and high-resolution images. Then, to obtain the binary codebook that incorporates binarization with interpolation, we introduce a binary constrained optimization method using GIVQ. In order to incorporate the nearest neighbor constraint on the quantizer while minimizing the distortion in the interpolated image, a deterministic-annealing-based optimization technique is applied. With a few interpolation examples, we demonstrate the superior performance of this method over the NLIVQ method (especially for binary outputs) and over standard techniques such as bilinear interpolation and pixel replication.

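    The table-lookup interpolation both methods share reduces, at reconstruction time, to a nearest-codeword search in the low-resolution codebook followed by emission of the paired high-resolution codeword. A minimal sketch (codebooks assumed already trained, jointly in the GIVQ case):

        import numpy as np

        def ivq_interpolate(block2x2, low_codebook, high_codebook):
            """low_codebook: (K, 4) low-res codewords; high_codebook: (K, 16)
            matched high-res (possibly binary) codewords."""
            v = block2x2.ravel()
            idx = int(np.argmin(((low_codebook - v) ** 2).sum(axis=1)))
            return high_codebook[idx].reshape(4, 4)    # lookup does the interpolation
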
  • Frequency domain volume rendering by the wavelet X-ray transform

    Page(s): 1249 - 1261

    We describe a wavelet-based X-ray rendering method in the frequency domain with a smaller time complexity than wavelet splatting. Standard Fourier volume rendering is summarized, and interpolation and accuracy issues are briefly discussed. We review the implementation of the fast wavelet transform in the frequency domain. The wavelet X-ray transform is derived, and the corresponding Fourier-wavelet volume rendering algorithm (FWVR) is introduced. FWVR uses Haar or B-spline wavelets and linear or cubic spline interpolation. Various combinations are tested and compared with wavelet splatting (WS). We use medical MR and CT scan data, as well as a 3-D analytical phantom, to assess the accuracy, time complexity, and memory cost of both FWVR and WS. The differences between the two methods are enumerated.

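    Fourier volume rendering rests on the projection-slice theorem: the 2-D Fourier transform of an X-ray projection is a central slice of the volume's 3-D Fourier transform. A quick axis-aligned NumPy check of that identity (the paper's contribution is doing the slicing with wavelets and careful interpolation):

        import numpy as np

        vol = np.random.rand(32, 32, 32)
        proj_spatial = vol.sum(axis=0)                 # direct X-ray projection
        central_slice = np.fft.fftn(vol)[0, :, :]      # k = 0 plane of the 3-D FFT
        proj_fourier = np.fft.ifft2(central_slice).real
        assert np.allclose(proj_spatial, proj_fourier)

    Off-axis viewing directions require resampling the slice, which is where interpolation accuracy (linear vs. cubic spline) matters.
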
  • Video compression with binary tree recursive motion estimation and binary tree residue coding

    Page(s): 1288 - 1292

    Binary tree predictive coding (BTPC) is an efficient general-purpose still-image compression scheme, competitive with JPEG for natural image coding and with GIF for graphics. We report the extension of BTPC to video compression using motion estimation and compensation techniques which are simple, efficient, nonlinear, and predictive. The new methods, binary tree recursive motion estimation coding (BTRMEC) and binary tree residue coding (BTRC), exploit the hierarchical structure of BTPC, in the first case giving progressively refined motion estimates for increasing numbers of pels, and in the second case providing efficient residue coding. Compression results for BTRMEC and BTRC are compared against conventional block-based motion-compensated coding as provided by MPEG. They show that both BTRMEC and BTRC are efficient methods to code video sequences.

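    For contrast with the hierarchical BTRMEC estimates, the conventional baseline mentioned above is full-search block matching. A hedged sketch for one block (assumes float-valued frames to avoid unsigned overflow):

        import numpy as np

        def block_match(prev, cur, by, bx, bs=8, search=7):
            """Find the (dy, dx) in a +/-search window minimizing the sum of
            absolute differences against the reference frame."""
            block = cur[by:by + bs, bx:bx + bs]
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= prev.shape[0] - bs and 0 <= x <= prev.shape[1] - bs:
                        sad = np.abs(prev[y:y + bs, x:x + bs] - block).sum()
                        if sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            return best_mv
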
  • High performance scalable image compression with EBCOT

    Page(s): 1158 - 1170

    A new image compression algorithm is proposed, based on independent embedded block coding with optimized truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a “random access” property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. It also lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics capable of modeling the spatially varying visual masking phenomenon.

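    The “optimized truncation” in EBCOT is a per-block rate-distortion decision: given the cumulative (rate, distortion) after each coding pass, keep the prefix minimizing the Lagrangian cost D + λR. A toy sketch with made-up pass statistics:

        def best_truncation(rd_points, lam):
            """rd_points: cumulative (rate, distortion) candidates for one embedded
            block bit-stream; returns the index minimizing D + lam * R."""
            return min(range(len(rd_points)),
                       key=lambda i: rd_points[i][1] + lam * rd_points[i][0])

        # hypothetical (bytes, squared error) after successive coding passes
        passes = [(0, 1000.0), (20, 400.0), (55, 150.0), (120, 60.0)]
        print(best_truncation(passes, lam=5.0))

    Sweeping λ trades rate against distortion consistently across all blocks, which is what yields the embedded, scalable bit-stream.
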
  • Sonar image segmentation using an unsupervised hierarchical MRF model

    Page(s): 1216 - 1231

    This paper is concerned with hierarchical Markov random field (MRF) models and their application to sonar image segmentation. We present an original hierarchical segmentation procedure devoted to images given by a high-resolution sonar. The sonar image is segmented into two kinds of regions: shadow (corresponding to a lack of acoustic reverberation behind each object lying on the sea-bed) and sea-bottom reverberation. The proposed unsupervised scheme takes into account the variety of the laws in the distribution mixture of a sonar image, and it estimates both the parameters of the noise distributions and the parameters of the Markovian prior. For the estimation step, we use an iterative technique which combines a maximum likelihood approach (for the noise model parameters) with a least-squares method (for the MRF-based prior). In order to model more precisely the local and global characteristics of image content at different scales, we introduce a hierarchical model involving a pyramidal label field. It combines coarse-to-fine causal interactions with a spatial neighborhood structure. This new segmentation method, called the scale causal multigrid (SCM) algorithm, has been successfully applied to real sonar images and seems to be well suited to the segmentation of very noisy images. The experiments reported in this paper demonstrate that the discussed method performs better than other hierarchical schemes for sonar image segmentation.

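    As a much-simplified stand-in for the paper's scale causal multigrid estimation, the sketch below runs iterated conditional modes on a flat (single-scale) Potts MRF with a Gaussian data term; the class means, noise level, and smoothness weight are illustrative:

        import numpy as np

        def icm_potts(obs, means=(0.0, 1.0), sigma=0.3, beta=1.5, sweeps=5):
            """Each pixel greedily takes the label minimizing data cost plus
            beta times the number of disagreeing 4-neighbours."""
            lab = np.argmin([(obs - m) ** 2 for m in means], axis=0)
            h, w = obs.shape
            for _ in range(sweeps):
                for y in range(h):
                    for x in range(w):
                        nb = [lab[yy, xx] for yy, xx in
                              ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                              if 0 <= yy < h and 0 <= xx < w]
                        costs = [(obs[y, x] - m) ** 2 / (2 * sigma ** 2)
                                 + beta * sum(k != l for l in nb)
                                 for k, m in enumerate(means)]
                        lab[y, x] = int(np.argmin(costs))
            return lab
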
  • A new reconstruction approach for reflection mode diffraction tomography

    Page(s): 1262 - 1271

    Reflection-mode diffraction tomography (RM DT) is an inversion scheme used to reconstruct the acoustical refractive index distribution of a scattering object. In this work, we reveal the existence of statistically complementary information inherent in the backscattered data and propose reconstruction algorithms that exploit this information to achieve a bias-free reduction of image variance in RM DT images. Such a reduction of image variance can potentially enhance the detectability of subtle image features when the signal-to-noise ratio of the measured scattered data is low in RM DT. The proposed reconstruction algorithms are mathematically identical, but they propagate noise and numerical errors differently. We investigate theoretically, and validate numerically, the noise properties of images reconstructed using one of the reconstruction algorithms for several different multifrequency sources and uncorrelated data noise.

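    The statistical principle behind a bias-free variance reduction is that several unbiased estimates of the same quantity, with differently propagated noise, can be combined with weights summing to one; inverse-variance weights minimize the variance of the combination. A tiny numeric illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        truth = 2.0
        a = truth + rng.normal(0, 1.0, 100_000)   # unbiased, variance 1.0
        b = truth + rng.normal(0, 2.0, 100_000)   # unbiased, variance 4.0
        w = (1 / 1.0) / (1 / 1.0 + 1 / 4.0)       # inverse-variance weight = 0.8
        combo = w * a + (1 - w) * b               # still unbiased
        print(combo.mean(), combo.var())          # mean ~2.0, variance ~0.8 < 1.0
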
  • Motion-vector optimization of control grid interpolation and overlapped block motion compensation using iterated dynamic programming

    Page(s): 1145 - 1157

    The application of advanced motion compensation techniques, control grid interpolation (CGI) and overlapped block motion compensation (OBMC), to video coding systems provides significant performance advantages, in terms of compression ratio and visual quality, over traditional block-matching motion compensation. However, the two-dimensional (2-D) interdependence among motion vectors introduced by these compensation frameworks makes the problem of finding rate-distortion optimal motion vectors computationally prohibitive. Thus, iterative optimization techniques are often used to achieve good compensation performance. While most reported optimization algorithms adopt an approach that uses a block-matching algorithm to obtain an initial estimate and then successively optimizes each motion vector, the over-relaxed motion-vector dependency relations often result in considerable performance degradation. In view of this problem, we present a new optimization scheme for dependent motion-vector optimization problems based on dynamic programming. Our approach efficiently decomposes 2-D dependency problems into a series of one-dimensional (1-D) dependency problems. We show that a reliable initial estimate of the motion vectors can be obtained efficiently by considering only the dependency in the rate term. We also show that at the iterative optimization stage an effective logarithmic search strategy can be used with dynamic programming to reduce the complexity involved in distortion computation. Compared to conventional iterative approaches, our experimental results demonstrate that our algorithm provides superior rate and distortion performance while maintaining reasonable complexity.

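    The 1-D subproblems the paper iterates over can be solved exactly by Viterbi-style dynamic programming: along one row of blocks, each block's candidate motion vectors carry a distortion cost, and a rate term couples neighbouring choices. A sketch with a simple |Δv| rate model (the coupling model here is an illustrative assumption):

        import numpy as np

        def dp_row(dist, lam):
            """dist[b, v]: distortion of block b under candidate vector v.
            Returns the vector indices minimizing the sum of distortion plus
            lam * |v_b - v_{b-1}| along the row."""
            nb, nv = dist.shape
            cost, back = dist[0].astype(float), np.zeros((nb, nv), int)
            for b in range(1, nb):
                new_cost = np.empty(nv)
                for v in range(nv):
                    trans = cost + lam * np.abs(np.arange(nv) - v)  # rate coupling
                    back[b, v] = int(np.argmin(trans))
                    new_cost[v] = trans[back[b, v]] + dist[b, v]
                cost = new_cost
            path = [int(np.argmin(cost))]               # backtrack best sequence
            for b in range(nb - 1, 0, -1):
                path.append(int(back[b, path[-1]]))
            return path[::-1]
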
  • A Bayesian approach for the estimation and transmission of regularization parameters for reducing blocking artifacts

    Page(s): 1200 - 1215

    Block-based compression approaches for both still images and image sequences exhibit annoying blocking artifacts, primarily at high compression ratios. These artifacts are due to the independent processing (quantization) of the block-transformed values of the intensity or the displaced frame difference. We propose the application of the hierarchical Bayesian paradigm to the reconstruction of block discrete cosine transform (BDCT) compressed images and the estimation of the required parameters. We derive expressions for the iterative evaluation of these parameters by applying evidence analysis within the hierarchical Bayesian paradigm. The proposed method allows for the combination of parameters estimated at the coder and the decoder. The performance of the proposed algorithms is demonstrated experimentally.

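    In the hierarchical Bayesian setting, the regularization parameter is estimated rather than hand-tuned. A much-simplified 1-D stand-in for such iterative updates (the update rule below is a generic noise-to-prior energy ratio, not the paper's exact expressions):

        import numpy as np

        def restore_estimating_alpha(y, C, iters=10):
            """Alternate x = argmin ||y - x||^2 + alpha*||C x||^2 with a
            re-estimate of alpha from the current residual and prior energies."""
            n = len(y)
            x, alpha = y.astype(float).copy(), 1.0
            for _ in range(iters):
                x = np.linalg.solve(np.eye(n) + alpha * (C.T @ C), y)
                alpha = np.sum((y - x) ** 2) / (np.sum((C @ x) ** 2) + 1e-12)
            return x, alpha

        # e.g. C as a first-difference operator penalizing discontinuities:
        # C = np.eye(n) - np.eye(n, k=1)
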

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003