
IEEE Transactions on Image Processing

Issue 3 • March 1999


Displaying Results 1 - 15 of 15
  • A low-complexity region-based video coder using backward morphological motion field segmentation

    Publication Year: 1999, Page(s): 332 - 345
    Cited by: Papers (7)
    PDF (744 KB)

    We introduce a novel region-based video compression framework based on morphology to efficiently capture motion correspondences between consecutive frames in an image sequence. Our coder is built on the observation that the motion field associated with typical image sequences can be segmented into component motion subfield “clusters” associated with distinct objects or regions in the scene, and further, that these clusters can be efficiently captured using morphological operators in a “backward” framework that avoids the need to send region shape information. Region segmentation is performed directly on the motion field by introducing a small “core” for each cluster that captures the essential features of the cluster and reliably represents its motion behavior. Cluster matching is used in lieu of the conventional block matching methods of standard video coders to define a cluster motion representation paradigm. Furthermore, a region-based pel-recursive approach is applied to find the refinement motion field for each cluster, and the cluster motion prediction error image is coded by a novel adaptive scalar quantization method. Experimental results reveal a 10-20% reduction in prediction error energy and a 1-3 dB gain in the final reconstructed peak signal-to-noise ratio (PSNR) over the standard MPEG-1 coder at typical bit rates of 500 kb/s to 1 Mb/s on standard test sequences, while also requiring lower computational complexity.
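
    As a rough illustration of the core-extraction step described above, the sketch below thresholds a synthetic dense motion field and applies a morphological opening to isolate cluster "cores"; the field, threshold, and structuring element are illustrative assumptions, not values from the paper.

        # A minimal sketch (not the paper's coder): extract motion "cores" by
        # thresholding a dense motion field, applying a morphological opening,
        # and labelling the surviving clusters.  Field values are synthetic.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)
        u = rng.normal(0.0, 0.2, (64, 64))           # horizontal motion component
        v = rng.normal(0.0, 0.2, (64, 64))           # vertical motion component
        u[10:30, 10:30] += 3.0                       # a synthetic moving region

        magnitude = np.hypot(u, v)
        moving = magnitude > 1.0                     # crude motion / no-motion split
        core = ndimage.binary_opening(moving, structure=np.ones((5, 5)))
        labels, n_clusters = ndimage.label(core)
        print("clusters found:", n_clusters)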

  • Object matching algorithms using robust Hausdorff distance measures

    Publication Year: 1999, Page(s): 425 - 429
    Cited by: Papers (69) | Patents (2)
    PDF (148 KB)

    The Hausdorff distance (HD) is one of the commonly used measures for object matching. This work analyzes the conventional HD measures and proposes two robust HD measures, based on M-estimation and the least trimmed square (LTS), which are more efficient than the conventional measures. The matching performance of the conventional and proposed HD measures is compared by computer simulation on synthetic and real images.
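
    The sketch below shows the directed Hausdorff distance and a trimmed variant in the spirit of the LTS-based measure; the point sets and the trimming fraction are illustrative assumptions, not values from the paper.

        # A minimal sketch of the directed Hausdorff distance and a trimmed
        # (LTS-style) variant over 2-D point sets.
        import numpy as np

        def directed_hausdorff(a, b):
            # max over points of a of the distance to the nearest point of b
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
            return d.min(axis=1).max()

        def trimmed_hausdorff(a, b, keep=0.8):
            # average of the smallest `keep` fraction of nearest-neighbour
            # distances, which suppresses outlier points
            d = np.sort(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1))
            k = max(1, int(keep * len(d)))
            return d[:k].mean()

        a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [8.0, 8.0]])   # one outlier in a
        b = np.array([[0.1, 0.0], [1.0, 0.2], [0.0, 1.1]])
        print("directed HD:", round(directed_hausdorff(a, b), 3),
              "trimmed HD:", round(trimmed_hausdorff(a, b), 3))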

  • Two-dimensional matched filtering for motion estimation

    Publication Year: 1999, Page(s): 438 - 444
    Cited by: Papers (19) | Patents (18)
    PDF (948 KB)

    In this work, we describe a frequency-domain technique for the estimation of multiple superimposed motions in an image sequence. The least-squares optimum approach involves the computation of the three-dimensional (3-D) Fourier transform of the sequence, followed by the detection of one or more planes of high energy concentration in this domain. We present a more efficient algorithm, based on the properties of the Radon transform and the two-dimensional (2-D) fast Fourier transform, which sacrifices little performance for significant computational savings. We accomplish the motion detection and estimation by designing appropriate matched filters. The performance is demonstrated on two image sequences.
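
    As a simplified frequency-domain illustration of motion showing up as structure in the Fourier domain, the sketch below estimates a single global translation by phase correlation; it is not the paper's Radon-transform-based multi-motion method, and the frames and shift are synthetic assumptions.

        # Phase correlation for one global translation: a translating pattern
        # produces a linear phase in the Fourier domain, whose inverse FFT
        # concentrates into a single peak at the displacement.
        import numpy as np

        rng = np.random.default_rng(1)
        frame0 = rng.random((64, 64))
        frame1 = np.roll(frame0, shift=(3, -5), axis=(0, 1))   # known motion (3, -5)

        F0, F1 = np.fft.fft2(frame0), np.fft.fft2(frame1)
        cross = F1 * np.conj(F0)
        cross /= np.abs(cross) + 1e-12                          # keep phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        dy = dy - 64 if dy > 32 else dy                         # map to signed shifts
        dx = dx - 64 if dx > 32 else dx
        print("estimated shift:", (dy, dx))                     # expected (3, -5)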

  • Segmentation of textured images using a multiresolution Gaussian autoregressive model

    Publication Year: 1999, Page(s): 408 - 420
    Cited by: Papers (44) | Patents (1)
    PDF (888 KB)

    We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the class label pyramid. The models used in this paper incorporate correlations between different levels of both the observed image pyramid and the class label pyramid. The criterion used for segmentation is the minimization of the expected number of misclassified nodes in the multiresolution lattice. The estimate which satisfies this criterion is referred to as the “multiresolution maximization of the posterior marginals” (MMPM) estimate, and is a natural extension of the single-resolution “maximization of the posterior marginals” (MPM) estimate. Previous multiresolution segmentation techniques have been based on the maximum a posteriori (MAP) estimation criterion, which has been shown to be less appropriate for segmentation than the MPM criterion. It is assumed that the number of distinct textures in the observed image is known. The parameters of the MGAR model (the means, prediction coefficients, and prediction error variances of the different textures) are unknown, and a modified version of the expectation-maximization (EM) algorithm is used to estimate them. The parameters of the Gibbs distribution for the label pyramid are assumed to be known. Experimental results demonstrating the performance of the algorithm are presented.
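
    A heavily simplified, single-resolution sketch of posterior-marginal classification follows: each pixel is assigned to the Gaussian class with the largest posterior, with class parameters assumed known and no multiresolution MRF prior, so it only hints at the MPM-style decision rule used in the paper.

        # Per-pixel "posterior marginal" style decision under known Gaussian
        # class models (class means, variances, and priors are assumptions).
        import numpy as np

        rng = np.random.default_rng(2)
        # two synthetic "textures" stacked vertically
        image = np.concatenate([rng.normal(0.0, 1.0, (32, 64)),
                                rng.normal(2.0, 1.0, (32, 64))], axis=0)

        means = np.array([0.0, 2.0])
        variances = np.array([1.0, 1.0])
        priors = np.array([0.5, 0.5])

        # per-pixel log posterior (up to a constant) for each class
        log_post = (-0.5 * (image[..., None] - means) ** 2 / variances
                    - 0.5 * np.log(variances) + np.log(priors))
        labels = np.argmax(log_post, axis=-1)
        print("fraction labelled class 1:", labels.mean())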

  • Operational rate-distortion performance for joint source and channel coding of images

    Publication Year: 1999, Page(s): 305 - 320
    Cited by: Papers (35) | Patents (1)
    PDF (968 KB)

    This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes, with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology, applied to different schemes, results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
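
    The sketch below illustrates the underlying rate-allocation idea with a toy distortion model and a toy residual-error model (both assumptions), sweeping the share of a fixed rate devoted to channel protection; it does not reproduce the paper's wavelet/RCPC system.

        # For a fixed total rate, sweep the fraction spent on FEC and pick the
        # split minimising expected distortion.  D(R) and the residual-error
        # curve below are illustrative assumptions only.
        import numpy as np

        total_rate = 2.0                              # bits/pixel on the channel
        sigma2 = 1.0                                  # source variance for the toy D(R)
        protection = np.linspace(0.0, 0.9, 10)        # fraction of rate spent on FEC

        source_rate = total_rate * (1.0 - protection)
        d_source = sigma2 * 2.0 ** (-2.0 * source_rate)    # toy quantiser distortion
        p_error = 0.1 * np.exp(-8.0 * protection)          # toy residual error rate
        expected_d = (1.0 - p_error) * d_source + p_error * sigma2

        best = int(np.argmin(expected_d))
        print("best FEC share:", round(protection[best], 2),
              "expected distortion:", round(expected_d[best], 4))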

  • Fast algorithms for the estimation of motion vectors

    Publication Year: 1999, Page(s): 435 - 438
    Cited by: Papers (40) | Patents (18)
    PDF (104 KB)

    In this correspondence, a fast motion estimation algorithm based on the successive elimination algorithm (SEA) of Li and Salari (1995) is studied. This fast motion estimation algorithm finds the same displacement vectors as the exhaustive search algorithm with a reduced computational load. A modified fast motion estimation algorithm, which introduces negligible distortion into a transform coder but provides a further reduction in computational load, is also developed. Implementation issues are discussed and compared. Results show that the number of search operations can be reduced dramatically with the help of fast motion estimation algorithms.
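
    The sketch below illustrates the successive elimination bound: a candidate block can be discarded whenever the difference of block sums already exceeds the best SAD found so far, since that difference lower-bounds the SAD. The frame, block size, and search range are illustrative assumptions.

        # Successive elimination for block matching: |sum(ref) - sum(cand)| <= SAD,
        # so candidates failing that bound never need a full SAD evaluation.
        import numpy as np

        rng = np.random.default_rng(3)
        frame = rng.integers(0, 256, (64, 64)).astype(np.int64)
        ref = frame[20:36, 20:36]          # reference block; searching the same frame keeps the sketch short
        ref_sum = ref.sum()                # in a real coder, block sums come from precomputed running sums

        best_sad, best_mv, tested = np.inf, (0, 0), 0
        for dy in range(-8, 9):
            for dx in range(-8, 9):
                y, x = 20 + dy, 20 + dx
                cand = frame[y:y + 16, x:x + 16]
                if abs(ref_sum - cand.sum()) >= best_sad:
                    continue               # eliminated without computing the SAD
                tested += 1
                sad = np.abs(ref - cand).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        print("best MV:", best_mv, "SAD evaluations:", tested, "of", 17 * 17)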

  • Two-dimensional phase unwrapping using a block least-squares method

    Publication Year: 1999, Page(s): 375 - 386
    Cited by: Papers (25)
    PDF (660 KB)

    We present a block least-squares (BLS) method for two-dimensional (2-D) phase unwrapping. The method works by tessellating the input image into small square blocks with only one phase wrap. These blocks are unwrapped using a simple procedure, and the unwrapped blocks are merged together using one of two proposed block merging algorithms. By specifying a suitable mask, the method can easily handle objects of any shape. The approach is compared with the Ghiglia-Romero (1994) method and the Marroquin-Rivera (1995) method. On synthetic images with different noise levels, the BLS method is shown to be superior, both with respect to the resulting gray values in the unwrapped image and under visual inspection. The method is also shown to successfully unwrap synthetic and real images with shears, fiber-optic interferometry images, and medical magnetic resonance images. We believe the new method has the potential to improve the present quality of phase-unwrapped images across several different image modalities.
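
    A one-dimensional simplification of the block-and-merge idea is sketched below: blocks are unwrapped independently and each is then shifted by the multiple of 2π that best matches its left neighbour. The paper's 2-D blocks, masks, and the two proposed merging algorithms are not modelled; the test signal and block size are assumptions.

        # Split a wrapped 1-D phase signal into blocks, unwrap each block, and
        # merge blocks by resolving the 2*pi ambiguity at each block boundary.
        import numpy as np

        true_phase = np.linspace(0.0, 12.0 * np.pi, 400)
        wrapped = np.angle(np.exp(1j * true_phase))            # wrapped into (-pi, pi]

        block_size = 50
        blocks = [np.unwrap(wrapped[i:i + block_size])
                  for i in range(0, wrapped.size, block_size)]

        merged = blocks[0]
        for blk in blocks[1:]:
            # choose the 2*pi offset that makes this block continuous with the
            # merged signal so far (a least-squares fit over one boundary sample)
            k = np.round((merged[-1] - blk[0]) / (2.0 * np.pi))
            merged = np.concatenate([merged, blk + 2.0 * np.pi * k])

        offset = merged[0] - true_phase[0]                     # global constant is unobservable
        print("max deviation from true phase:",
              float(np.max(np.abs(merged - true_phase - offset))))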

  • Superresolution restoration of an image sequence: adaptive filtering approach

    Publication Year: 1999, Page(s): 387 - 395
    Cited by: Papers (95) | Patents (25)
    PDF (1064 KB)

    This paper presents a new method, based on adaptive filtering theory, for superresolution restoration of continuous image sequences. The proposed methodology employs least-squares (LS) estimators that adapt in time, built on adaptive filters such as least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion, both of which are assumed known. The proposed approach is shown to have relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.
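
    The sketch below shows only the LMS adaptation at the core of this family of methods, identifying an unknown linear observation model from streaming samples; the step size and observation model are assumptions, and the full superresolution restoration is not reproduced.

        # Plain LMS: iteratively refine weight estimates from (input, observation)
        # pairs using the update w <- w + mu * error * input.
        import numpy as np

        rng = np.random.default_rng(4)
        true_weights = np.array([0.25, 0.5, 0.25])       # unknown blur to identify
        w = np.zeros(3)
        mu = 0.05                                        # LMS step size (assumed)

        for _ in range(5000):
            x = rng.normal(size=3)                       # input window
            d = true_weights @ x + rng.normal(0.0, 0.01) # noisy observation
            e = d - w @ x                                # prediction error
            w += mu * e * x                              # LMS update
        print("estimated weights:", np.round(w, 3))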

  • Reliable transmission of high-quality video over ATM networks

    Publication Year: 1999, Page(s): 361 - 374
    Cited by: Papers (18) | Patents (4)
    PDF (300 KB)

    The development of broadband networks has led to the possibility of a wide variety of new and improved service offerings. Packetized video is likely to be one of the most significant high-bandwidth users of such networks. The transmission of variable bit-rate (VBR) video offers the promise of constant video quality, but is generally accompanied by packet loss, which significantly diminishes this potential. We study a class of error recovery schemes employing forward error-control (FEC) coding to recover from such losses. In particular, we show that a hybrid error recovery strategy involving the use of active FEC in tandem with simple passive error concealment schemes offers very robust performance even under high packet losses. We discuss two different methods of applying FEC to alleviate the problem of packet loss. The conventional method of applying FEC allocates additional bandwidth for channel coding while maintaining a specified average video coding rate. Such an approach suffers performance degradation at high loads, since the bandwidth expansion associated with the use of FEC creates additional congestion that negates its potential benefit. In contrast, our hybrid approach uses a more efficient FEC application technique, which allocates bandwidth for channel coding by throttling the source coder rate (i.e., performing higher compression) while maintaining a fixed overall transmission rate. More specifically, we consider the performance of the hybrid approach where the bandwidth to accommodate the FEC overhead is made available by throttling the source coder rate sufficiently so that the overall rate after application of FEC is identical to that of the original unprotected system. We obtain the operational rate-distortion characteristics of such a scheme employing selected FEC codes. In doing so, we demonstrate the robust performance achieved by appropriate use of FEC under moderate-to-high packet losses in comparison to the unprotected system.
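
    A toy calculation of the residual loss after block erasure protection follows; the (31, 27) code parameters and the independent-loss channel model are assumptions for illustration, not the paper's coding setup.

        # With an (n, k) erasure code, a group of n packets is recoverable as
        # long as at most n - k packets are lost; otherwise the group fails.
        from math import comb

        def p_group_lost(n, k, p):
            # probability that more than n - k of the n packets are erased
            return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
                       for i in range(n - k + 1, n + 1))

        p = 0.05                       # assumed independent packet-loss rate
        print("unprotected packet loss:", p)
        print("residual group loss with (31, 27) protection:",
              round(p_group_lost(31, 27, p), 6))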

  • Blind image restoration by anisotropic regularization

    Publication Year: 1999, Page(s): 396 - 407
    Cited by: Papers (59)
    PDF (988 KB)

    This paper presents anisotropic regularization techniques that exploit the piecewise smoothness of the image and the point spread function (PSF) in order to mitigate the severe lack of information encountered in blind restoration of shift-invariantly and shift-variantly blurred images. The new techniques, which are derived from anisotropic diffusion, adapt both the degree and direction of regularization to the spatial activities and orientations of the image and the PSF. This matches the piecewise smoothness of the image and the PSF, which may be characterized by sharp transitions in magnitude and by the anisotropic nature of these transitions. For shift-variantly blurred images, whose underlying PSFs may differ from one pixel to another, we parameterize the PSF and then apply the anisotropic regularization techniques; this is demonstrated for linear motion blur and out-of-focus blur. Alternating minimization is used to reduce the computational load and algorithmic complexity.
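
    The sketch below illustrates the anisotropic-diffusion behaviour from which such regularizers are derived: smoothing strength falls off across strong gradients, so edges survive while flat-region noise is reduced. The conduction function, constants, and test image are assumptions; the blind-restoration machinery is not modelled.

        # A few explicit anisotropic (Perona-Malik style) diffusion steps.
        import numpy as np

        def diffuse(img, n_iter=20, kappa=0.3, step=0.2):
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # differences towards the four neighbours (periodic borders)
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # conduction coefficients: small where the gradient is large
                g = lambda d: np.exp(-(d / kappa) ** 2)
                u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u

        rng = np.random.default_rng(5)
        img = np.zeros((64, 64)); img[:, 32:] = 1.0            # a step edge
        noisy = img + rng.normal(0.0, 0.1, img.shape)
        print("flat-region noise std before:", round(float(noisy[:, 2:16].std()), 3))
        print("flat-region noise std after: ", round(float(diffuse(noisy)[:, 2:16].std()), 3))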

  • Hybrid estimation of navigation parameters from aerial image sequence

    Publication Year: 1999, Page(s): 429 - 435
    Cited by: Papers (7)
    PDF (544 KB)

    This work presents a hybrid method for estimating navigation parameters from sequential aerial images, where the navigation parameters represent the position and velocity of an aircraft for autonomous navigation. The proposed hybrid system is composed of two parts: relative position estimation and absolute position estimation. Computer simulation with two different sets of real aerial image sequences shows the effectiveness of the proposed hybrid parameter estimation algorithm.

  • Intrinsic multiscale representation using optical flow in the scale-space

    Publication Year: 1999, Page(s): 444 - 447
    Cited by: Papers (5)
    PDF (292 KB)

    An optical flow exists in the scale-space if the multiscale representation of an image is viewed as an ordinary image sequence in the time domain. This technique can be used to solve the ill-posed tracking problem in the scale-space.
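
    As a small illustration of the viewpoint above, the sketch below builds a Gaussian scale-space stack and computes the brightness-constancy derivative terms across scale as if the stack were a frame sequence; the scales and test image are assumptions, and no flow solver is included.

        # Treat a Gaussian scale-space stack like an image sequence: the
        # "temporal" derivative is taken across the scale axis.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(6)
        image = rng.random((64, 64))
        sigmas = [1.0, 1.5, 2.0, 2.5, 3.0]
        stack = np.stack([ndimage.gaussian_filter(image, s) for s in sigmas])

        Ix = np.gradient(stack, axis=2)      # spatial derivatives per scale slice
        Iy = np.gradient(stack, axis=1)
        It = np.gradient(stack, axis=0)      # derivative across scale ("time")
        print("stack shape:", stack.shape, "constraint-term shape:", Ix.shape)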

  • Texture anisotropy in 3-D images

    Publication Year: 1999, Page(s): 346 - 360
    Cited by: Papers (19)
    PDF (1992 KB)

    Two approaches to the characterization of three-dimensional (3-D) textures are presented: one based on gradient vectors and one based on generalized co-occurrence matrices. They are investigated with the help of simulated data for their behavior in the presence of noise and for various values of the parameters on which they depend. They are also applied to several medical volume images characterized by the presence of microtextures, and their potential as diagnostic tools and as tools for quantifying and monitoring the progress of various pathologies is discussed. No firm medical conclusions can be drawn, as not enough clinical data are available. The gradient-based method appears to be more appropriate for the characterization of microtextures; it also shows more consistent behavior as a descriptor of pathologies than the generalized co-occurrence matrix approach.
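
    The sketch below illustrates the gradient-vector approach in a minimal form: the eigenvalues of the scatter matrix of 3-D gradient vectors summarize how directional a volume texture is. The synthetic volume and the particular anisotropy index are assumptions, not the paper's measures.

        # Summarise the orientation distribution of 3-D gradients; nearly equal
        # eigenvalues of the scatter matrix indicate an isotropic texture.
        import numpy as np

        rng = np.random.default_rng(7)
        volume = rng.random((32, 32, 32))
        volume += np.linspace(0.0, 8.0, 32)[None, None, :]   # add a directional trend

        gz, gy, gx = np.gradient(volume)
        g = np.stack([gz.ravel(), gy.ravel(), gx.ravel()], axis=1)
        scatter = g.T @ g / g.shape[0]
        eigvals = np.sort(np.linalg.eigvalsh(scatter))[::-1]
        print("eigenvalues (largest first):", np.round(eigvals, 4))
        print("anisotropy index (1 - min/max):", round(float(1 - eigvals[-1] / eigvals[0]), 3))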

  • The application of the enhanced Hoshen-Kopelman algorithm for processing unbounded images

    Publication Year: 1999, Page(s): 421 - 425
    PDF (188 KB)

    The enhanced Hoshen-Kopelman (EHK) algorithm for the analysis of connected components in images that are unbounded in one of their dimensions is introduced. The algorithm characterizes, in a single pass, the shapes of all the connected components in a multiple-class image by computing the spatial moments, the area, the boundary, and the bounding boxes of the connected components. The algorithm is applied to a real-time surface defect simulation and to a Landsat image analysis, and its performance is compared with that of related algorithms.
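
    A minimal single-pass union-find labelling in the spirit of the original Hoshen-Kopelman algorithm is sketched below, accumulating component areas on the fly; the EHK extensions for unbounded images, and the moments, boundaries, and bounding boxes, are not included.

        # 4-connected labelling in one raster scan with union-find bookkeeping.
        import numpy as np

        def find(parent, i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]     # path halving
                i = parent[i]
            return i

        def label_and_measure(binary):
            h, w = binary.shape
            labels = np.zeros((h, w), dtype=int)
            parent, area, next_label = [0], [0], 1
            for y in range(h):
                for x in range(w):
                    if not binary[y, x]:
                        continue
                    up = find(parent, labels[y - 1, x]) if y and labels[y - 1, x] else 0
                    left = find(parent, labels[y, x - 1]) if x and labels[y, x - 1] else 0
                    if up == 0 and left == 0:
                        parent.append(next_label); area.append(0)
                        lab = next_label; next_label += 1
                    else:
                        lab = min(l for l in (up, left) if l)
                        if up and left and up != left:
                            parent[max(up, left)] = lab     # merge the two runs
                    labels[y, x] = lab
                    area[lab] += 1
            totals = {}                                     # consolidate areas onto roots
            for lab in range(1, next_label):
                root = find(parent, lab)
                totals[root] = totals.get(root, 0) + area[lab]
            return totals

        img = np.zeros((6, 6), dtype=bool)
        img[1:3, 1:3] = True; img[4, :] = True
        print(label_and_measure(img))              # two components, areas 4 and 6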

  • Variable-length constrained-storage tree-structured vector quantization

    Publication Year: 1999, Page(s): 321 - 331
    Cited by: Papers (4)
    PDF (508 KB)

    Constrained-storage vector quantization (CSVQ), introduced by Chan and Gersho (1990, 1991), allows for the stagewise design of balanced tree-structured residual vector quantization codebooks with low encoding and storage complexities. On the other hand, it has been established by Makhoul et al. (1985), Riskin et al. (1991), and Mahesh et al. (IEEE Trans. Inform. Theory, vol. 41, pp. 917-930, 1995) that a variable-length tree-structured vector quantizer (VLTSVQ) yields better coding performance than a balanced tree-structured vector quantizer, and may even outperform a full-search vector quantizer due to the nonuniform distribution of rate among the subsets of its input space. The variable-length constrained-storage tree-structured vector quantization (VLCS-TSVQ) algorithm presented in this paper utilizes the CSVQ concept of codebook sharing by multiple vector sources to greedily grow an unbalanced tree-structured residual vector quantizer with constrained storage. Simulations on test sets from various synthetic one-dimensional (1-D) sources and on real-world images demonstrate that the performance of VLCS-TSVQ, whose codebook storage complexity varies linearly with rate, can come very close to that of the greedy-growth VLTSVQ of Riskin et al. and Mahesh et al. The dramatically reduced size of the overall codebook allows the transmission of the code vector probabilities as side information for source-adaptive entropy coding.
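
    The sketch below shows one greedy growth step for an unbalanced tree-structured quantizer: the leaf whose two-codeword refinement reduces distortion the most is split. The codebook sharing and storage constraint of VLCS-TSVQ are not modelled, and the training data, split count, and initialization are assumptions.

        # Greedy growth of an unbalanced TSVQ: repeatedly split the best leaf
        # using an LBG-style two-codeword refinement.
        import numpy as np

        def two_means(x, n_iter=10):
            c = np.array([x.mean(0) * 0.99, x.mean(0) * 1.01])   # perturbed split init
            for _ in range(n_iter):
                assign = np.argmin(((x[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
                for j in (0, 1):
                    if np.any(assign == j):
                        c[j] = x[assign == j].mean(0)
            return c, assign

        rng = np.random.default_rng(8)
        data = np.concatenate([rng.normal(m, 0.3, (200, 2)) for m in (0.0, 2.0, 4.0)])

        leaves = [data]                          # start with a single root cell
        for _ in range(2):                       # grow two splits greedily
            gains, splits = [], []
            for leaf in leaves:
                before = ((leaf - leaf.mean(0)) ** 2).sum()
                c, assign = two_means(leaf)
                after = sum(((leaf[assign == j] - c[j]) ** 2).sum() for j in (0, 1))
                gains.append(before - after); splits.append(assign)
            k = int(np.argmax(gains))            # leaf with the largest distortion drop
            assign = splits[k]
            leaf = leaves.pop(k)
            leaves += [leaf[assign == 0], leaf[assign == 1]]
        print("leaf sizes after greedy growth:", [len(l) for l in leaves])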


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003