
IEEE Transactions on Image Processing

Issue 4 • April 2008

  • Table of contents

    Publication Year: 2008, Page(s): C1 - C4
  • IEEE Transactions on Image Processing publication information

    Publication Year: 2008, Page(s): C2
  • The Golden Age of Imaging

    Publication Year: 2008, Page(s): 441 - 442

    Imaging research, development, and applications are growing at an astounding rate, and image-processing researchers can take credit for having created many of the enabling technologies that have fueled this growth. The development of image and video coding standards, such as JPEG and MPEG, has enabled the web as a center for commerce and entertainment. Ubiquitous technologies, such as DirecTV, DVDs, Blu-ray, and TiVo, depend on these standards; streaming Internet video services, like iTunes' recently announced movie rental feature, are well on their way to replacing traditional analog broadcast video. Other consumer products, such as home printers, digital cameras, and mobile video devices, have each been major disruptive products enabled by fundamental innovation from image-processing researchers.

  • A Discriminative Approach for Wavelet Denoising

    Publication Year: 2008, Page(s): 443 - 457
    Cited by: Papers (22)

    This paper suggests a discriminative approach for wavelet denoising in which a set of mapping functions (MFs) is applied to the transform coefficients in an attempt to produce a noise-free image. As opposed to descriptive approaches, modeling image or noise priors is not required here; the MFs are learned directly from an ensemble of example images using least-squares fitting. The suggested scheme generates a novel set of MFs that are essentially different from traditional soft/hard thresholding in the over-complete case. These MFs are demonstrated to achieve performance comparable to state-of-the-art denoising approaches. Additionally, the framework enables seamless customization of the shrinkage operation to a new set of restoration problems that were not previously addressed with shrinkage techniques, such as deblurring, JPEG artifact removal, and various types of additive noise that are not necessarily white and Gaussian.
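
    A minimal sketch of the discriminative idea: learn a shrinkage-like mapping function by least squares from example (noisy, clean) coefficient pairs. The Laplacian toy model and hat-function basis below are illustrative assumptions, not the paper's actual setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "training ensemble": clean wavelet coefficients (heavy-tailed),
    # observed under additive Gaussian noise.
    clean = rng.laplace(scale=1.0, size=50_000)
    noisy = clean + rng.normal(scale=0.5, size=clean.size)

    # Represent the mapping function (MF) as a linear combination of
    # piecewise-linear "hat" basis functions on a fixed grid of knots.
    knots = np.linspace(-6.0, 6.0, 25)

    def hat_features(x, knots):
        # Each column is a triangular basis function centered on one knot.
        w = knots[1] - knots[0]
        return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / w)

    # Least-squares fit: choose the MF so that MF(noisy) ~ clean.
    A = hat_features(noisy, knots)
    weights, *_ = np.linalg.lstsq(A, clean, rcond=None)

    # Apply the learned MF to fresh noisy coefficients (the denoising step).
    test = rng.laplace(scale=1.0, size=1_000)
    test_noisy = test + rng.normal(scale=0.5, size=test.size)
    denoised = hat_features(test_noisy, knots) @ weights
    print("MSE before:", np.mean((test_noisy - test) ** 2),
          "after:", np.mean((denoised - test) ** 2))
    ```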

  • Maximum Likelihood Wavelet Density Estimation With Applications to Image and Shape Matching

    Publication Year: 2008, Page(s): 458 - 468
    Cited by: Papers (11)

    Density estimation for observational data plays an integral role in a broad spectrum of applications, e.g., statistical data analysis and information-theoretic image registration. Of late, wavelet-based density estimators have gained in popularity due to their ability to approximate a large class of functions, adapting well to difficult situations such as when densities exhibit abrupt changes. The decision to work with wavelet density estimators brings with it theoretical considerations (e.g., non-negativity, integrability) and empirical issues (e.g., computation of basis coefficients) that must be addressed in order to obtain a bona fide density. In this paper, we present a new method to accurately estimate a non-negative density that directly addresses many of the problems in practical wavelet density estimation. We cast the estimation procedure in a maximum likelihood framework which estimates the square root of the density, √p, allowing us to obtain the natural non-negative density representation p = (√p)². Analysis of this method brings to light a remarkable theoretical connection with the Fisher information of the density and, consequently, leads to an efficient constrained optimization procedure to estimate the wavelet coefficients. We illustrate the effectiveness of the algorithm by evaluating its performance on mutual information-based image registration, shape point-set alignment, and empirical comparisons to known densities. The method is also compared to fixed and variable bandwidth kernel density estimators.
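
    The non-negativity and integrability constraints are easy to see in a toy version. The sketch below, a hypothetical simplification, expands √p in a Haar scaling basis on [0, 1), for which the constrained maximum-likelihood problem happens to have a closed form; general wavelet bases require the iterative unit-sphere optimization the paper develops.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    samples = rng.beta(2, 5, size=2_000)

    # Expand sqrt(p) in an orthonormal Haar scaling basis on [0, 1):
    # phi_k(x) = sqrt(m) on [k/m, (k+1)/m).  With unit-norm coefficients c,
    # p = (sum_k c_k phi_k)^2 is automatically non-negative and integrates
    # to one -- exactly the two constraints highlighted in the abstract.
    m = 16
    bins = np.minimum((samples * m).astype(int), m - 1)
    counts = np.bincount(bins, minlength=m)

    # For this basis, maximizing sum_k counts_k * log(m * c_k^2) subject to
    # ||c|| = 1 has the closed-form solution c_k = sqrt(counts_k / n).
    c = np.sqrt(counts / counts.sum())
    density = m * c ** 2                       # p evaluated on each bin
    print("non-negative:", bool((density >= 0).all()),
          "integral:", density.sum() / m)      # exactly 1 by construction
    ```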

  • Wavelet-Based Bayesian Image Estimation: From Marginal and Bivariate Prior Models to Multivariate Prior Models

    Publication Year: 2008, Page(s): 469 - 481
    Cited by: Papers (2)

    Prior models play an important role in the wavelet-based Bayesian image estimation problem. Although it is well known that a residual dependency structure always remains among the wavelet coefficients of natural images, only a few multivariate prior models with a closed parametric form are available in the literature. In this paper, we develop new multivariate prior models that not only match well with the observed statistics of the wavelet coefficients of natural images, but also have a simple parametric form. These prior models are very effective for Bayesian image estimation and lead to improved estimation performance over related earlier techniques.
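
    For a concrete taste of this model family, here is the classical bivariate shrinkage rule of Sendur and Selesnick: a closed-form MAP estimator under one particular parent-child prior. It is a well-known precursor, not the paper's new multivariate priors.

    ```python
    import numpy as np

    def bivariate_shrink(y, y_parent, sigma_n, sigma):
        # MAP estimate of a wavelet coefficient given its parent, under a
        # circularly symmetric bivariate prior (Sendur-Selesnick rule).
        r = np.sqrt(y ** 2 + y_parent ** 2)
        gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
        return gain / np.maximum(r, 1e-12) * y

    # Toy usage: child/parent coefficient pairs corrupted by noise.
    rng = np.random.default_rng(2)
    child, parent = rng.laplace(size=(2, 10_000))
    noisy_child = child + rng.normal(scale=0.4, size=child.shape)
    noisy_parent = parent + rng.normal(scale=0.4, size=parent.shape)
    est = bivariate_shrink(noisy_child, noisy_parent, 0.4, child.std())
    print("MSE:", np.mean((noisy_child - child) ** 2),
          "->", np.mean((est - child) ** 2))
    ```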

  • SURE-LET Multichannel Image Denoising: Interscale Orthonormal Wavelet Thresholding

    Publication Year: 2008, Page(s): 482 - 492
    Cited by: Papers (32)

    We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach, where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein's unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients at the same location in the other channels, as well as on their parents in the coarser wavelet subband. A nonredundant, orthonormal wavelet transform is first applied to the noisy data, followed by (subband-dependent) vector-valued thresholding of the individual multichannel wavelet coefficients, which are finally brought back to the image domain by an inverse wavelet transform. Extensive comparisons with state-of-the-art multiresolution image denoising algorithms indicate that, despite being nonredundant, our algorithm matches the quality of the best redundant approaches while maintaining high computational efficiency and low CPU/memory consumption. An online Java demo illustrates these assertions.
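
    The SURE-LET mechanics reduce to solving a small linear system. The sketch below is a scalar, single-channel caricature with two hand-picked thresholding functions; the paper's estimator additionally couples channels and parent coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.laplace(scale=1.0, size=100_000)   # "clean" orthonormal coefficients
    sigma = 0.5
    y = x + rng.normal(scale=sigma, size=x.size)

    # LET: the denoiser is a linear combination a1*t1(y) + a2*t2(y).
    T = 2.0 * sigma
    t1, t2 = y, y * np.exp(-(y ** 2) / (2 * T ** 2))
    d1 = np.ones_like(y)                                           # t1'(y)
    d2 = np.exp(-(y ** 2) / (2 * T ** 2)) * (1 - y ** 2 / T ** 2)  # t2'(y)

    # SURE: ||f(y)||^2 - 2<y, f(y)> + 2*sigma^2*div(f) estimates the risk up
    # to a constant; minimizing over the weights gives a 2x2 linear system.
    theta, deriv = np.stack([t1, t2]), np.stack([d1, d2])
    M = theta @ theta.T
    c = theta @ y - sigma ** 2 * deriv.sum(axis=1)
    a = np.linalg.solve(M, c)

    est = a @ theta
    print("weights:", a, "MSE:", np.mean((y - x) ** 2),
          "->", np.mean((est - x) ** 2))
    ```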

  • Nonrigid Registration of 3-D Multichannel Microscopy Images of Cell Nuclei

    Publication Year: 2008, Page(s): 493 - 499
    Cited by: Papers (10)

    We present an intensity-based nonrigid registration approach for the normalization of 3-D multichannel microscopy images of cell nuclei. A main problem with cell nuclei images is that the intensity structure differs greatly from one nucleus to another; thus, an intensity-based registration scheme cannot be used directly. Instead, we first perform a segmentation of the images from the cell nucleus channel, smooth the resulting images with a Gaussian filter, and then apply an intensity-based registration algorithm. The obtained transformation is applied to the images from the nucleus channel as well as to the images from the other channels. To improve the convergence rate of the algorithm, we propose an adaptive step-length optimization scheme and also employ a multiresolution scheme. Our approach has been successfully applied to 2-D cell-like synthetic images, 3-D phantom images, and 3-D multichannel microscopy images representing different chromosome territories and gene regions. We also describe an extension of our approach for the registration of (4-D) image series of moving cell nuclei.
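
    The segment-smooth-register pipeline can be illustrated with a drastically simplified stand-in: translation-only registration by phase correlation instead of the paper's nonrigid scheme.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def translation_by_phase_correlation(fixed, moving):
        # The peak of the normalized cross-power spectrum gives the shift
        # mapping `fixed` onto `moving` (periodic boundaries assumed).
        cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(fixed))
        corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return tuple(p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape))

    # Pipeline from the abstract: segment the nucleus channel, smooth the
    # binary masks with a Gaussian, register the smoothed masks, then reuse
    # the recovered transform for all channels.
    rng = np.random.default_rng(4)
    fixed = np.zeros((64, 64)); fixed[20:40, 24:44] = 1.0   # a "nucleus"
    moving = np.roll(fixed, (5, -3), axis=(0, 1))           # shifted copy
    fixed = fixed + 0.1 * rng.normal(size=fixed.shape)
    moving = moving + 0.1 * rng.normal(size=moving.shape)

    seg_f = gaussian_filter((fixed > 0.5).astype(float), sigma=2.0)
    seg_m = gaussian_filter((moving > 0.5).astype(float), sigma=2.0)
    print("recovered shift:",
          translation_by_phase_correlation(seg_f, seg_m))  # ~ (5, -3)
    ```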

  • Weighted Adaptive Lifting-Based Wavelet Transform for Image Coding

    Publication Year: 2008, Page(s): 500 - 511
    Cited by: Papers (28)

    In this paper, a new weighted adaptive lifting (WAL)-based wavelet transform is presented. The proposed WAL approach is designed to solve problems with the previous adaptive directional lifting (ADL) approach, such as the mismatch between the predict and update steps, interpolation favoring only the horizontal or vertical direction, and interpolation filter coefficients that are invariant across all images. The main contribution of the proposed approach consists of two parts: one is the improved weighted lifting, which maintains the consistency between the predict and update steps as far as possible while preserving perfect reconstruction; the other is the directional adaptive interpolation, which improves the orientation property of the interpolated image and adapts to the statistical properties of each image. Experimental results show that the proposed WAL-based wavelet transform for image coding outperforms the conventional lifting-based wavelet transform by up to 3.06 dB in PSNR, with significant improvement in subjective quality also observed. Compared with the ADL-based wavelet transform, an improvement of up to 1.22 dB in PSNR is reported.
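
    For reference, here is the non-adaptive lifting baseline (LeGall 5/3 predict and update steps) that ADL and WAL generalize; perfect reconstruction holds by construction because each lifting step is inverted exactly.

    ```python
    import numpy as np

    def lifting_53_forward(x):
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        d = odd - 0.5 * (even + np.roll(even, -1))   # predict -> details
        s = even + 0.25 * (d + np.roll(d, 1))        # update  -> approximation
        return s, d

    def lifting_53_inverse(s, d):
        even = s - 0.25 * (d + np.roll(d, 1))        # undo the update step
        odd = d + 0.5 * (even + np.roll(even, -1))   # undo the predict step
        x = np.empty(2 * s.size)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.random.default_rng(5).normal(size=64)
    s, d = lifting_53_forward(x)
    print("perfect reconstruction:",
          np.allclose(lifting_53_inverse(s, d), x))
    ```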

  • Universal Image Compression Using Multiscale Recurrent Patterns With Adaptive Probability Model

    Publication Year: 2008, Page(s): 512 - 527
    Cited by: Papers (9) | Patents (1)

    In this work, we further develop the multidimensional multiscale parser (MMP) algorithm, a recently proposed universal lossy compression method that has been successfully applied to images as well as other types of data, such as video and ECG signals. MMP is based on approximate multiscale pattern matching, encoding blocks of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using expanded and contracted versions of concatenations of previously encoded blocks. This means that MMP builds its own dictionary while the input data is being encoded, using segments of the input itself, which lends it a universal flavor. Its flexible structure allows data-specific extensions to be added easily to the base algorithm. Often, the signals to be encoded belong to a narrow class, such as that of smooth images. In these cases, one expects that some improvement can be achieved by introducing knowledge about the source to be encoded. In this paper, we use the assumed smoothness of the source to create good context models for the probability of blocks in the dictionary. These probability models are estimated by considering smoothness constraints across causal block boundaries. In addition, we refine the obtained probability models by also exploiting knowledge of the original scale of the included blocks during the dictionary updating process. Simulation results show that these developments allow significant improvements over the original MMP for smooth images, while keeping its state-of-the-art performance for more complex, less smooth ones, thus strengthening MMP's universal character.
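
    A toy 1-D rendition of the MMP flavor: blocks are matched against a dictionary whose entries are resampled across scales and grown from concatenations of decoded blocks. The real algorithm's segmentation tree, rate-distortion optimization, and adaptive probability model are all omitted here.

    ```python
    import numpy as np

    def resample(v, n):
        # Expand or contract a dictionary pattern to length n (the "scales").
        return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(v)), v)

    def mmp_encode(signal, block=8):
        dictionary = [np.zeros(2), np.ones(2)]     # tiny initial dictionary
        decoded, indices, prev = [], [], None
        for i in range(0, len(signal) - block + 1, block):
            target = signal[i:i + block]
            cands = [resample(d, block) for d in dictionary]
            k = int(np.argmin([np.sum((c - target) ** 2) for c in cands]))
            indices.append(k)
            decoded.append(cands[k])
            if prev is not None:  # grow the dictionary from concatenations
                dictionary.append(np.concatenate([prev, cands[k]]))
            prev = cands[k]
        return indices, np.concatenate(decoded), dictionary

    rng = np.random.default_rng(6)
    sig = np.repeat(rng.normal(size=8), 8)         # piecewise-constant input
    idx, approx, dic = mmp_encode(sig)
    print("blocks:", len(idx), "dictionary size:", len(dic),
          "MSE:", np.mean((approx - sig) ** 2))
    ```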

  • Fast Full-Search Equivalent Template Matching by Enhanced Bounded Correlation

    Publication Year: 2008, Page(s): 528 - 538
    Cited by: Papers (24)

    We propose a novel algorithm, referred to as enhanced bounded correlation (EBC), that significantly reduces the number of computations required to carry out template matching based on normalized cross correlation (NCC) and yields exactly the same result as the full-search algorithm. The algorithm relies on the concept of bounding the matching function: finding an efficiently computable upper bound of the NCC rapidly prunes those candidates that cannot provide a better NCC score than the current best match. In this framework, we apply a succession of increasingly tighter upper-bounding functions based on the Cauchy-Schwarz inequality. Moreover, by including an online parameter-prediction step in EBC, we obtain a parameter-free algorithm that, in most cases, affords computational advantages very similar to those attainable by optimal offline parameter tuning. Experimental results show that the proposed algorithm can significantly accelerate a full-search-equivalent template matching process and outperforms state-of-the-art methods.
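
    The pruning idea in a simplified scalar form: accumulate the correlation row by row and abandon a candidate once a Cauchy-Schwarz bound on its final NCC drops below the current best score. EBC's actual cascade of increasingly tighter bounds and its parameter prediction are more elaborate.

    ```python
    import numpy as np

    def match_ncc_pruned(image, template):
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.linalg.norm(t)
        # tail[r] = norm of template rows r.. (what the unseen rows can add)
        tail = np.sqrt(np.cumsum((t ** 2).sum(axis=1)[::-1])[::-1])
        best, best_pos = -1.0, None
        H, W = image.shape
        for y in range(H - th + 1):
            for x in range(W - tw + 1):
                w = image[y:y + th, x:x + tw]
                w = w - w.mean()
                w_norm = np.linalg.norm(w) + 1e-12
                partial, alive = 0.0, True
                for r in range(th):
                    partial += float(t[r] @ w[r])
                    # Remaining rows add at most tail * ||W|| (Cauchy-Schwarz).
                    if r + 1 < th and \
                       (partial + tail[r + 1] * w_norm) / (t_norm * w_norm) < best:
                        alive = False
                        break
                if alive and partial / (t_norm * w_norm) > best:
                    best, best_pos = partial / (t_norm * w_norm), (y, x)
        return best_pos, best

    rng = np.random.default_rng(7)
    img = rng.normal(size=(48, 48))
    print(match_ncc_pruned(img, img[20:28, 12:20].copy()))  # ((20, 12), ~1.0)
    ```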

  • A Fast Thresholded Landweber Algorithm for Wavelet-Regularized Multidimensional Deconvolution

    Publication Year: 2008, Page(s): 539 - 549
    Cited by: Papers (50)

    We present a fast variational deconvolution algorithm that minimizes a quadratic data term subject to a regularization on the ℓ1-norm of the wavelet coefficients of the solution. Previously available methods have essentially consisted of alternating between a Landweber iteration and a wavelet-domain soft-thresholding operation. While they have the advantage of simplicity, they are known to converge slowly. By expressing the cost functional in a Shannon wavelet basis, we are able to decompose the problem into a series of subband-dependent minimizations. In particular, this allows for larger (subband-dependent) step sizes and threshold levels than the previous method, which significantly improves the convergence properties of the algorithm. We demonstrate a speed-up of one order of magnitude in practical situations. This makes wavelet-regularized deconvolution more widely accessible, even for applications with strong limitations on computational complexity. We present promising results in 3-D deconvolution microscopy, where the size of typical data sets does not permit more than a few tens of iterations.
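
    The baseline being accelerated is the thresholded Landweber (ISTA) iteration. Below is a minimal sketch with a single-level Haar transform and one global step size and threshold, i.e., deliberately without the subband-dependent tuning that provides the paper's speed-up.

    ```python
    import numpy as np

    def haar2(x):
        # One level of the 2-D orthonormal Haar transform.
        a = (x[0::2] + x[1::2]) / np.sqrt(2); d = (x[0::2] - x[1::2]) / np.sqrt(2)
        return ((a[:, 0::2] + a[:, 1::2]) / np.sqrt(2),
                (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2),
                (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2),
                (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2))

    def ihaar2(ca, ch, cv, cd):
        a = np.empty((ca.shape[0], 2 * ca.shape[1])); d = np.empty_like(a)
        a[:, 0::2], a[:, 1::2] = (ca + ch) / np.sqrt(2), (ca - ch) / np.sqrt(2)
        d[:, 0::2], d[:, 1::2] = (cv + cd) / np.sqrt(2), (cv - cd) / np.sqrt(2)
        x = np.empty((2 * a.shape[0], a.shape[1]))
        x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        return x

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    # Simulated observation y = H x + n, with H a circular Gaussian blur.
    rng = np.random.default_rng(8)
    x_true = np.zeros((64, 64)); x_true[16:48, 16:48] = 1.0
    g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
    kern = np.zeros((64, 64)); kern[:7, :7] = np.outer(g, g) / np.outer(g, g).sum()
    K = np.fft.fft2(np.roll(kern, (-3, -3), axis=(0, 1)))
    H = lambda u: np.fft.ifft2(K * np.fft.fft2(u)).real
    Ht = lambda u: np.fft.ifft2(np.conj(K) * np.fft.fft2(u)).real
    y = H(x_true) + 0.01 * rng.normal(size=x_true.shape)

    # Thresholded Landweber: gradient step, then soft-threshold the detail
    # subbands in the wavelet domain.
    x, tau = np.zeros_like(y), 0.005
    for _ in range(100):
        ca, ch, cv, cd = haar2(x + Ht(y - H(x)))
        x = ihaar2(ca, soft(ch, tau), soft(cv, tau), soft(cd, tau))
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```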

  • Deblurring Using Regularized Locally Adaptive Kernel Regression

    Publication Year: 2008, Page(s): 550 - 563
    Cited by: Papers (39)

    Kernel regression is an effective tool for a variety of image processing tasks, such as denoising and interpolation. In this paper, we extend the use of kernel regression to deblurring applications. In some earlier examples in the literature, such nonparametric deblurring was suboptimally performed in two sequential steps, namely denoising followed by deblurring. In contrast, our optimal solution jointly denoises and deblurs images. The proposed algorithm takes advantage of an effective and novel image prior that generalizes some of the most popular regularization techniques in the literature. Experimental results demonstrate the effectiveness of our method.
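
    For background, classical zeroth-order (Nadaraya-Watson) kernel regression used as a denoiser; the paper builds on higher-order, data-adaptive kernels and couples them with a blur model, none of which appears in this sketch.

    ```python
    import numpy as np

    def nadaraya_watson_denoise(img, radius=3, h=1.5):
        # Each output pixel is a Gaussian-kernel-weighted average of its
        # neighbors (zeroth-order kernel regression with a fixed kernel).
        H, W = img.shape
        pad = np.pad(img, radius, mode='reflect')
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        w = np.exp(-(xx ** 2 + yy ** 2) / (2 * h ** 2))
        out = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += w[dy + radius, dx + radius] * \
                       pad[radius + dy:radius + dy + H,
                           radius + dx:radius + dx + W]
        return out / w.sum()

    rng = np.random.default_rng(9)
    clean = np.tile(np.sin(np.linspace(0, 6, 64)), (64, 1))
    noisy = clean + 0.3 * rng.normal(size=clean.shape)
    den = nadaraya_watson_denoise(noisy)
    print("MSE:", np.mean((noisy - clean) ** 2),
          "->", np.mean((den - clean) ** 2))
    ```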

  • GAFFE: A Gaze-Attentive Fixation Finding Engine

    Publication Year: 2008, Page(s): 564 - 573
    Cited by: Papers (40)

    The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' points of gaze can provide insight into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and the bandpass outputs of both luminance and contrast. We discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
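
    Two of the four features, mean luminance and RMS contrast of image patches, are straightforward to compute and compare at fixated versus random locations; the bandpass variants require an extra filtering stage. The image and "fixation" below are synthetic.

    ```python
    import numpy as np

    def patch_features(img, centers, radius=16):
        # (mean luminance, RMS contrast) for a patch around each center.
        feats = []
        for (y, x) in centers:
            p = img[max(0, y - radius):y + radius,
                    max(0, x - radius):x + radius]
            lum = p.mean()
            feats.append((lum, np.sqrt(np.mean((p - lum) ** 2))))
        return np.array(feats)

    rng = np.random.default_rng(10)
    img = rng.random((256, 256))
    img[100:140, 100:140] *= 3.0                  # a high-contrast region
    fixations = [(120, 120)]                      # pretend a human looked here
    random_pts = [tuple(rng.integers(20, 236, size=2)) for _ in range(50)]
    print("fixation (lum, contrast):", patch_features(img, fixations)[0])
    print("random mean:             ", patch_features(img, random_pts).mean(axis=0))
    ```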

  • Topology Preserving Non-negative Matrix Factorization for Face Recognition

    Publication Year: 2008, Page(s): 574 - 584
    Cited by: Papers (26)

    In this paper, a novel topology-preserving non-negative matrix factorization (TPNMF) method is proposed for face recognition. We derive the TPNMF model from the original NMF algorithm by preserving local topology structure. TPNMF is based on minimizing the constrained gradient distance in the high-dimensional space. Compared with the L2 distance, the gradient distance is able to reveal the latent manifold structure of face patterns. By using the TPNMF decomposition, the high-dimensional face space is transformed into a local-topology-preserving subspace for face recognition. In comparison with PCA, LDA, and original NMF, which search only for the Euclidean structure of face space, the proposed TPNMF finds an embedding that preserves local topology information, such as edges and texture. Theoretical analysis and the derivations given also validate the properties of TPNMF. Experimental results on three different databases, containing more than 12,000 face images under varying lighting, facial expression, and pose, show that the proposed TPNMF approach provides a better representation of face patterns and achieves higher recognition rates than NMF.
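
    The scaffolding TPNMF builds on is ordinary NMF. Below, for reference, is a plain Lee-Seung multiplicative-update factorization; TPNMF replaces this Euclidean objective with the topology-preserving, gradient-distance-constrained one described in the abstract.

    ```python
    import numpy as np

    def nmf(V, r, iters=200, eps=1e-9, seed=11):
        # Lee-Seung multiplicative updates for min ||V - W H||_F^2
        # with W, H >= 0 elementwise.
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W, H = rng.random((n, r)), rng.random((r, m))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H

    # e.g., rows = pixels, columns = vectorized face images
    V = np.abs(np.random.default_rng(12).normal(size=(50, 40)))
    W, H = nmf(V, r=8)
    print("relative reconstruction error:",
          np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```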

  • Snakuscules

    Publication Year: 2008, Page(s): 585 - 593
    Cited by: Papers (6)

    A snakuscule (a minuscule snake) is the simplest active contour that we were able to design while keeping the quintessence of traditional snakes: an energy term governed by the data, and a regularization term. Our construction is an area-based snake, as opposed to curve-based snakes. It is parameterized by just two points, thus further easing requirements on the optimizer. Despite their ultimate simplicity, snakuscules retain enough versatility to be employed for solving various problems such as cell counting and segmentation of approximately circular features. In this paper, we detail the design process of a snakuscule and illustrate its usefulness through practical examples. We claim that our didactic intentions are well served by the simplicity of snakuscules.
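
    A sketch of the two-point, area-based energy: mean intensity inside the disk whose diameter joins the two points, minus the mean over the surrounding equal-area annulus. The paper's exact energy and normalization differ; this only keeps the idea.

    ```python
    import numpy as np

    def snakuscule_energy(img, p, q):
        # Disk of diameter pq versus the annulus out to sqrt(2)*r,
        # chosen so that both regions have equal area.
        c = (np.asarray(p, float) + np.asarray(q, float)) / 2.0
        r = np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) / 2.0
        yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
        dist = np.hypot(yy - c[0], xx - c[1])
        inner = dist <= r
        ring = (dist > r) & (dist <= np.sqrt(2) * r)
        return img[inner].mean() - img[ring].mean()

    # Bright disk on a dark background: the energy is large when the two
    # points straddle the disk, near zero elsewhere.
    img = np.zeros((64, 64))
    yy, xx = np.mgrid[:64, :64]
    img[np.hypot(yy - 32, xx - 30) <= 10] = 1.0
    print(snakuscule_energy(img, (32, 20), (32, 40)),  # well placed -> ~1
          snakuscule_energy(img, (10, 5), (10, 15)))   # off target  -> ~0
    ```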

  • Activity Modeling Using Event Probability Sequences

    Publication Year: 2008, Page(s): 594 - 607
    Cited by: Papers (18)

    Changes in the motion properties of trajectories provide useful cues for modeling and recognizing human activities. We associate an event with significant changes that are localized in time and space, and represent activities as sequences of such events. The localized nature of events allows for the detection of subtle changes or anomalies in activities. In this paper, we present a probabilistic approach for representing events using the hidden Markov model (HMM) framework. Using trained HMMs for activities, an event probability sequence is computed for every motion trajectory in the training set; it reflects the probability of an event occurring at every time instant. Though the parameters of the trained HMMs depend on viewing direction, the event probability sequences are robust to changes in viewing direction. We describe sufficient conditions for the existence of view invariance. The usefulness of the proposed event representation is illustrated using activity recognition and anomaly detection. Experiments using the indoor University of Central Florida human action dataset, the Carnegie Mellon University Credo Intelligence, Inc., Motion Capture dataset, and the outdoor Transportation Security Administration airport tarmac surveillance dataset show encouraging results.
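
    The flavor of an event probability sequence can be mimicked with a scaled HMM forward pass: the filtered posterior of a designated "event" state over time plays that role. The transition matrix and emission means below are invented for illustration.

    ```python
    import numpy as np

    def forward_posteriors(obs, A, means, var=0.25):
        # Scaled forward algorithm with Gaussian emissions (shared variance);
        # returns the filtered state posteriors at every time instant.
        alpha = np.full(A.shape[0], 1.0 / A.shape[0])
        out = []
        for o in obs:
            emis = np.exp(-0.5 * (o - means) ** 2 / var)
            alpha = emis * (alpha @ A)
            alpha /= alpha.sum()
            out.append(alpha.copy())
        return np.array(out)

    # Two states: 0 = "normal motion", 1 = "event" (e.g., an abrupt turn).
    A = np.array([[0.95, 0.05],
                  [0.50, 0.50]])
    means = np.array([0.0, 2.0])
    obs = np.concatenate([np.zeros(20), 2.0 * np.ones(5), np.zeros(20)])
    obs += 0.3 * np.random.default_rng(13).normal(size=obs.size)
    post = forward_posteriors(obs, A, means)
    print("event probability around t = 20..24:", post[18:27, 1].round(2))
    ```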

  • Bayesian Foreground and Shadow Detection in Uncertain Frame Rate Surveillance Videos

    Publication Year: 2008, Page(s): 608 - 621
    Cited by: Papers (31) | Patents (1)

    In this paper, we propose a new model for foreground and shadow detection in video sequences. The model works without detailed a priori object-shape information and is also appropriate for low and unstable frame rate video sources. Contributions are presented on three key issues: 1) we propose a novel adaptive shadow model and show the improvements over previous approaches in scenes with difficult lighting and coloring effects; 2) we give a novel description of the foreground based on spatial statistics of neighboring pixel values, which enhances the detection of background- or shadow-colored object parts; 3) we show how microstructure analysis can be used in the proposed framework as additional feature components to improve the results. Finally, a Markov random field model is used to enhance the accuracy of the separation. We validate our method on outdoor and indoor sequences, including real surveillance videos and well-known benchmark test sets.
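
    A heuristic per-pixel version of the foreground/shadow split: shadow is modeled as attenuated luminance with nearly unchanged chromaticity. The paper's adaptive shadow model, spatial foreground statistics, and MRF smoothing all go beyond this sketch; every threshold below is an assumption.

    ```python
    import numpy as np

    def classify(frame, bg_mean, bg_std, k=3.0,
                 lum_lo=0.4, lum_hi=0.9, chroma_tol=0.1):
        # Labels: 0 = background, 1 = shadow, 2 = foreground.
        lum, bg_lum = frame.mean(axis=2), bg_mean.mean(axis=2)
        chroma = frame / np.maximum(frame.sum(axis=2, keepdims=True), 1e-6)
        bg_chroma = bg_mean / np.maximum(bg_mean.sum(axis=2, keepdims=True), 1e-6)
        is_bg = (np.abs(frame - bg_mean) < k * bg_std).all(axis=2)
        ratio = lum / np.maximum(bg_lum, 1e-6)
        is_shadow = (~is_bg & (ratio > lum_lo) & (ratio < lum_hi)
                     & (np.abs(chroma - bg_chroma).sum(axis=2) < chroma_tol))
        return np.where(is_bg, 0, np.where(is_shadow, 1, 2))

    rng = np.random.default_rng(14)
    bg = np.full((32, 32, 3), [0.5, 0.6, 0.4]) + 0.01 * rng.normal(size=(32, 32, 3))
    frame = bg.copy()
    frame[8:16, 8:16] *= 0.6                   # cast shadow: darker, same hue
    frame[20:28, 20:28] = [0.9, 0.1, 0.1]      # red object: true foreground
    labels = classify(frame, bg, np.full_like(bg, 0.01))
    print("shadow px:", (labels == 1).sum(), "foreground px:", (labels == 2).sum())
    ```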

  • Customizing Kernel Functions for SVM-Based Hyperspectral Image Classification

    Publication Year: 2008, Page(s): 622 - 629
    Cited by: Papers (36) | Patents (1)

    Previous research applying kernel methods such as support vector machines (SVMs) to hyperspectral image classification has achieved performance competitive with the best available algorithms. However, few efforts have been made to extend SVMs to cover the specific requirements of hyperspectral image classification, for example, by building tailor-made kernels. Observation of real-life spectral imagery from the AVIRIS hyperspectral sensor shows that the useful information for classification is not equally distributed across bands, which provides the potential to enhance the SVM's performance by exploring different kernel functions. Spectrally weighted kernels are therefore proposed, and a set of particular weights is chosen by either optimizing an estimate of generalization error or evaluating each band's utility level. To assess the effectiveness of the proposed method, experiments are carried out on the publicly available 92AV3C dataset collected from the 220-dimensional AVIRIS hyperspectral sensor. Results indicate that the method is generally effective in improving performance: spectral weighting based on learning weights by gradient descent is found to be slightly better than an alternative method based on estimating "relevance" between band information and ground truth.
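
    A spectrally weighted RBF kernel is a one-liner, and it can be handed to scikit-learn's SVC as a precomputed Gram matrix. The band weights below are hand-set for a synthetic example; the paper instead learns them by gradient descent on a generalization-error estimate or derives them from per-band utility.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def weighted_rbf(X, Z, w, gamma=0.1):
        # K(x, z) = exp(-gamma * sum_b w_b * (x_b - z_b)^2): informative
        # bands dominate the distance in proportion to their weight.
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2 * w).sum(axis=2)
        return np.exp(-gamma * d2)

    # Toy "hyperspectral" data: only the first 5 of 50 bands carry the class.
    rng = np.random.default_rng(15)
    X = rng.normal(size=(200, 50))
    y = (X[:, :5].sum(axis=1) > 0).astype(int)
    w = np.zeros(50); w[:5] = 1.0              # stand-in band-utility weights
    K_train = weighted_rbf(X[:100], X[:100], w)
    K_test = weighted_rbf(X[100:], X[:100], w)
    clf = SVC(kernel='precomputed').fit(K_train, y[:100])
    print("held-out accuracy:", (clf.predict(K_test) == y[100:]).mean())
    ```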

  • IEEE Transactions on Image Processing EDICS

    Publication Year: 2008, Page(s): 630
  • IEEE Transactions on Image Processing Information for authors

    Publication Year: 2008, Page(s): 631 - 632
  • IEEE Signal Processing Society Information

    Publication Year: 2008, Page(s): C3

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003