IEEE Transactions on Image Processing

Issue 1 • Jan. 2004

Displaying Results 1 - 18 of 18
  • Table of contents

    Publication Year: 2004, Page(s): 01
    PDF (36 KB)
    Freely Available from IEEE
  • IEEE Transactions on Image Processing Society Information

    Publication Year: 2004, Page(s): 0_2
    PDF (35 KB)
    Freely Available from IEEE
  • Texture decomposition by harmonics extraction from higher order statistics

    Publication Year: 2004, Page(s): 1 - 14
    Cited by: Papers (15)
    PDF (1485 KB) | HTML

    In this paper, a method of harmonics extraction from higher order statistics (HOS) is developed for texture decomposition. We show that the diagonal slice of the fourth-order cumulants is proportional to the autocorrelation of a related noiseless sinusoidal signal with identical frequencies. We propose to use this fourth-order cumulant slice to estimate a power spectrum from which the harmonic frequencies can easily be extracted. Hence, a texture can be decomposed into deterministic and indeterministic components, as in a unified texture model, through a Wold-like decomposition procedure. Simulation and experimental results demonstrate that this method is effective for texture decomposition and performs better than traditional decomposition methods based on lower order statistics.

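    As a rough illustration of the cumulant-slice idea above, the sketch below estimates the diagonal fourth-order cumulant slice of a 1-D signal, takes its spectrum, and picks the strongest peaks as harmonic frequencies. It is a minimal 1-D toy assuming a zero-mean signal, not the paper's 2-D texture decomposition procedure; the synthetic signal and the peak-picking rule are illustrative choices.

```python
import numpy as np

def diagonal_c4_slice(x, max_lag):
    """Sample estimate of the diagonal fourth-order cumulant slice
    c4(tau) = E[x(n) x(n+tau)^3] - 3 R(tau) R(0) for a zero-mean signal."""
    x = x - x.mean()
    n = len(x)
    r0 = np.mean(x * x)
    c4 = np.zeros(2 * max_lag + 1)
    for i, tau in enumerate(range(-max_lag, max_lag + 1)):
        a, b = (x[:n - tau], x[tau:]) if tau >= 0 else (x[-tau:], x[:n + tau])
        c4[i] = np.mean(a * b ** 3) - 3.0 * np.mean(a * b) * r0
    return c4

# Synthetic 1-D "texture profile": two harmonics plus a smoothed-noise component.
rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
x = np.sin(2 * np.pi * 0.11 * t) + 0.7 * np.sin(2 * np.pi * 0.23 * t)
x += np.convolve(rng.normal(size=n), np.ones(5) / 5, mode="same")

c4 = diagonal_c4_slice(x, max_lag=256)
spectrum = np.abs(np.fft.rfft(c4 * np.hanning(len(c4))))
freqs = np.fft.rfftfreq(len(c4))
# Crude peak picking: report the two strongest local maxima of the spectrum.
is_peak = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
bins = np.where(is_peak)[0] + 1
bins = bins[np.argsort(spectrum[bins])[::-1][:2]]
print("estimated harmonic frequencies:", np.sort(freqs[bins]))
```
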
  • Gridline: automatic grid alignment in DNA microarray scans

    Publication Year: 2004, Page(s): 15 - 25
    Cited by: Papers (27) | Patents (11)
    PDF (1314 KB) | HTML

    We present a new automatic grid alignment algorithm for detecting two-dimensional (2-D) arrays of spots in DNA microarray images. Our motivation for this work is the lack of automation in high-throughput microarray data analysis, which leads to a) spatial inaccuracy of located spots, and hence inaccuracy of the information extracted from a spot, and b) inconsistency of extracted features due to manual selection of grid alignment parameters. The proposed grid alignment algorithm is novel in the sense that 1) it can detect irregularly row- and column-spaced spots in a 2-D array, 2) it is independent of spot color and size, 3) it generalizes to grids of primitive shapes other than spots, 4) it can perform grid alignment on any number of input channels, 5) it reduces the number of free parameters to a minimum by data-driven optimization of most algorithmic parameters, and 6) it has a built-in speed-versus-accuracy tradeoff mechanism to accommodate the user's requirements on running time and accuracy of the results. The developed algorithm also automatically identifies multiple blocks of 2-D arrays, as is the case in microarray images, and compensates for grid rotations in addition to grid translations.

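    The following sketch is a deliberately simplified stand-in for grid localization: it projects spot intensity onto rows and columns and takes a per-band intensity centroid, assuming an axis-aligned, single-block, regularly spaced grid with a known number of rows and columns. The paper's algorithm handles rotation, multiple blocks, irregular spacing, and multiple channels, none of which is reproduced here.

```python
import numpy as np

def grid_from_projections(img, n_rows, n_cols):
    """Approximate row/column centre coordinates of an n_rows x n_cols spot grid."""
    def centres(profile, k):
        # Split the 1-D projection into k equal bands and take each band's intensity centroid.
        bands = np.array_split(np.arange(profile.size), k)
        return [float(np.average(b, weights=profile[b] + 1e-9)) for b in bands]
    return centres(img.sum(axis=1), n_rows), centres(img.sum(axis=0), n_cols)

# Synthetic 4 x 5 spot image with Gaussian spots every 20 pixels.
yy, xx = np.mgrid[0:80, 0:100]
img = np.zeros((80, 100))
for r in range(4):
    for c in range(5):
        img += np.exp(-((yy - (10 + 20 * r)) ** 2 + (xx - (10 + 20 * c)) ** 2) / 18.0)

rows, cols = grid_from_projections(img, 4, 5)
print("row centres:", np.round(rows, 1))
print("col centres:", np.round(cols, 1))
```
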
  • Context modeling based on context quantization with application in wavelet image coding

    Publication Year: 2004, Page(s): 26 - 32
    Cited by: Papers (16) | Patents (1)
    PDF (191 KB) | HTML

    Context modeling is widely used in image coding to improve compression performance. However, with no special treatment, the expected compression gain will be cancelled by the model cost introduced by high-order context models. Context quantization is an efficient method for dealing with this problem. In this paper, we analyze the general context quantization problem in detail and show that context quantization is similar to a common vector quantization problem. If a suitable distortion measure is defined, the optimal context quantizer can be designed by a Lloyd-style iterative algorithm. This context quantization strategy is applied to an embedded wavelet coding scheme in which the significance map symbols and sign symbols are coded directly by arithmetic coding with context models designed by the proposed quantization algorithm. Good coding performance is achieved.

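    Below is a minimal sketch of a Lloyd-style context quantizer of the kind the abstract describes, under the assumption that the distortion between a context's conditional symbol distribution and its group centroid is measured by the Kullback-Leibler divergence; the paper defines its own distortion measure, and the toy counts are made up.

```python
import numpy as np

def quantize_contexts(counts, n_groups, n_iter=30, seed=0):
    """counts: (n_contexts, n_symbols) symbol counts observed under each raw context.
    Returns an array mapping every raw context to one of n_groups quantized contexts."""
    rng = np.random.default_rng(seed)
    n_ctx, n_sym = counts.shape
    # Smoothed conditional symbol distribution of each raw context.
    p = (counts + 1.0) / (counts.sum(axis=1, keepdims=True) + n_sym)
    labels = rng.integers(0, n_groups, size=n_ctx)
    for _ in range(n_iter):
        # Centroid step: each group's distribution pools the counts of its member contexts.
        cents = np.empty((n_groups, n_sym))
        for g in range(n_groups):
            members = counts[labels == g]
            if members.size == 0:                       # re-seed an empty group with a random context
                members = counts[rng.integers(n_ctx)][None, :]
            cents[g] = (members.sum(axis=0) + 1.0) / (members.sum() + n_sym)
        # Assignment step: each context joins the group whose centroid is closest in KL divergence.
        kl = (p[:, None, :] * (np.log(p[:, None, :]) - np.log(cents[None, :, :]))).sum(axis=2)
        new_labels = kl.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy example: 16 raw contexts over a binary alphabet, merged into 4 quantized contexts.
rng = np.random.default_rng(1)
counts = rng.integers(1, 200, size=(16, 2))
print(quantize_contexts(counts, n_groups=4))
```
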
  • Super-resolution reconstruction of compressed video using transform-domain statistics

    Publication Year: 2004, Page(s): 33 - 43
    Cited by: Papers (26) | Patents (1)
    PDF (516 KB)

    Considerable attention has been directed to the problem of producing high-resolution video and still images from multiple low-resolution images. This multiframe reconstruction, also known as super-resolution reconstruction, is beginning to be applied to compressed video. Super-resolution techniques designed for raw (i.e., uncompressed) video may not be effective when applied to compressed video because they do not incorporate the compression process into their models. The compression process introduces quantization error, which is the dominant source of error in some cases. In this paper, we propose a stochastic framework in which quantization information, as well as other statistical information about the additive noise and the image prior, can be utilized effectively.

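    A stripped-down sketch of multiframe reconstruction is given below under strong simplifying assumptions: known integer sub-pixel shifts, no blur, Gaussian noise standing in for quantization error, and a quadratic smoothness prior. It illustrates the general MAP-style estimation loop rather than the paper's stochastic framework, which models the compression process explicitly.

```python
import numpy as np

def degrade(x, shift, factor):
    """Toy observation model y_k = D S_k x: circular shift, then downsample."""
    return np.roll(x, shift, axis=(0, 1))[::factor, ::factor]

def super_resolve(frames, shifts, factor, lam=0.05, step=0.2, n_iter=200):
    """Gradient descent on sum_k ||D S_k x - y_k||^2 + lam * smoothness(x)."""
    x = np.zeros((frames[0].shape[0] * factor, frames[0].shape[1] * factor))
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y, s in zip(frames, shifts):
            r = degrade(x, s, factor) - y                     # residual of frame k
            up = np.zeros_like(x)
            up[::factor, ::factor] = r                        # adjoint of downsampling
            grad += np.roll(up, (-s[0], -s[1]), axis=(0, 1))  # adjoint of the shift
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)  # discrete Laplacian (prior term)
        x -= step * (grad - lam * lap)
    return x

# Toy experiment: recover a 32 x 32 image from four shifted 16 x 16 observations.
rng = np.random.default_rng(0)
truth = rng.random((32, 32))
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [degrade(truth, s, 2) + 0.01 * rng.normal(size=(16, 16)) for s in shifts]
estimate = super_resolve(frames, shifts, factor=2)
print("RMSE:", np.sqrt(np.mean((estimate - truth) ** 2)))
```
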
  • A generalized model for scratch detection

    Publication Year: 2004, Page(s): 44 - 50
    Cited by: Papers (33)
    PDF (451 KB)

    This paper presents a generalization of Kokaram's model for scratch-line detection on digital film material. It is based on the assumption that a scratch is not purely additive on a given image but also has a destructive effect. This result allows us to design a more effective scratch detector that operates on a hierarchical representation of the degraded image, i.e., on the local extrema of its cross sections. Thanks to Weber's law, the proposed detector works well even on faint scratches and is completely automatic, except for the choice of scratch color (black or white). Experimental results show that the proposed detector performs better in terms of correct detection and false-alarm rejection, with a lower computing time.

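    The sketch below is a generic, simplified vertical-scratch detector in the spirit of cross-section analysis with a Weber-like contrast test: it averages the rows, compares each column with a local median background, and flags columns whose relative contrast exceeds a threshold. The window size and threshold are illustrative choices, not the paper's, and the destructive-scratch model is not reproduced.

```python
import numpy as np

def detect_scratch_columns(frame, color="white", window=15, weber_thresh=0.08):
    """Flag columns whose cross-section deviates from the local background by a Weber-like ratio."""
    profile = frame.mean(axis=0).astype(float)           # horizontal cross-section
    pad = window // 2
    padded = np.pad(profile, pad, mode="edge")
    background = np.array([np.median(padded[i:i + window]) for i in range(len(profile))])
    contrast = (profile - background) / np.maximum(background, 1e-6)
    if color == "white":
        return np.where(contrast > weber_thresh)[0]
    return np.where(contrast < -weber_thresh)[0]

# Synthetic frame with a bright, 2-pixel-wide vertical scratch at column 40.
rng = np.random.default_rng(0)
frame = 100 + 10 * rng.random((120, 160))
frame[:, 40:42] += 30
print("scratch columns:", detect_scratch_columns(frame, color="white"))
```
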
  • Lip image segmentation using fuzzy clustering incorporating an elliptic shape function

    Publication Year: 2004, Page(s): 51 - 62
    Cited by: Papers (37)
    PDF (659 KB) | HTML

    Recently, lip image analysis has received much attention because its visual information has been shown to improve speech recognition and speaker authentication. Lip image segmentation plays an important role in lip image analysis. In this paper, a new fuzzy clustering method for lip image segmentation is presented. This clustering method takes both the color information and the spatial distance into account, whereas most current clustering methods deal only with the former. In this method, a new dissimilarity measure, which integrates the color dissimilarity and the spatial distance in terms of an elliptic shape function, is introduced. Because of the elliptic shape function, the new measure is able to differentiate pixels that have similar color information but are located in different regions. A new iterative algorithm for determining the membership and centroid of each class is derived and is shown to provide good differentiation between the lip region and the nonlip region. Experimental results show that the new algorithm yields better membership distributions and lip shapes than the standard fuzzy c-means algorithm and four other methods investigated in the paper.

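    As an illustration of combining a color dissimilarity with an elliptic spatial term, the sketch below runs a two-class fuzzy c-means-style iteration in which the "lip" class is penalized outside an assumed ellipse and the background class inside it. The ellipse parameters, the weight lam, and the synthetic colors are assumptions; the paper derives its own dissimilarity measure and update equations.

```python
import numpy as np

def lip_fcm(colors, coords, ellipse, lam=0.5, n_iter=50):
    """colors: (N, C) pixel colour features; coords: (N, 2) pixel (x, y) positions;
    ellipse: (x0, y0, a, b). Returns the membership of each pixel in the 'lip' class."""
    x0, y0, a, b = ellipse
    e = ((coords[:, 0] - x0) / a) ** 2 + ((coords[:, 1] - y0) / b) ** 2   # < 1 inside the ellipse
    spatial = np.stack([np.maximum(e - 1.0, 0.0),      # penalty for 'lip' pixels outside the ellipse
                        np.maximum(1.0 - e, 0.0)], 1)  # penalty for background pixels inside it
    u = np.full((len(colors), 2), 0.5)
    for _ in range(n_iter):
        w = u ** 2
        centroids = (w.T @ colors) / w.sum(axis=0)[:, None]              # weighted colour means
        d = ((colors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # colour dissimilarity
        d = d + lam * spatial + 1e-9                                     # plus the elliptic spatial term
        u = (1.0 / d) / (1.0 / d).sum(axis=1, keepdims=True)             # FCM membership update (m = 2)
    return u[:, 0]

# Toy image: reddish pixels inside an ellipse, skin-coloured pixels elsewhere.
rng = np.random.default_rng(0)
h, w = 40, 60
ys, xs = np.mgrid[0:h, 0:w]
coords = np.stack([xs.ravel(), ys.ravel()], 1).astype(float)
inside = ((coords[:, 0] - 30) / 15) ** 2 + ((coords[:, 1] - 20) / 8) ** 2 < 1
colors = np.where(inside[:, None], [0.7, 0.3], [0.6, 0.45]) + 0.05 * rng.normal(size=(h * w, 2))
membership = lip_fcm(colors, coords, ellipse=(30, 20, 15, 8))
print("mean membership inside / outside:", membership[inside].mean(), membership[~inside].mean())
```
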
  • Nonlinear color space and spatiotemporal MRF for hierarchical segmentation of face features in video

    Publication Year: 2004, Page(s): 63 - 71
    Cited by: Papers (28) | Patents (1)
    PDF (584 KB) | HTML

    This paper deals with the low-level joint processing of color and motion for robust face analysis within a feature-based approach. To gain robustness and contrast under unsupervised viewing conditions, a nonlinear color transform relevant for hue segmentation is derived from a logarithmic model. A hierarchical segmentation scheme based on Markov random field modeling combines hue and motion detection within a spatiotemporal neighborhood. Relevant face regions are segmented without parameter tuning. The accuracy of the label fields enables not only face detection and tracking but also geometrical measurements on facial feature edges, such as lips or eyes. Results are shown both on typical test sequences and on various sequences acquired from micro- or mobile cameras. The efficiency of the method makes it suitable for real-time applications aimed at audiovisual communication in unsupervised environments.

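    The sketch below shows a generic log-opponent, hue-like transform of the kind a logarithmic color model suggests, combined with crude frame differencing as a motion cue. The paper's actual transform and its spatiotemporal MRF fusion are not reproduced, and the hue and motion thresholds are arbitrary.

```python
import numpy as np

def log_hue(rgb, eps=1.0):
    """Illumination-insensitive, hue-like angle from log-opponent channels."""
    r, g, b = (np.log(rgb[..., i] + eps) for i in range(3))
    return np.arctan2(r - g, (r + g) / 2.0 - b)

def face_candidate_mask(frame_t, frame_tm1, hue_lo, hue_hi, motion_thresh=10.0):
    """Combine a hue gate with a frame-difference motion cue (the paper fuses these with an MRF)."""
    hue = log_hue(frame_t.astype(float))
    skin = (hue > hue_lo) & (hue < hue_hi)
    motion = np.abs(frame_t.astype(float) - frame_tm1.astype(float)).mean(axis=-1) > motion_thresh
    return skin & motion

# Toy usage with random frames standing in for two consecutive video frames.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, size=(48, 64, 3))
f1 = np.clip(f0 + rng.integers(-20, 20, size=f0.shape), 0, 255)
mask = face_candidate_mask(f1, f0, hue_lo=0.1, hue_hi=0.9)
print("candidate pixels:", int(mask.sum()))
```
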
  • Maximum likelihood localization of 2-D patterns in the Gauss-Laguerre transform domain: theoretic framework and preliminary results

    Publication Year: 2004, Page(s): 72 - 86
    Cited by: Papers (16)
    PDF (595 KB)

    Usual approaches to localization, i.e., joint estimation of the position, orientation, and scale of a bidimensional pattern, employ suboptimum techniques based on invariant signatures, which allow position estimation independent of scale and orientation. In this paper, a maximum likelihood (ML) method for pattern localization working in the Gauss-Laguerre transform (GLT) domain is presented. The GLT is based on an orthogonal family of circular harmonic functions with specific radial profiles, which permits optimum joint estimation of the position and scale/rotation parameters by looking at the maxima of a "Gauss-Laguerre likelihood map." The Fisher information matrix for any given pattern is provided, and the theoretical asymptotic accuracy of the parameter estimates is calculated through the Cramer-Rao lower bound. Application of the ML estimation method is discussed and an example is provided.

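    As a brute-force stand-in for joint position/rotation/scale estimation, the sketch below correlates the image with rotated and scaled copies of a template and takes the global maximum of the score maps. This is not the Gauss-Laguerre transform approach, which reaches the same goal far more efficiently and with ML optimality; the template, angle grid, and scale grid are illustrative.

```python
import numpy as np
from scipy import ndimage, signal

def localize(image, template, angles, scales):
    """Exhaustive search over angles and scales; returns the best (row, col, angle, scale)."""
    best = (-np.inf, None)
    for ang in angles:
        for sc in scales:
            t = ndimage.zoom(ndimage.rotate(template, ang, reshape=True), sc)
            t = t - t.mean()
            score = signal.fftconvolve(image, t[::-1, ::-1], mode="same")   # cross-correlation
            idx = np.unravel_index(np.argmax(score), score.shape)
            if score[idx] > best[0]:
                best = (score[idx], {"row": idx[0], "col": idx[1], "angle": ang, "scale": sc})
    return best[1]

# Toy test: plant a rotated, scaled copy of a cross-shaped template in a noisy image.
rng = np.random.default_rng(0)
template = np.zeros((15, 15))
template[7, :] = 1.0
template[:, 7] = 0.5
image = 0.1 * rng.normal(size=(96, 96))
patch = ndimage.zoom(ndimage.rotate(template, 30, reshape=True), 1.3)
r0, c0 = 40, 55
image[r0:r0 + patch.shape[0], c0:c0 + patch.shape[1]] += patch
print(localize(image, template, angles=range(0, 180, 15), scales=[0.8, 1.0, 1.3]))
```
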
  • Automatic detection and recognition of signs from natural scenes

    Publication Year: 2004, Page(s): 87 - 99
    Cited by: Papers (62) | Patents (4)
    PDF (1248 KB)

    In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle text of different sizes, orientations, color distributions, and backgrounds. We use affine rectification to recover the deformation of text regions caused by an inappropriate camera view angle. The procedure can significantly improve the text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features directly from an intensity image. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs captured by a camera and translate the recognized text into English.

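    The sketch below condenses the character-feature part of the pipeline described above: local intensity normalization, a small Gabor filter bank, coarse spatial pooling, and LDA. The filter parameters, pooling grid, synthetic "characters", and the use of scikit-learn are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def normalize_local(img, size=9, eps=1e-6):
    """Local intensity normalization: subtract the local mean, divide by the local std."""
    mean = uniform_filter(img, size)
    var = uniform_filter(img ** 2, size) - mean ** 2
    return (img - mean) / np.sqrt(np.maximum(var, eps))

def gabor_kernel(theta, lam=4.0, sigma=2.0, size=11):
    """Real Gabor kernel with an isotropic Gaussian envelope and orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    """Normalize, filter with n_orient Gabor kernels, pool response magnitudes on a 4x4 grid."""
    img = normalize_local(img.astype(float))
    feats = []
    for k in range(n_orient):
        resp = np.abs(convolve(img, gabor_kernel(np.pi * k / n_orient)))
        for band in np.array_split(resp, 4, axis=0):
            feats.extend(block.mean() for block in np.array_split(band, 4, axis=1))
    return np.array(feats)

# Toy "characters": horizontal-stroke vs vertical-stroke patches.
rng = np.random.default_rng(0)
def sample(cls):
    img = rng.normal(0.0, 0.2, size=(32, 32))
    if cls == 0:
        img[14:18, 4:28] += 2.0
    else:
        img[4:28, 14:18] += 2.0
    return gabor_features(img)

X = np.array([sample(c) for c in (0, 1) for _ in range(30)])
y = np.array([c for c in (0, 1) for _ in range(30)])
lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```
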
  • Discussion of "On the Specification of Repair Time Requirements"

    Publication Year: 2004, Page(s): 100 - 105
    PDF (1756 KB)

  • IEEE Transactions on Image Processing Society Information EDICS

    Publication Year: 2004, Page(s): 106
    PDF (30 KB)
    Freely Available from IEEE
  • IEEE Transactions on Image Processing Society Information for authors

    Publication Year: 2004, Page(s): 107 - 108
    PDF (43 KB) | HTML
    Freely Available from IEEE
  • IEEE Transactions on Speech and Audio Processing Special Issue on Data Mining of Speech, Audio and Dialog

    Publication Year: 2004, Page(s): 109
    PDF (134 KB)
    Freely Available from IEEE
  • Call for Nominations: Paper Awards of the IEEE Signal Processing Society

    Publication Year: 2004, Page(s): 110
    PDF (142 KB)
    Freely Available from IEEE
  • IEEE copyright form

    Publication Year: 2004, Page(s): 111 - 112
    PDF (1057 KB)
    Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Publication Year: 2004, Page(s): 0_3
    PDF (29 KB)
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003