
IEEE Transactions on Image Processing

Issue 12 • December 2004

  • Table of contents

    Page(s): c1 - c4
    PDF (37 KB) | Freely Available from IEEE
  • IEEE Transactions on Image Processing publication information

    Page(s): c2
    PDF (42 KB) | Freely Available from IEEE
  • A multiple-substream unequal error-protection and error-concealment algorithm for SPIHT-coded video bitstreams

    Page(s): 1547 - 1553
    PDF (1320 KB) | HTML

    This work presents a coordinated multiple-substream unequal error-protection and error-concealment algorithm for SPIHT-coded bitstreams transmitted over lossy channels. In the proposed scheme, we divide the video sequence corresponding to a group of pictures into two subsequences and independently encode each subsequence using a three-dimensional SPIHT algorithm. We use two different partitioning schemes to generate the substreams, each of which offers some advantages under the appropriate channel condition. Each substream is protected by an FEC-based unequal error-protection algorithm, which assigns unequal forward error correction codes to each bit plane. Any information that is lost during transmission of any substream is estimated at the receiver by using the correlation between the substreams and the smoothness of the video signal. Simulation results show that the proposed multiple-substream UEP algorithm is simple, fast, and robust in hostile network conditions, and that the proposed error-concealment algorithm can achieve a 2-3-dB PSNR gain at high packet-loss rates over the case in which error concealment is not used.

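    The unequal error-protection step described above rests on a simple idea: assign more forward-error-correction parity to the more significant bit planes. The sketch below (Python) illustrates only that allocation idea; the linear parity profile and the parameter names are assumptions for illustration, not the allocation used in the paper.

        # Illustrative unequal error protection: stronger FEC for more
        # significant bit planes. The linear profile is an assumption.
        def allocate_parity(num_planes, max_parity=16, min_parity=2):
            """Return one parity-symbol count per bit plane.

            Plane 0 is the most significant and gets the most protection;
            protection decreases linearly toward the least significant plane.
            """
            if num_planes == 1:
                return [max_parity]
            step = (max_parity - min_parity) / (num_planes - 1)
            return [round(max_parity - i * step) for i in range(num_planes)]

        if __name__ == "__main__":
            for plane, p in enumerate(allocate_parity(num_planes=8)):
                print(f"bit plane {plane}: {p} parity symbols per packet")
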
  • Fast-searching algorithm for vector quantization using projection and triangular inequality

    Page(s): 1554 - 1558
    PDF (316 KB) | HTML

    In this paper, a new, fast search algorithm for vector quantization is presented. Two inequalities, one used to terminate the search process and the other to delete impossible codewords, are presented to reduce the distortion computations. Our algorithm makes use of a vector's features (mean value, edge strength, and texture strength) to reject many unlikely codewords that cannot be rejected by other available approaches. Experimental results show that our algorithm is superior to other algorithms in terms of computing time and the number of distortion calculations. Compared with available approaches, our method reduces the computing time and the number of distortion computations significantly. Compared with the best method for reducing distortion computations, our algorithm further reduces the number of distortion calculations by 29% to 58.4%. Compared with the best encoding algorithm for vector quantization, our approach further reduces the computing time by 8% to 47.7%.

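    Rejection tests of this kind can be sketched compactly. The Python/NumPy example below uses only a mean-value (projection) bound and the triangle inequality with a fixed reference vector; the paper's edge- and texture-strength features and its termination test are not reproduced, and all names are illustrative.

        import numpy as np

        def encode_vector(x, codebook, ref=None):
            """Find the nearest codeword to x, skipping codewords that a
            projection (mean-value) bound or the triangle inequality proves
            cannot beat the current best match."""
            k = x.shape[0]
            ref = np.zeros(k) if ref is None else ref
            means = codebook.mean(axis=1)                   # per-codeword means
            ref_dists = np.linalg.norm(codebook - ref, axis=1)
            x_mean = x.mean()
            x_ref = np.linalg.norm(x - ref)

            best_idx, best_d2 = 0, float(np.sum((x - codebook[0]) ** 2))
            for i in range(1, codebook.shape[0]):
                # Projection bound: ||x - c||^2 >= k * (mean(x) - mean(c))^2.
                if k * (x_mean - means[i]) ** 2 >= best_d2:
                    continue
                # Triangle inequality: ||x - c|| >= | ||x - r|| - ||c - r|| |.
                if (x_ref - ref_dists[i]) ** 2 >= best_d2:
                    continue
                d2 = float(np.sum((x - codebook[i]) ** 2))
                if d2 < best_d2:
                    best_idx, best_d2 = i, d2
            return best_idx, best_d2

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            cb = rng.standard_normal((256, 16))
            print(encode_vector(rng.standard_normal(16), cb))
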
  • Finding axes of symmetry from potential fields

    Page(s): 1559 - 1566
    PDF (1129 KB) | HTML

    This paper addresses the problem of detecting axes of bilateral symmetry in images. In order to achieve robustness to variation in illumination, only edge-gradient information is used. To overcome the problem of edge breaks, a potential field is developed from the edge map, which spreads the information across the image plane. Pairs of points in the image plane vote for their axes of symmetry with associated confidence values. To make the method robust to overlapping objects, only local features in the form of Taylor coefficients are used for quantifying symmetry. We define an axis-of-symmetry histogram, which accumulates the weighted votes for all possible axes of symmetry. To reduce the computational complexity of voting, a hashing scheme is proposed in which pairs of points whose potential fields are too asymmetric are pruned and not counted in the vote. Experimental results indicate that the proposed method is fairly robust to edge breaks and is able to detect symmetries even when only 0.05% of the possible pairs are used for voting.

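    The voting idea can be made concrete with a stripped-down accumulator over (theta, rho) axis parameters. In the sketch below, each sampled point pair votes with its perpendicular bisector and a caller-supplied confidence weight, which stands in for the potential-field and Taylor-coefficient weighting of the paper; bin counts and parameterization are assumptions.

        import numpy as np

        def vote_symmetry_axes(points, weights, n_theta=180, n_rho=200, rho_max=None):
            """Accumulate weighted votes for mirror-symmetry axes.

            Each pair votes for its perpendicular bisector, parameterized as
            n . x = rho with n = (cos(theta), sin(theta)); weights[i, j] is the
            pair's confidence. Peaks in the histogram are candidate axes.
            """
            pts = np.asarray(points, dtype=float)
            if rho_max is None:
                rho_max = np.abs(pts).max() * 2.0
            hist = np.zeros((n_theta, n_rho))
            for i in range(len(pts)):
                for j in range(i + 1, len(pts)):
                    d = pts[j] - pts[i]
                    norm = np.linalg.norm(d)
                    if norm < 1e-9:
                        continue
                    n = d / norm                        # normal of the candidate axis
                    m = 0.5 * (pts[i] + pts[j])         # midpoint lies on the axis
                    theta = np.arctan2(n[1], n[0])
                    if theta < 0:                       # fold direction into [0, pi)
                        theta += np.pi
                        n = -n
                    rho = float(n @ m)
                    ti = int(np.clip(theta / np.pi * n_theta, 0, n_theta - 1))
                    ri = int(np.clip((rho + rho_max) / (2 * rho_max) * n_rho, 0, n_rho - 1))
                    hist[ti, ri] += weights[i, j]
            return hist
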
  • Grayscale level connectivity: theory and applications

    Page(s): 1567 - 1580
    PDF (1828 KB) | HTML

    A novel notion of connectivity for grayscale images is introduced, defined by means of a binary connectivity assigned on the image level sets. In this framework, a grayscale image is connected if all level sets below a prespecified threshold are connected. The proposed notion is referred to as grayscale level connectivity and includes, as special cases, other well-known notions of grayscale connectivity, such as fuzzy grayscale connectivity and grayscale blobs. In contrast to those approaches, the present framework does not require all image level sets to be connected. Moreover, a connected grayscale object may contain more than one regional maximum. Grayscale level connectivity is studied in the rigorous framework of connectivity classes. The use of grayscale level connectivity in image analysis applications, such as object extraction, image segmentation, object-based filtering, and hierarchical image representation, is discussed and illustrated.

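    A direct, if naive, way to test connectivity of this kind is to threshold the image at every gray level up to the prescribed threshold and check that each nonempty thresholded set forms a single connected component. The sketch below does this with SciPy's connected-component labelling; the use of upper-threshold sets and 8-connectivity are assumptions made for illustration, not the paper's formal definition.

        import numpy as np
        from scipy import ndimage

        def is_level_connected(image, threshold, structure=None):
            """Return True if every nonempty set {image >= t}, for integer
            gray levels t <= threshold, is a single connected component."""
            if structure is None:
                structure = np.ones((3, 3))          # 8-connectivity (assumption)
            img = np.asarray(image)
            for t in range(int(img.min()), int(threshold) + 1):
                level_set = img >= t
                if not level_set.any():
                    continue
                _, num = ndimage.label(level_set, structure=structure)
                if num > 1:
                    return False
            return True

        if __name__ == "__main__":
            a = np.array([[3, 1, 3]])
            print(is_level_connected(a, threshold=1))   # True: {a >= 1} is connected
            print(is_level_connected(a, threshold=2))   # False: {a >= 2} splits in two
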
  • Efficient computation of the Hutchinson metric between digitized images

    Page(s): 1581 - 1588
    PDF (1157 KB) | HTML

    The Hutchinson metric is a natural measure of the discrepancy between two images for use in fractal image processing. An efficient solution to the problem of computing the Hutchinson metric between two arbitrary digitized images is considered. The technique proposed here, based on the shape of the objects as projected on the digitized screen, can be used as an effective way to establish the error between the original image and the (possibly compressed) decoded image. To test the performance of our method, we apply it to compare pairs of fractal objects, as well as to compare real-world images with the corresponding reconstructed ones.

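    For orientation, the Hutchinson metric is the Kantorovich (optimal transport) distance between the two images viewed as normalized measures. In the special case of two foreground pixel sets of equal cardinality and equal weights, it reduces to a minimum-cost assignment, which the sketch below solves exactly with SciPy. This only illustrates the metric itself; it is not the paper's projection-based algorithm.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def hutchinson_equal_mass(points_a, points_b):
            """Kantorovich/Hutchinson distance between two equal-size sets of
            unit-mass pixel locations: the average ground distance under the
            optimal one-to-one assignment."""
            a = np.asarray(points_a, dtype=float)
            b = np.asarray(points_b, dtype=float)
            assert len(a) == len(b), "this special case needs equal mass"
            cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            rows, cols = linear_sum_assignment(cost)
            return cost[rows, cols].mean()

        if __name__ == "__main__":
            img_a = np.array([(0, 0), (0, 1)])      # foreground pixel coordinates
            img_b = np.array([(1, 0), (1, 1)])
            print(hutchinson_equal_mass(img_a, img_b))   # 1.0
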
  • Design of signal-adapted multidimensional lifting scheme for lossy coding

    Page(s): 1589 - 1603
    PDF (1683 KB) | HTML

    This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case and is illustrated for 2-D separable and quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics, enhancing compression efficiency in a more general setting. It is based on a two-step lifting scheme and combines lifting theory with Wiener optimization. The prediction step is designed to minimize the variance of the signal, and the update step is designed to minimize a reconstruction error. An application to lossy compression demonstrates the performance of the method.

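    A one-dimensional toy version conveys the idea of a signal-adapted prediction step: the odd samples are predicted from their even neighbors with coefficients obtained by least squares (the Wiener solution for this linear predictor), and a conventional update follows. The 2-tap predictor and the fixed 5/3-style update are assumptions for illustration and do not reproduce the paper's multidimensional, quincunx design.

        import numpy as np

        def adapted_lifting_analysis(x):
            """One lifting stage with a signal-adapted 2-tap predictor.

            Split -> predict (least-squares fit of odd samples from their even
            neighbors) -> update (fixed 5/3-style smoothing of the evens).
            Returns (approximation, detail, predictor coefficients).
            """
            x = np.asarray(x, dtype=float)
            even, odd = x[0::2], x[1::2]
            n = min(len(odd), len(even) - 1)
            # Each odd sample sits between two even neighbors.
            A = np.column_stack([even[:n], even[1:n + 1]])
            p, *_ = np.linalg.lstsq(A, odd[:n], rcond=None)   # Wiener/LS predictor
            detail = odd[:n] - A @ p
            # Fixed update so the approximation keeps a smoothed local mean.
            prev = np.concatenate(([0.0], detail[:-1]))
            approx = even[:n] + 0.25 * (prev + detail)
            return approx, detail, p
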
  • Locally optimum nonlinearities for DCT watermark detection

    Page(s): 1604 - 1617
    PDF (522 KB) | HTML

    The issue of copyright protection of digital multimedia data has attracted a lot of attention during the last decade. An efficient copyright protection method that has been gaining popularity is watermarking, i.e., the embedding of a signature in a digital document that can be detected only by its rightful owner. Watermarks are usually blindly detected using correlating structures, which would be optimal if the data were Gaussian. However, in the case of DCT-domain image watermarking, the data are more heavy-tailed and the correlator is clearly suboptimal. Nonlinear receivers have been shown to be particularly well suited to the detection of weak signals in heavy-tailed noise, as they are locally optimal. This motivates the use of the Gaussian-tailed zero-memory nonlinearity, as well as the locally optimal Cauchy nonlinearity, for the detection of watermarks in DCT-transformed images. We analyze the performance of these schemes theoretically and compare it to that of the traditionally used Gaussian correlator, as well as to the recently proposed generalized-Gaussian detector, which outperforms the correlator. The theoretical analysis and the actual performance of these systems are assessed through experiments, which verify the theoretical analysis and justify the use of nonlinear structures for watermark detection. The performance of the correlator and the nonlinear detectors in the presence of quantization is also analyzed, using results from dither theory, and verified experimentally.

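    The locally optimal detector for a weak additive watermark in i.i.d. Cauchy-distributed coefficients replaces the correlator's identity nonlinearity with g(y) = 2y / (gamma^2 + y^2) before correlating with the watermark. A minimal sketch follows; the dispersion value and the decision threshold are assumptions, and the paper's estimators and performance analysis are not reproduced.

        import numpy as np

        def cauchy_lo_statistic(coeffs, watermark, gamma):
            """Locally optimal test statistic for Cauchy-distributed data:
            sum_i w_i * g(y_i), with g(y) = 2y / (gamma^2 + y^2)."""
            y = np.asarray(coeffs, dtype=float)
            w = np.asarray(watermark, dtype=float)
            return np.sum(w * (2.0 * y) / (gamma ** 2 + y ** 2))

        def detect(coeffs, watermark, gamma, threshold):
            """Declare 'watermark present' when the statistic exceeds the
            threshold (set for a target false-alarm rate, not derived here)."""
            return cauchy_lo_statistic(coeffs, watermark, gamma) > threshold
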
  • An efficient and anonymous buyer-seller watermarking protocol

    Page(s): 1618 - 1626
    PDF (364 KB) | HTML

    For the purpose of deterring unauthorized duplication and distribution of multimedia content, a seller may insert a unique digital watermark into each copy of the content to be sold. When an illegal replica is found in the market some time later, the seller can identify the responsible distributor by examining the embedded watermark. However, the accusation against the charged distributor, who was the buyer in some earlier transaction, is objectionable because the seller also has access to the watermarked copies and is therefore able to release such a replica on her own. In this paper, a watermarking protocol is proposed to avoid this difficulty, known as the customer's rights problem, in the arbitration phase. The proposed watermarking protocol also provides a fix to Memon and Wong's scheme by solving the unbinding problem. In addition, the buyer is no longer required to contact the watermark certification authority during transactions, and the anonymity of the buyer can be retained through a trusted third party. The result is an efficient and anonymous buyer-seller watermarking protocol.

  • Robust image-adaptive data hiding using erasure and error correction

    Page(s): 1627 - 1639
    PDF (1806 KB) | HTML

    Information-theoretic analyses of data hiding prescribe embedding the hidden data in the choice of quantizer for the host data. We propose practical realizations of this prescription for data hiding in images, with a view to hiding large volumes of data with low perceptual degradation. The hidden data can be recovered reliably under attacks such as compression and limited amounts of image tampering and image resizing. The three main findings are as follows. 1) In order to limit perceivable distortion while hiding large amounts of data, hiding schemes must use image-adaptive criteria in addition to statistical criteria based on information theory. 2) The use of local criteria to choose where to hide data can potentially cause desynchronization of the encoder and decoder. This synchronization problem is solved by the use of powerful, but simple-to-implement, erasure- and error-correcting codes, which also provide robustness against a variety of attacks. 3) For simplicity, scalar quantization-based hiding is employed, even though information-theoretic guidelines prescribe vector quantization-based methods. However, an information-theoretic analysis for an idealized model is provided to show that scalar quantization-based hiding incurs only an approximately 2-dB penalty in resilience to attack.

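    Embedding in the choice of quantizer can be illustrated with scalar dithered quantization-index modulation: one quantizer per bit value, offset by half a step. The sketch below shows only this generic scheme; the step size is an assumption, and the paper's image-adaptive selection and erasure/error-correction coding layer are not included.

        import numpy as np

        def qim_embed(x, bit, delta):
            """Quantize host sample x with the quantizer selected by `bit`:
            bit 0 uses the lattice delta*Z, bit 1 the lattice shifted by delta/2."""
            d = 0.0 if bit == 0 else delta / 2.0
            return delta * np.round((x - d) / delta) + d

        def qim_extract(y, delta):
            """Recover the bit by finding which quantizer reproduces y best."""
            err0 = abs(y - qim_embed(y, 0, delta))
            err1 = abs(y - qim_embed(y, 1, delta))
            return 0 if err0 <= err1 else 1

        if __name__ == "__main__":
            delta = 8.0
            y = qim_embed(23.7, bit=1, delta=delta)
            print(y, qim_extract(y, delta))   # the embedded bit is recovered as 1
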
  • Edge detection in ultrasound imagery using the instantaneous coefficient of variation

    Page(s): 1640 - 1655
    PDF (1894 KB) | HTML

    The instantaneous coefficient of variation (ICOV) edge detector, based on normalized gradient and Laplacian operators, has been proposed for edge detection in ultrasound images. In this paper, the edge-detection and localization performance of the ICOV-squared (ICOVS) detector is examined. First, a simplified version of the ICOVS detector, the normalized gradient magnitude squared, is scrutinized in order to reveal the statistical performance of edge detection and localization in speckled ultrasound imagery. Both the probability of detection and the probability of false alarm are evaluated for the detector. Edge localization is characterized by the position of the peak and the 3-dB width of the detector response. Then, the speckle-edge response of the ICOVS as applied to a realistic edge model is studied. Through theoretical analysis, we reveal the compensatory effect of the normalized Laplacian operator in the ICOV edge detector on edge-localization error. An ICOV-based edge-detection algorithm is implemented in which the ICOV detector is embedded in a diffusion coefficient in an anisotropic diffusion process. Experiments with real ultrasound images show that the proposed algorithm is effective at extracting edges in the presence of speckle. Quantitatively, the ICOVS provides lower localization error and, qualitatively, a dramatic improvement in edge-detection performance over an existing edge-detection method for speckled imagery.

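    As a rough illustration, the ICOV can be computed per pixel from the normalized gradient magnitude and normalized Laplacian. The sketch below uses the form familiar from speckle-reducing anisotropic diffusion, which is an assumption here, and it ignores the detection thresholds and diffusion machinery analyzed in the paper.

        import numpy as np
        from scipy import ndimage

        def icov(image, eps=1e-12):
            """Instantaneous coefficient of variation of a (positive) image,
            built from the normalized gradient and normalized Laplacian.
            High values indicate edges; low values indicate homogeneous speckle."""
            img = np.asarray(image, dtype=float) + eps
            gy, gx = np.gradient(img)
            grad2 = (gx ** 2 + gy ** 2) / img ** 2        # (|grad I| / I)^2
            lap = ndimage.laplace(img) / img              # (lap I) / I
            num = 0.5 * grad2 - (1.0 / 16.0) * lap ** 2
            den = (1.0 + 0.25 * lap) ** 2
            q2 = np.clip(num / np.maximum(den, eps), 0.0, None)
            return np.sqrt(q2)
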
  • List of Reviewers

    Page(s): 1656 - 1661
    PDF (42 KB) | Freely Available from IEEE
  • IEEE Transactions on Image Processing EDICS

    Page(s): 1662
    PDF (31 KB) | Freely Available from IEEE
  • IEEE Transactions on Image Processing Information for authors

    Page(s): 1663 - 1664
    PDF (45 KB) | Freely Available from IEEE
  • 2004 Index

    Page(s): 1665 - 1681
    PDF (160 KB) | Freely Available from IEEE
  • Special issue on molecular and cellular bioimaging

    Page(s): 1682
    PDF (124 KB) | Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Page(s): c3
    PDF (33 KB) | Freely Available from IEEE

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003