IEEE Transactions on Image Processing

Issue 4 • April 2006

Displaying Results 1 - 25 of 29
  • Table of contents

    Publication Year: 2006 , Page(s): c1 - c4
    Freely Available from IEEE
  • IEEE Transactions on Image Processing publication information

    Publication Year: 2006 , Page(s): c2
    Freely Available from IEEE
  • Light field compression using disparity-compensated lifting and shape adaptation

    Publication Year: 2006 , Page(s): 793 - 806
    Cited by:  Papers (15)  |  Patents (22)

    We propose disparity-compensated lifting for wavelet compression of light fields. With this approach, we obtain the benefits of wavelet coding, such as scalability in all dimensions, as well as superior compression performance. Additionally, the proposed approach solves the irreversibility limitations of previous light field wavelet coding approaches, using the lifting structure. Our scheme incorporates disparity compensation into the lifting structure for the transform across the views in the light field data set. Another transform is performed to exploit the coherence among neighboring pixels, followed by a modified SPIHT coder and rate-distortion optimized bitstream assembly. A view-sequencing algorithm is developed to organize the views for encoding. For light fields of an object, we propose to use shape adaptation to improve the compression efficiency and visual quality of the images. The necessary shape information is efficiently coded based on prediction from the existing geometry model. Experimental results show that the proposed scheme exhibits superior compression performance over existing light field compression techniques.

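    A point worth seeing concretely is the abstract's claim that lifting fixes the irreversibility of earlier wavelet approaches: because analysis and synthesis add and subtract the very same disparity-compensated predictions, the transform across views is exactly invertible even when the warp itself is not. The sketch below illustrates this with a toy integer-shift `warp`; it is a stand-in for real disparity compensation, not the paper's scheme.

    ```python
    import numpy as np

    def warp(view, disparity):
        # Toy "disparity compensation": a horizontal shift by an integer disparity.
        # Any predictor works here, even a non-invertible one, because lifting only
        # ever adds/subtracts the SAME prediction on the analysis and synthesis sides.
        return np.roll(view, disparity, axis=1)

    def lift_forward(even_view, odd_view, disparity):
        detail = odd_view - warp(even_view, disparity)          # predict step
        approx = even_view + 0.5 * warp(detail, -disparity)     # update step
        return approx, detail

    def lift_inverse(approx, detail, disparity):
        even_view = approx - 0.5 * warp(detail, -disparity)     # undo update
        odd_view = detail + warp(even_view, disparity)          # undo predict
        return even_view, odd_view

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        v0, v1 = rng.random((2, 64, 64))
        a, d = lift_forward(v0, v1, disparity=3)
        r0, r1 = lift_inverse(a, d, disparity=3)
        assert np.allclose(v0, r0) and np.allclose(v1, r1)      # perfect reconstruction
    ```
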
  • A syntax-preserving error resilience tool for JPEG 2000 based on error correcting arithmetic coding

    Publication Year: 2006 , Page(s): 807 - 818
    Cited by:  Papers (11)

    JPEG 2000 is the novel ISO standard for image and video coding. Besides its improved coding efficiency, it also provides a few error resilience tools in order to limit the effect of errors in the codestream, which can occur when the compressed image or video data are transmitted over an error-prone channel, as typically occurs in wireless communication scenarios. However, for very harsh channels, these tools often do not provide an adequate degree of error protection. In this paper, we propose a novel error-resilience tool for JPEG 2000, based on the concept of ternary arithmetic coders employing a forbidden symbol. Such coders introduce a controlled degree of redundancy during the encoding process, which can be exploited at the decoder side in order to detect and correct errors. We propose a maximum likelihood and a maximum a posteriori context-based decoder, specifically tailored to the JPEG 2000 arithmetic coder, which are able to carry out both hard and soft decoding of a corrupted codestream. The proposed decoder extends the JPEG 2000 capabilities in error-prone scenarios, without violating the standard syntax. Extensive simulations on video sequences show that the proposed decoders largely outperform the standard in terms of PSNR and visual quality.

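    The forbidden-symbol mechanism can be illustrated with a toy floating-point arithmetic coder; the JPEG 2000 MQ-coder and the paper's ML/MAP decoders are not reproduced here. A fraction eps of every coding interval is reserved for a symbol the encoder never emits, so a decoder that lands in that slice knows the codestream has been corrupted.

    ```python
    def encode(bits, p0=0.6, eps=0.1):
        # Toy arithmetic encoder: `eps` of every interval is set aside for the
        # forbidden symbol; only the remaining (1 - eps) is used for real symbols.
        low, high = 0.0, 1.0
        for b in bits:
            usable = (high - low) * (1.0 - eps)
            if b == 0:
                high = low + usable * p0
            else:
                low, high = low + usable * p0, low + usable
        return 0.5 * (low + high)

    def decode(code, n, p0=0.6, eps=0.1):
        low, high = 0.0, 1.0
        out = []
        for _ in range(n):
            usable = (high - low) * (1.0 - eps)
            if code >= low + usable:
                raise ValueError("forbidden symbol reached: codestream corrupted")
            if code < low + usable * p0:
                out.append(0)
                high = low + usable * p0
            else:
                out.append(1)
                low, high = low + usable * p0, low + usable
        return out

    # decode(encode(bits), len(bits)) recovers `bits`; a perturbed code value is
    # flagged whenever decoding drifts into a reserved slice, which happens with
    # probability roughly eps per decoded symbol. Float precision limits this toy
    # to short sequences; larger eps means more redundancy and easier detection.
    ```
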
  • Reducing video-quality fluctuations for streaming scalable video using unequal error protection, retransmission, and interleaving

    Publication Year: 2006 , Page(s): 819 - 832
    Cited by:  Papers (13)

    Forward error correction based multiple description (MD-FEC) transcoding for transmitting embedded bitstreams over packet erasure networks has been extensively studied in the past. In the existing work, a single embedded source bitstream, e.g., the bitstream of a group of pictures (GOP) encoded using three-dimensional set partitioning in hierarchical trees, is optimally protected using unequal error protection (UEP) in the rate-distortion sense. However, most of the previous work on transmitting embedded video using MD-FEC assumed that one GOP is transmitted only once, and did not consider the chance of retransmission. This may lead to noticeable video quality variations due to varying channel conditions. In this paper, a novel window-based packetization scheme is proposed, which combats bursty packet loss by combining the following three techniques: UEP, retransmission, and GOP-level interleaving. In particular, two retransmission mechanisms, namely segment-wise retransmission and byte-wise retransmission, are proposed based on different types of receiver feedback. Moreover, two levels of rate allocation are introduced: intra-GOP rate allocation minimizes the distortion of each individual GOP, while inter-GOP rate allocation intends to reduce video quality fluctuations by adaptively allocating bandwidth according to video signal characteristics and client buffer status. In this way, more consistent video quality can be achieved under various packet loss probabilities, as demonstrated by our experimental results.

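    The MD-FEC idea in the opening sentences has a simple combinatorial core: the embedded bitstream is split into layers, layer i is protected with an (n, k_i) erasure code, and any k_i received packets recover it, so more important layers get smaller k_i. The sketch below shows only that bookkeeping; the Reed-Solomon coding, retransmission, and interleaving machinery of the paper is omitted, and the layer parameters are hypothetical.

    ```python
    def decodable_layers(layer_k, packets_received):
        """layer_k: per-layer code dimensions k_i, ordered from the most important
        layer (strongest protection, smallest k_i) to the least important.
        Returns how many leading layers of the embedded bitstream are recoverable
        when `packets_received` of the n packets arrive."""
        recovered = 0
        for k in layer_k:
            if packets_received >= k:     # an (n, k) erasure code needs any k packets
                recovered += 1
            else:
                break                     # an embedded stream is only useful as a prefix
        return recovered

    # Example: n = 8 packets, 4 layers protected with k = (2, 4, 6, 8).
    # Losing half the packets still yields the first two layers of the GOP bitstream.
    assert decodable_layers((2, 4, 6, 8), packets_received=4) == 2
    ```
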
  • A cross-layer diversity technique for multicarrier OFDM multimedia networks

    Publication Year: 2006 , Page(s): 833 - 847
    Cited by:  Papers (19)

    Diversity can be used to combat multipath fading and improve the performance of wireless multimedia communication systems. In this work, by considering transmission of an embedded bitstream over an orthogonal frequency division multiplexing (OFDM) system in a slowly varying Rayleigh faded environment, we develop a cross-layer diversity technique which takes advantage of both multiple description coding and frequency diversity techniques. More specifically, assuming a frequency-selective channel, we study the packet loss behavior of an OFDM system and construct multiple independent descriptions using an FEC-based strategy. We provide some analysis of this cross-layer approach and demonstrate its superior performance using the set partitioning in hierarchical trees image coder.

  • An analysis of the efficiency of different SNR-scalable strategies for video coders

    Publication Year: 2006 , Page(s): 848 - 864
    Cited by:  Papers (6)

    In this paper, we analyze the efficiency of three signal-to-noise scalable strategies for video coders using single-loop motion-compensated prediction (MCP). In our analysis, we assume the video sequences have uniform and constant translational motion, and we model MCP as a stochastic filter. We also assume an exponential model for the distortion-rate function of the intraframe coding. The analysis is divided into two parts: the steady-state analysis and the transient analysis. In the first part, only the steady-state response of the coders is taken into account, and, thus, this analysis allows us to approximately assess the efficiency of coders with long input sequences. The transient analysis considers both the transient and the steady-state responses of the coders, which makes it appropriate for analyzing coders using periodic intraframes or with short input sequences. To validate our analysis, theoretical results have been compared to results from encodings of real video sequences using the scalable adaptive motion compensated wavelet video coder. We show that our theoretical analysis qualitatively describes the main trends of each video coding strategy.

  • Maximum-likelihood estimation of circle parameters via convolution

    Publication Year: 2006 , Page(s): 865 - 876
    Cited by:  Papers (10)  |  Patents (1)

    The accurate fitting of a circle to noisy measurements of circumferential points is a much studied problem in the literature. In this paper, we present an interpretation of the maximum-likelihood estimator (MLE) and the Delogne-Kasa estimator (DKE) for circle-center and radius estimation in terms of convolution on an image which is ideal in a certain sense. We use our convolution-based MLE approach to find good estimates for the parameters of a circle in digital images. These estimates can then be used as preliminary estimates for various other numerical techniques that further refine them to achieve subpixel accuracy. We also investigate the relationship between the convolution of an ideal image with a "phase-coded kernel" (PCK) and the MLE. This is related to the "phase-coded annulus" which was introduced by Atherton and Kerbyson, who proposed it as one of a number of new convolution kernels for estimating circle center and radius. We show that the PCK is an approximate MLE (AMLE). We compare our AMLE method to the MLE and the DKE as well as the Cramér-Rao lower bound in ideal images and in both real and synthetic digital images.

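    The convolution view of circle fitting can be pictured as a matched-filter search: convolve the edge map with a bank of soft annuli and take the strongest response over candidate centers and radii. The sketch below follows that spirit (close to the phase-coded-annulus idea discussed above) rather than reproducing the paper's exact MLE or AMLE kernels.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def annulus_kernel(radius, size, sigma=1.0):
        # Soft ring: a Gaussian profile around distance `radius` from the kernel center.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        return np.exp(-0.5 * ((np.hypot(xx, yy) - radius) / sigma) ** 2)

    def fit_circle(edge_map, radii):
        """edge_map: float array (e.g., a binarized edge detector output cast to float).
        Returns (row, col, radius) of the strongest annulus response."""
        edge_map = np.asarray(edge_map, float)
        size = 2 * int(max(radii)) + 5
        best = (-np.inf, (0, 0), 0)
        for radius in radii:
            score = fftconvolve(edge_map, annulus_kernel(radius, size), mode="same")
            idx = np.unravel_index(np.argmax(score), score.shape)
            if score[idx] > best[0]:
                best = (score[idx], idx, radius)
        _, (cy, cx), r = best
        return cy, cx, r
    ```
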
  • Correcting Curvature-Density Effects in the Hamilton–Jacobi Skeleton

    Publication Year: 2006 , Page(s): 877 - 891
    Cited by:  Papers (14)

    The Hamilton–Jacobi approach has proven to be a powerful and elegant method for extracting the skeleton of two-dimensional (2-D) shapes. The approach is based on the observation that the normalized flux associated with the inward evolution of the object boundary at nonskeletal points tends to zero as the size of the integration area tends to zero, while the flux is negative at the locations of skeletal points. Nonetheless, the error in calculating the flux on the image lattice is both limited by the pixel resolution and proportional to the curvature of the boundary evolution front and, hence, unbounded near endpoints. This makes exact localization of endpoints difficult and renders the performance of the skeleton extraction algorithm dependent on a threshold parameter. This problem can be overcome by using interpolation techniques to calculate the flux with subpixel precision. However, here we develop a method for 2-D skeleton extraction that circumvents the problem by eliminating the curvature contribution to the error. This is done by taking into account variations of density due to boundary curvature. This yields a skeletonization algorithm that gives both better localization and less susceptibility to boundary noise and parameter choice than the Hamilton–Jacobi method.

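    The flux computation that the abstract takes as its starting point (before the paper's curvature-density correction) can be sketched directly: take the Euclidean distance transform of the shape, normalize its gradient, and mark points where the divergence, i.e., the outward flux per unit area, is strongly negative. The threshold below plays exactly the role of the parameter the paper seeks to make less critical.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def flux_skeleton(mask, thresh=-0.4):
        """mask: boolean array, True inside the shape. Returns a raw (uncorrected)
        Hamilton-Jacobi-style skeleton estimate."""
        mask = np.asarray(mask, bool)
        dist = distance_transform_edt(mask)
        gy, gx = np.gradient(dist)
        norm = np.hypot(gx, gy) + 1e-12
        gx, gy = gx / norm, gy / norm
        # Divergence of the normalized gradient field: strongly negative values
        # indicate colliding fronts, i.e., candidate skeletal points.
        div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
        return mask & (div < thresh)
    ```
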
  • Multiple contour extraction from graylevel images using an artificial neural network

    Publication Year: 2006 , Page(s): 892 - 899
    Cited by:  Papers (15)

    For active contour modeling (ACM), we propose a novel self-organizing map (SOM)-based approach, called the batch-SOM (BSOM), that attempts to integrate the advantages of SOM- and snake-based ACMs in order to extract the desired contours from images. We employ feature points, in the form of an edge-map (as obtained from a standard edge-detection operation), to guide the contour (as in the case of SOM-based ACMs) along with the gradient and intensity variations in a local region to ensure that the contour does not "leak" into the object boundary in case of faulty feature points (weak or broken edges). In contrast with the snake-based ACMs, however, we do not use an explicit energy functional (based on gradient or intensity) for controlling the contour movement. We extend the BSOM to handle extraction of contours of multiple objects, by splitting a single contour into as many subcontours as there are objects in the image. The BSOM and its extended version are tested on synthetic binary and gray-level images with both single and multiple objects. We also demonstrate the efficacy of the BSOM on images of objects having both convex and nonconvex boundaries. The results demonstrate the superiority of the BSOM over the other approaches considered. Finally, we analyze the limitations of the BSOM.

  • A nonlinear image contrast sharpening approach based on Munsell's scale

    Publication Year: 2006 , Page(s): 900 - 909
    Cited by:  Papers (5)

    Contrast is a measure of the variation in intensity or gray value in a specified region of an image. The region can be most or all of the image, giving rise to a global concept of contrast. The region might, on the other hand, be a small window, in which case the concept of contrast is a locally defined expression. In this work, we introduce a nonlinear local contrast enhancement method. This method utilizes the Munsell value scale, which is based upon human visual perception. Use of the Munsell value scale allows for the partitioning of the gray scale into ten discrete subintervals. Subsequent local processing occurs within each of these subintervals. Inside each subinterval, this method constructs a contrast enhancement function that is a smooth approximation to the threshold step function and which maps a given subinterval into itself. This function then thresholds the gray values in a subinterval in a smooth manner about a locally computed quantity called the mean edge gray value. By enhancing the contrast in this way, the original shades of gray are preserved. That is, the groupings of the gray values by subinterval are preserved. As a result, no gray value distortion is introduced into the image.

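    A loose sketch of the mechanism described above, with assumed stand-ins: ten fixed subintervals of the gray scale, an edge-weighted local mean playing the role of the "mean edge gray value", and a logistic function as the smooth approximation to the threshold step. The paper's exact enhancement function and its Munsell-derived interval boundaries are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def sharpen_contrast(img, n_bins=10, gain=0.15, win=15):
        img = np.asarray(img, float)
        mag = np.hypot(sobel(img, axis=0), sobel(img, axis=1))     # edge strength
        # Edge-weighted local mean (stand-in for the "mean edge gray value").
        local_mean = uniform_filter(img * mag, win) / (uniform_filter(mag, win) + 1e-6)
        out = img.copy()
        bounds = np.linspace(0.0, 256.0, n_bins + 1)               # ten subintervals
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            m = (img >= lo) & (img < hi)
            if not m.any():
                continue
            pivot = np.clip(local_mean[m], lo, hi)
            # Smooth threshold about the pivot, mapped back into the same subinterval,
            # so the grouping of gray values by subinterval is preserved.
            s = 1.0 / (1.0 + np.exp(-gain * (img[m] - pivot)))
            out[m] = lo + s * (hi - lo)
        return out
    ```
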
  • The fuzzy transformation and its applications in image processing

    Publication Year: 2006 , Page(s): 910 - 927
    Cited by:  Papers (18)

    The spatial and rank (SR) orderings of samples play a critical role in most signal processing algorithms. The recently introduced fuzzy ordering theory generalizes traditional, or crisp, SR ordering concepts and defines the fuzzy (spatial) samples, fuzzy order statistics, fuzzy spatial indexes, and fuzzy ranks. Here, we introduce a more general concept, the fuzzy transformation (FZT), which refers to the mapping of the crisp samples, order statistics, and SR ordering indexes to their fuzzy counterparts. We establish the element invariant and order invariant properties of the FZT. These properties indicate that fuzzy spatial samples and fuzzy order statistics constitute the same set and, under commonly satisfied membership function conditions, the sample rank order is preserved by the FZT. The FZT also possesses clustering and symmetry properties, which are established through analysis of the distributions and expectations of fuzzy samples and fuzzy order statistics. These properties indicate that the FZT incorporates sample diversity into the ordering operation, which can be utilized in the generalization of conventional filters. Here, we establish the fuzzy weighted median (FWM), fuzzy lower-upper-middle (FLUM), and fuzzy identity filters as generalizations of their crisp counterparts. The advantage of the fuzzy generalizations is illustrated in the applications of DCT coded image deblocking, impulse removal, and noisy image sharpening.

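    For reference, the crisp weighted median that the fuzzy weighted median (FWM) generalizes fits in a few lines; the FWM itself replaces the crisp samples and ranks below with their fuzzy counterparts obtained through the FZT, which is beyond this sketch.

    ```python
    import numpy as np

    def weighted_median(samples, weights):
        # Crisp weighted median: sort the samples, accumulate their weights, and
        # return the first sample whose cumulative weight reaches half the total.
        samples = np.asarray(samples, float)
        weights = np.asarray(weights, float)
        order = np.argsort(samples)
        s, w = samples[order], weights[order]
        cum = np.cumsum(w)
        return s[np.searchsorted(cum, 0.5 * cum[-1])]

    # weighted_median([10, 200, 12, 11, 9], [1, 1, 3, 1, 1]) == 12.0:
    # the center sample's larger weight lets it dominate, and the impulse 200 is rejected.
    ```
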
  • A hybrid neuro-fuzzy filter for edge preserving restoration of images corrupted by impulse noise

    Publication Year: 2006 , Page(s): 928 - 936
    Cited by:  Papers (29)

    A new operator for restoring digital images corrupted by impulse noise is presented. The proposed operator is a hybrid filter obtained by appropriately combining a median filter, an edge detector, and a neuro-fuzzy network. The internal parameters of the neuro-fuzzy network are adaptively optimized by training. The training is easily accomplished by using simple artificial images that can be generated in a computer. The most distinctive feature of the proposed operator over most other operators is that it offers excellent line, edge, detail, and texture preservation performance while, at the same time, effectively removing noise from the input image. Extensive simulation experiments show that the proposed operator may be used for efficient restoration of digital images corrupted by impulse noise without distorting the useful information in the image.

  • Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors

    Publication Year: 2006 , Page(s): 937 - 951
    Cited by:  Papers (55)  |  Patents (8)

    Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.

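    The "nonnegative garrote" rule named above has a simple closed form, shown below applied to a vector of wavelet coefficients; the GSM characterization and the GEM deconvolution algorithm themselves are not reproduced.

    ```python
    import numpy as np

    def garrote_shrink(w, lam):
        # Nonnegative garrote: w -> w - lam**2 / w when |w| > lam, and 0 otherwise.
        # It sits between hard and soft thresholding: large coefficients are barely
        # shrunk, small ones are zeroed.
        w = np.asarray(w, float)
        safe = np.where(w == 0.0, 1.0, w)     # avoid 0/0; those outputs are 0 anyway
        return np.where(np.abs(w) > lam, w - lam ** 2 / safe, 0.0)

    # garrote_shrink([-5.0, -0.5, 0.0, 0.5, 5.0], lam=1.0) -> [-4.8, 0.0, 0.0, 0.0, 4.8]
    ```
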
  • Precise segmentation of multimodal images

    Publication Year: 2006 , Page(s): 952 - 968
    Cited by:  Papers (48)  |  Patents (2)

    We propose new techniques for unsupervised segmentation of multimodal grayscale images such that each region-of-interest relates to a single dominant mode of the empirical marginal probability distribution of grey levels. We follow the most conventional approaches in that initial images and desired maps of regions are described by a joint Markov-Gibbs random field (MGRF) model of independent image signals and interdependent region labels. However, our focus is on more accurate model identification. To better specify region borders, each empirical distribution of image signals is precisely approximated by a linear combination of Gaussians (LCG) with positive and negative components. We modify an expectation-maximization (EM) algorithm to deal with the LCGs and also propose a novel EM-based sequential technique to get a close initial LCG approximation with which the modified EM algorithm should start. The proposed technique identifies individual LCG models in a mixed empirical distribution, including the number of positive and negative Gaussians. Initial segmentation based on the LCG models is then iteratively refined by using the MGRF with analytically estimated potentials. The convergence of the overall segmentation algorithm at each stage is discussed. Experiments show that the developed techniques segment different types of complex multimodal medical images more accurately than other known algorithms.

  • Seamless image stitching by minimizing false edges

    Publication Year: 2006 , Page(s): 969 - 977
    Cited by:  Papers (35)  |  Patents (19)

    Various applications such as mosaicing and object insertion require stitching of image parts. The stitching quality is measured visually by the similarity of the stitched image to each of the input images, and by the visibility of the seam between the stitched images. In order to define and get the best possible stitching, we introduce several formal cost functions for the evaluation of the stitching quality. In these cost functions the similarity to the input images and the visibility of the seam are defined in the gradient domain, minimizing the disturbing edges along the seam. A good image stitching will optimize these cost functions, overcoming both photometric inconsistencies and geometric misalignments between the stitched images. We study the cost functions and compare their performance for different scenarios both theoretically and practically. Our approach is demonstrated in various applications including generation of panoramic images, object blending and removal of compression artifacts. Comparisons with existing methods show the benefits of optimizing the measures in the gradient domain.

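    The gradient-domain similarity term described above can be written down directly. The sketch below evaluates that term for one input image over its region; the seam-visibility term and the optimization over stitchings, which are the substance of the paper, are omitted.

    ```python
    import numpy as np

    def gradient_cost(stitched, source, mask):
        """Sum of squared gradient differences between the stitched result and one
        input image over the region (boolean mask) taken from that input. Summing
        this term over all inputs scores a stitching: false edges along the seam
        show up as large gradient mismatches."""
        gy_s, gx_s = np.gradient(np.asarray(stitched, float))
        gy_i, gx_i = np.gradient(np.asarray(source, float))
        diff = (gy_s - gy_i) ** 2 + (gx_s - gx_i) ** 2
        return float(diff[mask].sum())
    ```
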
  • Optimal temporal interpolation filter for motion-compensated frame rate up conversion

    Publication Year: 2006 , Page(s): 978 - 991
    Cited by:  Papers (16)  |  Patents (1)

    Frame rate up conversion (FRUC) methods that employ motion have been proven to provide better image quality compared to nonmotion-based methods. While motion-based methods improve the quality of interpolation, artifacts are introduced in the presence of incorrect motion vectors. In this paper, we study the design of an optimal temporal interpolation filter for motion-compensated FRUC (MC-FRUC). The optimal filter is obtained by minimizing the prediction error variance between the original frame and the interpolated frame. In FRUC applications, the original frame that is skipped is not available at the decoder, so models for the power spectral density of the original signal and prediction error are used to formulate the problem. The closed-form solution for the filter is obtained by Lagrange multipliers and statistical motion vector error modeling. The effect of motion vector errors on the resulting optimal filters and prediction error is analyzed. The performance of the optimal filter is compared to nonadaptive temporal averaging filters by using two different motion vector reliability measures. The results confirm that, to improve the quality of motion-compensated temporal interpolation, the interpolation filter should be designed based on the reliability of the motion vectors and the statistics of the MC prediction error.

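    The paper's conclusion, that the temporal interpolation filter should adapt to motion-vector reliability, can be caricatured as a per-pixel blend between motion-compensated interpolation and plain frame averaging. This captures the adaptive idea only; it is not the paper's closed-form optimal filter.

    ```python
    import numpy as np

    def interpolate_frame(prev_frame, next_frame, prev_mc, next_mc, reliability):
        """prev_mc / next_mc: previous and next frames warped along the motion field
        toward the skipped frame. reliability: motion-vector confidence in [0, 1],
        per pixel or per block (anything broadcastable)."""
        mc = 0.5 * (prev_mc + next_mc)            # motion-compensated interpolation
        avg = 0.5 * (prev_frame + next_frame)     # nonadaptive temporal averaging
        return reliability * mc + (1.0 - reliability) * avg
    ```
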
  • Asymptotically optimal blind estimation of multichannel images

    Publication Year: 2006 , Page(s): 992 - 1007
    Cited by:  Papers (8)

    Optimal estimation of a two-dimensional (2-D) multichannel signal ideally decorrelates the data in both channel and space and weights the resulting coefficients according to their SNR. Many scenarios exist in which the required second-order signal and noise statistics are not known, or in which the decorrelation is difficult or expensive to calculate. An asymptotically optimal estimation scheme proposed here uses a 2-D discrete wavelet transform to approximately decorrelate the signal in space and the discrete Fourier transform to decorrelate between channels. The coefficient weighting is replaced with a wavelet-domain thresholding operation to result in an efficient estimation scheme for both stationary and nonstationary signals. In contrast to optimal estimation, this new scheme does not require second-order signal statistics, making it well suited to many applications. In addition to providing vastly improved visual quality, the new estimator typically yields signal-to-noise ratio gains of 12 dB or higher for hyperspectral imagery and functional magnetic resonance images.

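    A rough sketch of the pipeline the abstract outlines, assuming a (channels, height, width) data cube and using PyWavelets: the DFT approximately decorrelates across channels, a 2-D wavelet transform approximately decorrelates in space, and a universal soft threshold stands in for the paper's coefficient rule.

    ```python
    import numpy as np
    import pywt

    def denoise_cube(cube, sigma, wavelet="db4", level=3):
        """cube: float array of shape (channels, H, W) with additive white noise of
        standard deviation sigma."""
        spec = np.fft.fft(cube, axis=0)                        # decorrelate across channels
        thr = sigma * np.sqrt(2.0 * np.log(cube[0].size))      # universal threshold (stand-in)

        def shrink_plane(plane):
            coeffs = pywt.wavedec2(plane, wavelet, level=level)    # spatial decorrelation
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
                for detail in coeffs[1:]
            ]
            rec = pywt.waverec2(coeffs, wavelet)
            return rec[: plane.shape[0], : plane.shape[1]]

        out = np.empty_like(spec)
        for c in range(spec.shape[0]):
            out[c] = shrink_plane(spec[c].real) + 1j * shrink_plane(spec[c].imag)
        return np.fft.ifft(out, axis=0).real
    ```
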
  • Motion measurement errors and autofocus in bistatic SAR

    Publication Year: 2006 , Page(s): 1008 - 1016
    Cited by:  Papers (14)

    This paper discusses the effect of motion measurement errors (MMEs) on measured bistatic synthetic aperture radar (SAR) phase history data that has been motion compensated to the scene origin. We characterize the effect of low-frequency MMEs on bistatic SAR images, and, based on this characterization, we derive limits on the allowable MMEs to be used as system specifications. Finally, we demonstrate that proper orientation of a bistatic SAR image during the image formation process allows application of monostatic SAR autofocus algorithms in postprocessing to mitigate image defocus.

  • Relevance feedback for CBIR: a new approach based on probabilistic feature weighting with positive and negative examples

    Publication Year: 2006 , Page(s): 1017 - 1030
    Cited by:  Papers (23)  |  Patents (1)

    In content-based image retrieval, understanding the user's needs is a challenging task that requires integrating the user into the retrieval process. Relevance feedback (RF) has proven to be an effective tool for taking the user's judgement into account. In this paper, we present a new RF framework based on a feature selection algorithm that combines the advantages of a probabilistic formulation with those of using both the positive example (PE) and the negative example (NE). Through interaction, our algorithm learns the importance the user assigns to image features, and then applies the results obtained to define similarity measures that correspond better to this judgement. The use of the NE allows images undesired by the user to be discarded, thereby improving retrieval accuracy. The probabilistic formulation of the problem presents several advantages and opens the door to further modeling possibilities that achieve good feature selection. It makes it possible to cluster the query data into classes, choose the probability law that best models each class, model missing data, and support queries with multiple PE and/or NE classes. The basic principle of our algorithm is to assign more importance to features with a high likelihood and those which distinguish well between PE classes and NE classes. The proposed algorithm was validated separately and in an image retrieval context, and the experiments show that it performs good feature selection and contributes to improving retrieval effectiveness.

  • An RBF-based compression method for image-based relighting

    Publication Year: 2006 , Page(s): 1031 - 1041
    Cited by:  Papers (10)

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

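    The first-level representation described above, a pixel's directional radiance written as a weighted sum of spherical radial basis functions, can be evaluated as below with a spherical-Gaussian-style basis. The exact basis, the constrained weight fitting, and the second-level wavelet coding of the weights are not covered, and the kernel form is an assumption for illustration.

    ```python
    import numpy as np

    def srbf_radiance(light_dir, centers, widths, weights):
        """Evaluate one pixel's plenoptic function for a given light direction.
        centers: (K, 3) unit vectors; widths, weights: (K,) sharpness parameters
        and fitted SRBF weights."""
        d = np.asarray(light_dir, float)
        d = d / np.linalg.norm(d)
        cos_angle = np.asarray(centers, float) @ d
        # Kernel peaks when the light direction aligns with an SRBF center and
        # falls off with the angle between them.
        basis = np.exp(np.asarray(widths, float) * (cos_angle - 1.0))
        return float(np.asarray(weights, float) @ basis)
    ```
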
  • Lossless watermarking for image authentication: a new framework and an implementation

    Publication Year: 2006 , Page(s): 1042 - 1049
    Cited by:  Papers (66)

    We present a novel framework for lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon verification. As opposed to earlier lossless authentication methods that required reconstruction of the original image prior to validation, the new framework allows validation of the watermarked images before recovery of the original image. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not needed. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the reconstruction procedure. The framework also enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization. Effectiveness of the framework is demonstrated by implementing the framework using hierarchical image authentication along with lossless generalized-least significant bit data embedding.

  • Erratum

    Publication Year: 2006 , Page(s): 1050
  • IEEE Transactions on Image Processing Edics

    Publication Year: 2006 , Page(s): 1051
    Freely Available from IEEE
  • IEEE Transactions on Image Processing Information for authors

    Publication Year: 2006 , Page(s): 1052 - 1053
    Freely Available from IEEE

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003