
Image Processing, IEEE Transactions on

Issue 10 • Date Oct. 2006


Displaying Results 1 - 25 of 41
  • Table of contents

    Page(s): c1 - c4
    Freely Available from IEEE
  • IEEE Transactions on Image Processing publication information

    Page(s): c2
    Freely Available from IEEE
  • Ballistics Projectile Image Analysis for Firearm Identification

    Page(s): 2857 - 2865

    This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it is possible to identify not only the type and model of a firearm, but also each individual weapon, as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed. The paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. Experiments are performed on images acquired from 16 different weapons. The results show that the proposed system can identify firearms efficiently and precisely by digitizing and analyzing the fired projectile specimens.

  • Optimal Spatial Adaptation for Patch-Based Image Denoising

    Page(s): 2866 - 2878

    A novel adaptive, patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in a variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation against the stochastic error at each spatial position. The method is general and can be applied under the assumption that repetitive patterns exist in a local neighborhood of a point. By introducing spatial adaptivity, we extend the earlier work of Buades et al., which can be considered an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and its performance is very close to, and in some cases surpasses, that of previously published denoising methods.

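The weighted-sum-of-patches idea in the abstract above can be illustrated with a minimal non-local-means-style sketch. This is not the authors' adaptive-neighborhood selection rule; the function name and the parameters (`patch`, `search`, `h`) are ours, and the neighborhood here is a fixed search window.

```python
import numpy as np

def patch_weighted_denoise(img, patch=3, search=5, h=10.0):
    """Denoise each pixel as a weighted sum of pixels in a search window,
    with weights given by patch similarity (the basic idea the paper
    builds on; its adaptive neighborhood selection is omitted here)."""
    pad = patch // 2
    padded = np.pad(img.astype(float), pad + search, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad + search, j + pad + search
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            num = den = 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    # similar patches get weights near 1, dissimilar near 0
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```

Averaging with similarity weights leaves constant regions untouched while reducing the variance of additive noise.
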
  • Discriminative Analysis of Lip Motion Features for Speaker Identification and Speech-Reading

    Page(s): 2879 - 2891

    There have been several studies that jointly use audio, lip intensity, and lip geometry information for speaker identification and speech-reading applications. This paper proposes using explicit lip motion information, instead of or in addition to lip intensity and/or geometry information, for speaker identification and speech-reading within a unified feature selection and discrimination analysis framework, and addresses two important issues: 1) is using explicit lip motion information useful, and 2) if so, what are the best lip motion features for these two applications? The best lip motion features for speaker identification are considered to be those that result in the highest discrimination of individual speakers in a population, whereas for speech-reading, the best features are those providing the highest phoneme/word/phrase recognition rate. Several lip motion feature candidates are considered, including dense motion features within a bounding box about the lip, lip contour motion features, and combinations of these with lip shape features. Furthermore, a novel two-stage spatial and temporal discrimination analysis is introduced to select the best lip motion features for both applications. Experimental results using a hidden-Markov-model-based recognition system indicate that explicit lip motion information provides additional performance gains in both applications, and that lip motion features prove more valuable in the speech-reading application.

  • Oriented Wavelet Transform for Image Compression and Denoising

    Page(s): 2892 - 2903

    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a one-dimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quadtree. The rate allocation between the orientation map and the wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved, such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to that observed with a separable wavelet transform. The rate-distortion performance of the new transform is evaluated for image coding using state-of-the-art subband coders, and its denoising performance is assessed against other transforms and denoising methods.

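The perfect-reconstruction claim above rests on the reversibility of lifting: each predict/update step can be undone exactly by running it backwards. A one-level Haar lifting pair sketches this (the orientation-adaptive quincunx machinery of the paper is omitted; function names are ours).

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: predict, then update.
    Every lifting step is trivially invertible, which is what guarantees
    perfect reconstruction regardless of how the steps are oriented."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: detail = odd minus its prediction
    s = even + d / 2.0      # update: smooth = pairwise mean
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    even = s - d / 2.0      # undo update
    odd = d + even          # undo predict
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Round-tripping any signal through the two functions returns it exactly, up to floating-point precision.
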
  • Asymptotic Global Confidence Regions for 3-D Parametric Shape Estimation in Inverse Problems

    Page(s): 2904 - 2919

    This paper derives fundamental performance bounds for statistical estimation of parametric surfaces embedded in ℝ³. Unlike conventional pixel-based image reconstruction approaches, our problem is reconstruction of the shape of binary or homogeneous objects. The fundamental uncertainty of such estimation problems can be represented by global confidence regions, which facilitate geometric inference and optimization of the imaging system. Compared to our previous work on global confidence region analysis for curves [two-dimensional (2-D) shapes], computing the probability that the entire surface estimate lies within the confidence region is more challenging because a surface estimate is an inhomogeneous random field continuously indexed by a 2-D variable. We derive an asymptotic lower bound on this probability by relating it to the exceedence probability of a higher dimensional Gaussian random field, which can, in turn, be evaluated using the tube formula due to Sun. Simulation results demonstrate the tightness of the resulting bound and the usefulness of the three-dimensional global confidence region approach.

  • Fusion of Hidden Markov Random Field Models and Its Bayesian Estimation

    Page(s): 2920 - 2935

    In this paper, we present a hidden Markov random field (HMRF) data-fusion model. The proposed model is applied to the segmentation of natural images based on the fusion of colors and textons into Julesz ensembles. The corresponding Exploration/Selection/Estimation (ESE) procedure for estimating the parameters is presented. This method estimates the parameters of the Gaussian kernels, the mixture proportions, the region labels, the number of regions, and the Markov hyperparameter. We also present a new proof of the asymptotic convergence of the ESE procedure, based on original finite-time bounds for the rate of convergence.

  • Robust and Efficient Image Alignment Based on Relative Gradient Matching

    Page(s): 2936 - 2943

    In this paper, we present a robust image alignment algorithm based on matching of relative gradient maps. The algorithm consists of two stages: a learning-based approximate pattern search and an iterative energy-minimization procedure for matching relative image gradients. The first stage finds candidate poses of the pattern in the image through a fast nearest-neighbor search over a training database of relative gradient feature vectors, which are obtained by synthesizing geometrically transformed template images with transformation parameters uniformly sampled from a given parameter space. The candidate poses are then verified and refined by matching the relative gradient images through an iterative energy-minimization procedure. Because it matches relative gradients, the approach is robust against nonuniform illumination variations. Experimental results on both simulated and real images demonstrate the superior efficiency and robustness of the proposed algorithm over the conventional normalized correlation method.

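The illumination robustness claimed above comes from normalizing the gradient by a local statistic of its own magnitude, so that a multiplicative lighting change cancels in the ratio. A minimal sketch, assuming one common definition (gradient magnitude over a 3x3 local mean of gradient magnitude); the paper's exact normalization may differ, and the function name and `eps` regularizer are ours.

```python
import numpy as np

def relative_gradient(img, eps=1.0):
    """Gradient magnitude normalized by a 3x3 local mean of gradient
    magnitude. Scaling the image by a constant rescales numerator and
    denominator alike, so the map is (nearly) illumination invariant."""
    f = img.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = (f[:, 2:] - f[:, :-2]) / 2.0   # central differences
    gy[1:-1, :] = (f[2:, :] - f[:-2, :]) / 2.0
    mag = np.hypot(gx, gy)
    p = np.pad(mag, 1, mode="edge")              # 3x3 box mean
    local = sum(p[i:i + mag.shape[0], j:j + mag.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    return mag / (local + eps)
```

With a small `eps`, multiplying the image by any positive constant leaves the map essentially unchanged.
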
  • Color Demosaicing Using Variance of Color Differences

    Page(s): 2944 - 2955

    This paper presents an adaptive demosaicing algorithm. Missing green samples are first estimated based on the variances of the color differences along different edge directions. The missing red and blue components are then estimated based on the interpolated green plane. The algorithm effectively preserves detail in texture regions and, at the same time, significantly reduces color artifacts. Compared with the latest demosaicing algorithms, the proposed algorithm produces the best average demosaicing performance both objectively and subjectively.

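The direction-selection idea above (interpolate green along the direction of least variation, i.e. along edges rather than across them) can be sketched in a toy form. This is a heavily simplified stand-in for the paper's variance-of-color-differences rule; the function name and the 5x5-window convention are ours.

```python
import numpy as np

def interpolate_green(window):
    """Estimate the missing green value at the center of a 5x5 Bayer
    window (center pixel red/blue, green at its 4-neighbors): compute a
    horizontal and a vertical candidate, then keep the one from the
    direction along which the intensity varies least."""
    w = np.asarray(window, dtype=float)
    gh = (w[2, 1] + w[2, 3]) / 2.0          # horizontal green average
    gv = (w[1, 2] + w[3, 2]) / 2.0          # vertical green average
    var_h = np.var(np.diff(w[2, :]))        # variation along the row
    var_v = np.var(np.diff(w[:, 2]))        # variation along the column
    return gh if var_h <= var_v else gv
```

On a horizontal edge the row through the center is flat, so the horizontal candidate is chosen and no averaging across the edge occurs; the transposed case picks the vertical candidate.
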
  • An Optimal Fuzzy System for Color Image Enhancement

    Page(s): 2956 - 2966

    A Gaussian membership function is proposed to fuzzify the image information in the spatial domain. We introduce a global contrast intensification operator (GINT) with three parameters, viz., the intensification parameter t, the fuzzifier fh, and the crossover point μc, for the enhancement of color images. We define a fuzzy contrast-based quality factor Qf and an entropy-based quality factor Qe, together with the corresponding visual factors for the desired appearance of images. By minimizing the fuzzy entropy of the image information with respect to these quality factors, the parameters t, fh, and μc are calculated globally. Using the proposed technique, a visible improvement in image quality is observed for underexposed images, as the entropy of the output image is decreased. The terminating criterion is decided by both the visual and quality factors. For overexposed and mixed (under- plus overexposed) images, the proposed fuzzification function is modified by taking the maximum intensity as a fourth parameter. The type of image is indicated by the visual factor, which is less than 1 for underexposed images and greater than 1 for overexposed images.

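The fuzzify/intensify/defuzzify pipeline described above can be sketched in a toy form. The parameter names (t, fh, μc) follow the abstract, but the particular Gaussian membership and the power-law intensification used here are our assumptions, and the entropy-based selection of the parameter values is omitted entirely.

```python
import numpy as np

def fuzzy_enhance(img, t=2.0, fh=80.0, muc=0.5):
    """Toy fuzzy contrast intensification: map intensities to [0, 1]
    memberships with a Gaussian function, push memberships away from the
    crossover point muc, then map back to intensities."""
    x = np.asarray(img, dtype=float)
    xmax = x.max()
    mu = np.exp(-((xmax - x) ** 2) / (2.0 * fh ** 2))        # fuzzification
    # intensification: values above the crossover rise, values below fall
    enh = np.where(mu >= muc,
                   1.0 - (1.0 - mu) ** t / (1.0 - muc) ** (t - 1.0),
                   mu ** t / muc ** (t - 1.0))
    out = xmax - fh * np.sqrt(-2.0 * np.log(np.clip(enh, 1e-12, 1.0)))
    return np.clip(out, 0.0, xmax)                           # defuzzification
```

The net effect is a global contrast stretch: pixels whose membership exceeds the crossover are brightened, the rest are darkened, and the brightest pixel is left fixed.
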
  • Linear, Worst-Case Estimators for Denoising Quantization Noise in Transform Coded Images

    Page(s): 2967 - 2986

    Transform-coded images exhibit distortions that fall outside the assumptions of traditional denoising techniques. In this paper, we use tools from robust signal processing to construct linear, worst-case estimators for denoising transform-compressed images. We show that while standard denoising is fundamentally determined by statistical models for images alone, the distortions induced by transform coding depend heavily on the structure of the transform used. Our method therefore uses simple models for the image and for the quantization error, with the latter capturing the transform dependency. Based on these models, we derive linear estimators of the original image that are optimal in the mean-squared-error sense for the worst-case cross correlation between the original image and the quantization error. Our construction is transform agnostic and is applicable to transforms ranging from block discrete cosine transforms to wavelets. Furthermore, our approach accommodates different types of image statistics and can also serve as an optimization tool for the design of transforms and quantizers. Through the interaction of the source and quantizer models, our work provides useful insights and is instrumental in identifying and removing quantization artifacts from general signals coded with general transforms. Because we decouple the modeling and processing steps, many different types of estimators can be constructed depending on the desired sophistication and the available computational complexity. At the low end of this spectrum, our lookup-table-based estimator, which can be deployed in low-complexity environments, provides PSNR values competitive with some of the best results in the literature.

  • Bayesian Restoration Using a New Nonstationary Edge-Preserving Image Prior

    Page(s): 2987 - 2997

    In this paper, we propose a class of image restoration algorithms based on the Bayesian approach and a new hierarchical, spatially adaptive image prior. The proposed prior has two desirable features. First, it models local image discontinuities in different directions with a continuous-valued model; it thus preserves edges and generalizes the on/off (binary) line-process idea used in previous image priors within the context of Markov random fields (MRFs). Second, it is Gaussian in nature and provides estimates that are easy to compute. Using this new hierarchical prior, two restoration algorithms are derived: the first based on the maximum a posteriori principle and the second on the Bayesian methodology. Numerical experiments compare the proposed algorithms with each other and with previous stationary and nonstationary MRF-based algorithms with line processes, and demonstrate the advantages of the proposed prior.

  • Multidimensional Multichannel FIR Deconvolution Using Gröbner Bases

    Page(s): 2998 - 3007

    We present a new method for general multidimensional multichannel deconvolution with finite impulse response (FIR) convolution and deconvolution filters using Gröbner bases. Previous work formulates multichannel FIR deconvolution as the construction of a left inverse of the convolution matrix, which is solved by numerical linear algebra; however, this approach requires prior knowledge of the support of the deconvolution filters. Using algebraic geometry and Gröbner bases, we find necessary and sufficient conditions for the existence of exact FIR deconvolution filters and propose simple algorithms to find them. The main contribution of our work is to extend previous Gröbner basis results on multidimensional multichannel deconvolution for polynomial or causal filters to general FIR filters. The proposed algorithms obtain a set of FIR deconvolution filters with a small number of nonzero coefficients (a desirable feature in impulsive noise environments) and do not require prior knowledge of the support. Moreover, we provide a complete characterization of all exact FIR deconvolution filters, from which good FIR deconvolution filters under additive white noise are found. Simulation results show that our approaches achieve good results under different noise settings.

  • Estimation of Two-Dimensional Affine Transformations Through Polar Curve Matching and Its Application to Image Mosaicking and Remote-Sensing Data Registration

    Page(s): 3008 - 3019

    This paper presents a new and effective method for estimating two-dimensional affine transformations and its application to image registration. The method is based on matching polar curves obtained from the radial projections of the image energies, defined as the squared magnitudes of their Fourier transforms. This matching is formulated as a simple minimization problem whose optimal solution is found with the Levenberg-Marquardt algorithm. The analysis of affine transformations in the frequency domain exploits the well-known property whereby the translational displacement can be factored out and estimated separately through phase correlation, after the four remaining degrees of freedom of the affine warping have been determined. Another important contribution of this paper, illustrated through one example of image mosaicking and one of remote sensing image registration, is to show that affine motion can be accurately estimated by applying our algorithm to the shapes of macrofeatures extracted from the images to be registered. The excellent performance of the algorithm is also shown through a synthetic motion-estimation example and a comparison with another standard registration technique.

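The phase-correlation step mentioned above recovers a translation from the phase of the cross-power spectrum of the two images. A minimal sketch for cyclic integer shifts (the function name is ours; subpixel refinement and the affine part of the paper are out of scope):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer shift s such that b = np.roll(a, s).
    A pure translation only changes the phase of the spectrum, so
    normalizing the cross-power spectrum to unit magnitude and inverse
    transforming yields a delta at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.maximum(np.abs(R), 1e-12)          # keep the phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks past the midpoint into negative shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Because only the phase is kept, the estimate is insensitive to global intensity scaling of either image.
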
  • Three-Dimensional Nonlinear Invisible Boundary Detection

    Page(s): 3020 - 3032

    The human vision system can discriminate only regions that differ in statistics up to second order. We present an algorithm designed to reveal "hidden" boundaries in gray-level images by computing gradients in higher order statistics of the data. We demonstrate it by applying it to the identification of possible "hidden" boundaries of glioblastomas as they manifest themselves in three-dimensional (3-D) MRI scans, using a model-driven approach. We also demonstrate the method using a non-model-driven approach, in which we have no prior information about the location of possible boundaries; in this case, we use 3-D MRI data from schizophrenic patients and normal controls.

  • Hierarchical Stochastic Image Grammars for Classification and Segmentation

    Page(s): 3033 - 3052

    We develop a new class of hierarchical stochastic image models called spatial random trees (SRTs), which admit polynomial-complexity exact inference algorithms. Our framework of multitree dictionaries is the starting point for this construction. SRTs are stochastic hidden tree models whose leaves are associated with image data. The states at the tree nodes are random variables and, in addition, the structure of the tree is random and is generated by a probabilistic grammar. We describe an efficient recursive algorithm for obtaining the maximum a posteriori estimate of both the tree structure and the tree states given an image. We also develop an efficient procedure for performing one iteration of the expectation-maximization algorithm and use it to estimate the model parameters from a set of training images. We address other inference problems arising in applications, such as maximization of posterior marginals and hypothesis testing. Our models and algorithms are illustrated through several image classification and segmentation experiments, ranging from the segmentation of synthetic images to the classification of natural photographs and the segmentation of scanned documents. In each case, we show that our method substantially improves accuracy over a variety of existing methods.

  • Local Image Registration by Adaptive Filtering

    Page(s): 3053 - 3065

    We propose a new adaptive filtering framework for local image registration, which compensates for the effect of local distortions/displacements without explicitly estimating a distortion/displacement field. To this end, we formulate local image registration as a two-dimensional (2-D) system identification problem with spatially varying system parameters, and utilize a 2-D adaptive filtering framework, including a new block adaptive filtering scheme, to identify the locally varying parameters. We discuss the conditions under which the adaptive filter coefficients conform to a local displacement vector at each pixel. Experimental results demonstrate that the proposed 2-D adaptive filtering framework is very successful in modeling and compensating for both local distortions, such as StirMark attacks, and local motion, such as in the presence of a parallax field. In particular, we show that the proposed method can provide image registration to: a) enable reliable detection of watermarks following a StirMark attack in nonblind detection scenarios, b) compensate for lens distortions, and c) align multiview images with nonparametric local motion.

  • Image and Texture Segmentation Using Local Spectral Histograms

    Page(s): 3066 - 3077

    We present a method for segmenting images consisting of texture and nontexture regions based on local spectral histograms. Defined as a vector of marginal distributions of chosen filter responses, the local spectral histogram provides a feature statistic for both types of regions. Using local spectral histograms of homogeneous regions, we decompose the segmentation process into three stages. The first is an initial classification stage, where probability models for homogeneous texture and nontexture regions are derived and an initial segmentation is obtained by classifying local windows. In the second stage, an algorithm iteratively updates the segmentation using the derived probability models. The third is a boundary localization stage, where region boundaries are localized by building refined probability models that are sensitive to spatial patterns in the segmented regions. We present segmentation results on texture as well as nontexture images; comparison with other methods shows that the proposed method produces more accurate segmentations.

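The feature defined above (a concatenation of marginal histograms of filter responses over a window) is straightforward to compute. A minimal sketch, assuming a tiny filter bank of our own choosing (raw intensity plus horizontal and vertical first differences); the paper's filter bank and bin settings differ.

```python
import numpy as np

def spectral_histogram(window, bins=8):
    """Local spectral histogram of an image window: the concatenated,
    normalized marginal histograms of each filter's response over the
    window. Serves as a feature statistic for texture and nontexture
    regions alike."""
    w = np.asarray(window, dtype=float)
    responses = [
        w.ravel(),                          # intensity "filter"
        np.diff(w, axis=1).ravel(),         # horizontal first difference
        np.diff(w, axis=0).ravel(),         # vertical first difference
    ]
    feats = []
    for r in responses:
        h, _ = np.histogram(r, bins=bins)
        feats.append(h / h.sum())           # normalize to a distribution
    return np.concatenate(feats)
```

Windows drawn from different textures yield visibly different signatures, which is what the classification stage exploits.
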
  • A Fast and Effective Model for Wavelet Subband Histograms and Its Application in Texture Image Retrieval

    Page(s): 3078 - 3088

    This paper presents a novel, effective, and efficient characterization of wavelet subbands by bit-plane extraction. Each bit plane is associated with a probability that represents the frequency of 1-bit occurrence, and the concatenation of all the bit-plane probabilities forms our new image signature. Such a signature can be extracted directly from the code-block code-stream, rather than from the dequantized wavelet coefficients, making our method particularly suitable for image retrieval in the compressed domain, such as JPEG2000-format images. Our signatures have a smaller storage requirement and lower computational complexity, and yet experimental results on texture image retrieval show that they are much more cost-effective than current state-of-the-art methods, including generalized Gaussian density signatures and histogram signatures.

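The signature recipe in the abstract (per-bit-plane probability of a 1-bit, concatenated over planes) is easy to sketch on integer-quantized coefficients. Extraction directly from a JPEG2000 code-stream, as the paper does, is not shown; the function name and `nbits` convention are ours.

```python
import numpy as np

def bitplane_signature(coeffs, nbits=8):
    """For each bit plane of the integer coefficient magnitudes, record
    the fraction of coefficients whose bit is 1; the concatenation of the
    per-plane probabilities (most significant plane first) is the image
    signature."""
    q = np.abs(np.asarray(coeffs)).astype(np.int64)
    sig = [float(((q >> b) & 1).mean()) for b in range(nbits - 1, -1, -1)]
    return np.array(sig)
```

The signature has one number per bit plane, so its size is independent of the subband size.
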
  • The Nonsubsampled Contourlet Transform: Theory, Design, and Applications

    Page(s): 3089 - 3101

    In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the à trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design conditions of the NSFB to design filters that lead to an NSCT with better frequency selectivity and regularity than the contourlet transform. We propose a design framework based on the mapping approach that allows for a fast implementation based on a lifting or ladder structure, and in some cases uses only one-dimensional filtering. In addition, our design ensures that the corresponding frame elements are regular and symmetric, and that the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications; in both, the NSCT compares favorably to existing methods in the literature.

  • Motion Compensation Via Redundant-Wavelet Multihypothesis

    Page(s): 3102 - 3113

    Multihypothesis motion compensation has been widely used in video coding, with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.

  • Embedded Multiple Description Coding of Video

    Page(s): 3114 - 3130

    Real-time delivery of video over best-effort, error-prone packet networks requires scalable erasure-resilient compression systems in order to 1) meet users' requirements in terms of quality, resolution, and frame rate; 2) dynamically adapt the rate to the available channel capacity; and 3) provide robustness to data losses, as retransmission is often impractical. Furthermore, the employed erasure-resilience mechanisms should themselves be scalable, so that the degree of resiliency can be adapted to varying channel conditions. Driven by these constraints, we propose a novel design for scalable erasure-resilient video coding that couples the compression efficiency of the open-loop architecture with the robustness provided by multiple description coding. In our approach, scalability and packet-erasure resilience are jointly provided via embedded multiple description scalar quantization. Furthermore, a novel channel-aware rate-allocation technique is proposed that allows shaping the output bit rate and the degree of resiliency on the fly, without resorting to channel coding. As a result, robustness to data losses is traded for better visual quality when transmission occurs over reliable channels, while erasure resilience is introduced when noisy links are involved. Numerical results clearly demonstrate the advantages of the proposed approach over equivalent codec instantiations employing 1) no erasure-resilience mechanisms, 2) erasure resilience with nonscalable redundancy, or 3) data-partitioning principles.

  • Uncertainty Estimation by Convolution Using Spatial Statistics

    Page(s): 3131 - 3137

    Kriging has proven to be a useful tool in image processing since, under regular sampling, it behaves as a convolution. Convolution kernels obtained with kriging allow noise filtering and account for both the random fluctuations of the experimental data and the resolution of the measuring devices. The uncertainty at each location of the image can also be determined by kriging; however, this procedure is slow since, currently, only matrix methods are available. In this work, we compare the way kriging performs uncertainty estimation with the standard statistical technique for magnitudes without spatial dependence. As a result, we propose a much faster technique, based on the variogram, that determines the uncertainty by a convolutional procedure. We check the validity of this approach by applying it to one-dimensional images obtained in diffractometry and two-dimensional images obtained by shadow moiré.

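The variogram that the abstract builds its fast uncertainty estimate on is a simple spatial statistic. A minimal sketch for a regularly sampled 1-D signal (the function name is ours; the paper's convolutional uncertainty machinery built on top of it is not shown):

```python
import numpy as np

def empirical_variogram_1d(z, max_lag):
    """Empirical semivariogram of a regularly sampled 1-D signal:
    gamma(h) = mean((z[i+h] - z[i])**2) / 2 for each lag h = 1..max_lag.
    It summarizes how quickly the data decorrelate with distance."""
    z = np.asarray(z, dtype=float)
    return np.array([np.mean((z[h:] - z[:-h]) ** 2) / 2.0
                     for h in range(1, max_lag + 1)])
```

For a signal that alternates between 0 and 1, consecutive samples always differ (gamma(1) = 0.5) while samples two steps apart always agree (gamma(2) = 0), which the test below confirms.
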
  • Improvement of Color Video Demosaicking in Temporal Domain

    Page(s): 3138 - 3151

    Color demosaicking is critical to the image quality of digital still and video cameras that use a single sensor array. Limited by the mosaic sampling pattern of the color filter array (CFA), color artifacts may occur in a demosaicked image in areas of high-frequency and/or sharp color-transition structures. However, a color digital video camera captures a sequence of mosaic images, and the temporal dimension of the color signals provides a rich source of information about the scene via camera and object motions. This paper proposes an inter-frame demosaicking approach that takes advantage of all three forms of pixel correlation: spatial, spectral, and temporal. Through motion estimation and statistical data fusion between adjacent mosaic frames, the new approach removes much of the color artifacts that survive intra-frame demosaicking and also improves tone-reproduction accuracy. Empirical results show that the proposed inter-frame demosaicking approach consistently outperforms its intra-frame counterparts in both peak signal-to-noise ratio and subjective visual quality.


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003