IEEE Transactions on Image Processing

Issue 11 • Nov. 2006

Displaying Results 1 - 25 of 41
  • Table of contents

    Page(s): c1 - c4
    PDF (44 KB)
    Freely Available from IEEE
  • IEEE Transactions on Image Processing publication information

    Page(s): c2
    PDF (36 KB)
    Freely Available from IEEE
  • Error-Resilient Video Communications Over CDMA Networks With a Bandwidth Constraint

    Page(s): 3241 - 3252
    PDF (493 KB) | HTML

    We present an adaptive video transmission scheme for use in a code-division multiple-access (CDMA) network that incorporates efficient bandwidth allocation among source coding, channel coding, and spreading under a fixed total bandwidth constraint. We derive the statistics of the received signal, as well as a theoretical bound on the packet drop rate at the receiver. Based on these results, a packet-level bandwidth allocation algorithm is proposed that accounts for both the changing channel conditions and the dynamics of the source content. Detailed simulations evaluate the performance of the system, and its sensitivity to estimation error is presented.

  • On the Use of Context-Weighting in Lossless Bilevel Image Compression

    Page(s): 3253 - 3260
    PDF (2674 KB) | HTML

    We present a context-weighting algorithm that adaptively weights three context models in real time based on their relative accuracy. It automatically selects the better model over different regions of an image, producing better probability estimates than any one of these models used exclusively. Combined with the previously proposed block arithmetic coder for image compression (BACIC), the overall performance is slightly better than JBIG on the eight CCITT business-type test images, outperforms JBIG by 13.8% on halftone images, and by 17.5% on compound images containing both text and halftones. Furthermore, users no longer need to select a model, as they do in JBIG and BACIC, to obtain the better performance.
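
    As an editorial illustration of the core idea, here is a minimal Python sketch of adaptively weighting several context models via Bayesian mixing of their past predictive performance; the class and update rule are assumptions for exposition, not the paper's exact algorithm.

        import math

        class ContextMixer:
            """Mix binary probability estimates from several context models."""

            def __init__(self, n_models):
                self.log_w = [0.0] * n_models  # log-weights, initially uniform

            def mix(self, probs_of_one):
                # Weighted average of the models' estimates of P(bit = 1).
                m = max(self.log_w)
                w = [math.exp(lw - m) for lw in self.log_w]
                s = sum(w)
                return sum(wi * p for wi, p in zip(w, probs_of_one)) / s

            def update(self, probs_of_one, bit):
                # Reward each model by the probability it assigned to the
                # bit actually observed (Bayesian mixture weighting).
                for i, p in enumerate(probs_of_one):
                    p_bit = p if bit == 1 else 1.0 - p
                    self.log_w[i] += math.log(max(p_bit, 1e-12))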

  • Reversing Demosaicking and Compression in Color Filter Array Image Processing: Performance Analysis and Modeling

    Page(s): 3261 - 3278
    PDF (3253 KB) | HTML

    In the conventional processing chain of single-sensor digital still cameras (DSCs), images are captured with color filter arrays (CFAs), and the CFA samples are demosaicked into a full-color image before compression. To avoid the additional data redundancy created by the demosaicking process, an alternative processing chain has been proposed that moves compression before demosaicking. Recent empirical studies have shown that the alternative chain can outperform the conventional one in terms of image quality at low compression ratios. To provide a theoretically sound basis for this conclusion, we propose analytical models for the reconstruction errors of the two processing chains. The models confirm the results of existing empirical studies and provide a better understanding of DSC processing chains. The modeling also allows performance predictions for more advanced compression and demosaicking methods, providing important cues for future development in this area.

  • Precompression Quality-Control Algorithm for JPEG 2000

    Page(s): 3279 - 3293
    PDF (987 KB) | HTML

    In this paper, a precompression quality-control algorithm is proposed. It can greatly reduce the computational cost of embedded block coding (EBC) and the memory required to buffer bit streams. Using the propagation and randomness properties of the EBC algorithm, the rate and distortion of coding passes are predicted approximately, so the truncation points are chosen before actual coding by the entropy coder. The computational cost, measured by the number of contexts to be processed, is greatly reduced since most of the computations are skipped. The memory requirement, measured by the amount of buffered bit-stream data, is also reduced since the skipped contexts do not generate bit streams. Experimental results show that the proposed algorithm reduces the computational cost of the EBC by 80% on average at 0.8 bpp compared with the conventional postcompression rate-distortion optimization algorithm. Moreover, the memory requirement is reduced by 90%, while the PSNR degrades by only about 0.1 to 0.3 dB on average.

  • Quadratic Weighted Median Filters for Edge Enhancement of Noisy Images

    Page(s): 3294 - 3310
    PDF (9373 KB) | HTML

    Quadratic Volterra filters are effective in image sharpening applications. The linear combination of polynomial terms, however, yields poor performance in noisy environments. Weighted median (WM) filters, in contrast, are well known for their outlier suppression and detail preservation properties. The WM sample selection methodology is naturally extended to the quadratic sample case, yielding a filter structure referred to as the quadratic weighted median (QWM), which exploits the higher order statistics of the observed samples while remaining robust to outliers arising in the higher order statistics of the environment noise. Through statistical analysis of higher order samples, it is shown that, although the parent Gaussian distribution is light tailed, the higher order terms exhibit heavy-tailed distributions. The optimal combination of the terms contributing to a quadratic system, i.e., cross and square terms, is approached from a maximum-likelihood perspective, which yields WM processing of these terms. The proposed QWM filter structure is analyzed through its output variance and breakdown probability; the studies show that the QWM exhibits lower variance and breakdown probability, indicating the robustness of the proposed structure. The performance of the QWM filter is tested on constant regions, edges, and real images, and compared to its weighted-sum dual, the quadratic Volterra filter. The simulation results show that the proposed method simultaneously suppresses noise and enhances image details; compared with the quadratic Volterra sharpener, the QWM filter exhibits superior qualitative and quantitative performance in noisy image sharpening.
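
    For background, the weighted median at the heart of such structures can be sketched in a few lines of Python; this uses the common signed-weight convention and omits the QWM-specific construction of cross and square terms.

        import numpy as np

        def weighted_median(samples, weights):
            # Signed weights attach their sign to the sample (WM convention);
            # the output is the sample where the cumulative absolute weight
            # first reaches half of the total weight.
            s = np.sign(weights) * np.asarray(samples, dtype=float)
            w = np.abs(np.asarray(weights, dtype=float))
            order = np.argsort(s)
            csum = np.cumsum(w[order])
            idx = np.searchsorted(csum, 0.5 * w.sum())
            return s[order][min(idx, len(s) - 1)]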

  • The Multidimensional Phase Unwrapping Integral and Applications to Microwave Tomographical Image Reconstruction

    Page(s): 3311 - 3324
    PDF (2286 KB) | HTML

    Spatial unwrapping of the phase component of time-varying electromagnetic fields has important implications in a range of disciplines, including synthetic aperture radar (SAR) interferometry, MRI, optical confocal microscopy, and microwave tomography. This paper presents a fundamental framework based on the phase unwrapping integral, especially in the complex case where phase singularities are enclosed within the closed integration path. For the phase unwrapping required in Gauss-Newton iterative microwave image reconstruction, the concept of dynamic phase unwrapping is introduced, in which the singularity locations vary as a function of the iteratively modified property distributions. Strategies for dynamic phase unwrapping in the microwave problem were developed and successfully tested in simulations and clinical experiments utilizing large, high-contrast targets to validate the approach.
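
    The 1-D building block of the unwrapping integral is simple to state: integrate wrapped phase differences along a path, adding a 2π multiple whenever a jump exceeds π. A toy NumPy illustration on a synthetic field (the path dependence discussed in the paper arises when a closed path encloses a singularity):

        import numpy as np

        t = np.linspace(0.0, 4.0 * np.pi, 200)
        wrapped = np.angle(np.exp(1j * 3.0 * t))   # phase wrapped to (-pi, pi]
        unwrapped = np.unwrap(wrapped)             # restores the 2*pi multiples
        # For a closed path, (unwrapped[-1] - unwrapped[0]) / (2 * np.pi)
        # counts the enclosed phase singularities (residues).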

  • An Improved Observation Model for Super-Resolution Under Affine Motion

    Page(s): 3325 - 3337
    PDF (5115 KB) | HTML

    Super-resolution (SR) techniques make use of subpixel shifts between frames in an image sequence to yield higher resolution images. We propose an original observation model devoted to the case of nonisometric interframe motion, as required, for instance, in the context of airborne imaging sensors. First, we describe how the main observation models used in the SR literature deal with motion and explain why they are not suited to nonisometric motion. Then, we propose an extension of the observation model of Elad and Feuer adapted to affine motion. This model is based on a decomposition of affine transforms into successive shear transforms, each one efficiently implemented by row-by-row or column-by-column one-dimensional affine transforms. We demonstrate on synthetic and real sequences that our observation model, incorporated in an SR reconstruction technique, leads to better results in the case of variable-scale motions and provides equivalent results in the case of isometric motions.
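
    The algebraic step behind such a decomposition can be sketched as follows: any 2×2 affine matrix with a nonzero leading entry splits into an upper-triangular factor (a 1-D affine transform applied row by row) and a unit lower-triangular shear (applied column by column). This is an illustrative sketch, not the authors' exact decomposition.

        import numpy as np

        def shear_decompose(A):
            """Split a 2x2 matrix A into a column-pass shear V and a
            row-pass 1-D affine factor H, so that A = V @ H.
            Assumes A[0, 0] != 0; an axis swap handles the degenerate case."""
            a, b = A[0]
            c, d = A[1]
            H = np.array([[a, b], [0.0, (a * d - b * c) / a]])  # row pass
            V = np.array([[1.0, 0.0], [c / a, 1.0]])            # column pass
            assert np.allclose(V @ H, A)
            return V, H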

  • Fast IIR Isotropic 2-D Complex Gabor Filters With Boundary Initialization

    Page(s): 3338 - 3348
    PDF (2156 KB) | HTML

    Gabor filters are widely applied in image analysis and computer vision applications. This paper describes a fast algorithm for isotropic complex Gabor filtering that outperforms existing implementations. The main computational improvement arises from decomposing Gabor filtering into more efficient Gaussian filtering and sinusoidal modulations. Appropriate filter initial conditions are derived to avoid boundary transients without requiring explicit image border extension. Our proposal reduces the number of required operations by up to 39% with respect to state-of-the-art approaches. A full C++ implementation of the method is publicly available.
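
    The decomposition rests on a standard identity: convolving with a Gaussian times a complex carrier equals demodulating the image, Gaussian-smoothing, and remodulating. A hedged NumPy/SciPy sketch (scipy.ndimage.gaussian_filter is an FIR stand-in for the paper's recursive IIR Gaussian with boundary initialization; sigma, u0, v0 are illustrative parameter names):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gabor_response(img, sigma, u0, v0):
            rows, cols = np.indices(img.shape)
            carrier = np.exp(1j * (u0 * rows + v0 * cols))
            demod = img * np.conj(carrier)          # shift band to baseband
            smooth = (gaussian_filter(demod.real, sigma)
                      + 1j * gaussian_filter(demod.imag, sigma))
            return smooth * carrier                 # shift back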

  • Face Verification Across Age Progression

    Page(s): 3349 - 3361
    PDF (2453 KB) | HTML

    Human faces undergo considerable variation with aging. While face recognition systems have been shown to be sensitive to factors such as illumination and pose, their sensitivity to facial aging effects is yet to be studied. How does age progression affect the similarity between a pair of face images of an individual? What is the confidence associated with establishing the identity between a pair of age-separated face images? In this paper, we develop a Bayesian age-difference classifier that classifies face images of individuals based on age differences and performs face verification across age progression. Further, we study the similarity of faces across age progression. Since age-separated face images invariably differ in illumination and pose, we propose preprocessing methods for minimizing such variations. Experimental results are presented using a database comprising pairs of face images retrieved from the passports of 465 individuals. For faces separated by as many as nine years, the verification system attains an equal error rate of 8.5%.

  • Translation-Invariant Contourlet Transform and Its Application to Image Denoising

    Page(s): 3362 - 3374
    PDF (4716 KB) | HTML

    Most subsampled filter banks lack translation invariance, which is an important property in denoising applications. In this paper, we study and develop new methods to convert a general multichannel, multidimensional filter bank to a corresponding translation-invariant (TI) framework. In particular, we propose a generalized algorithme à trous, an extension of the algorithme à trous introduced for 1-D wavelet transforms. Using the proposed algorithm, together with modified versions of directional filter banks, we construct the TI contourlet transform (TICT). To reduce the high redundancy and complexity of the TICT, we also introduce the semi-translation-invariant contourlet transform (STICT). We then apply an adapted bivariate shrinkage scheme to the STICT to achieve an efficient image denoising approach. Our experimental results demonstrate the benefits and potential of the proposed denoising approach. Complexity analysis and efficient realizations of the proposed TI schemes are also presented.
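
    The classic algorithme à trous referenced here removes subsampling by upsampling the filters instead: at level j, 2^j - 1 zeros ("holes") are inserted between taps. A minimal sketch of that filter upsampling, for intuition only:

        import numpy as np

        def a_trous_filter(h, level):
            # Insert 2**level - 1 zeros between taps so the filter can be
            # applied without subsampling, preserving translation invariance.
            step = 2 ** level
            up = np.zeros((len(h) - 1) * step + 1)
            up[::step] = h
            return up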

  • Fast Splitting α-Rooting Method of Image Enhancement: Tensor Representation

    Page(s): 3375 - 3384
    PDF (1953 KB) | HTML

    In the tensor representation, a two-dimensional (2-D) image is represented uniquely by a set of one-dimensional (1-D) signals, so-called splitting-signals, that carry the spectral information of the image at frequency points of specific sets covering the whole frequency domain. Image enhancement is thus reduced to processing splitting-signals, and such processing requires modifying only a few spectral components of the image for each signal. For instance, the α-rooting method of image enhancement can be performed by separately processing at most 3N/2 splitting-signals of an N×N image, where N is a power of two. In this paper, we propose a fast implementation of the α-rooting method that uses one splitting-signal of the tensor representation with respect to the discrete Fourier transform (DFT). The implementation is described in the frequency and spatial domains. As a result, the proposed algorithms for image enhancement use two 1-D N-point DFTs instead of the two 2-D N×N-point DFTs of the traditional α-rooting method.
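
    For reference, the traditional full 2-D α-rooting that the paper accelerates can be sketched in a few lines: scale each Fourier coefficient's magnitude by |F|^(α-1) while keeping its phase. The fast splitting-signal version with 1-D DFTs is not reproduced here.

        import numpy as np

        def alpha_rooting_2d(img, alpha=0.9):
            F = np.fft.fft2(img)
            mag = np.abs(F)
            scale = np.zeros_like(mag)
            nz = mag > 0
            scale[nz] = mag[nz] ** (alpha - 1.0)  # |F|**alpha / |F|
            return np.real(np.fft.ifft2(F * scale))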

  • Multiresolution MAP Despeckling of SAR Images Based on Locally Adaptive Generalized Gaussian pdf Modeling

    Page(s): 3385 - 3399
    PDF (7786 KB) | HTML

    In this paper, a new despeckling method based on an undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. The method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame, so they may be adjusted to the spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of the speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image, and the restored SAR image is synthesized from these coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
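
    A standard moment-matching estimator for the GG shape factor, of the kind that could be run on local windows to obtain space-varying parameters, is sketched below; it is illustrative and assumes the sample moment ratio lies in the invertible range.

        import numpy as np
        from scipy.special import gamma
        from scipy.optimize import brentq

        def gg_shape_from_moments(x):
            # Invert r(b) = Gamma(2/b)**2 / (Gamma(1/b) * Gamma(3/b))
            # at the sample ratio E[|x|]^2 / E[x^2] (= 2/pi for a Gaussian).
            r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
            f = lambda b: (gamma(2.0 / b) ** 2
                           / (gamma(1.0 / b) * gamma(3.0 / b))) - r
            return brentq(f, 0.1, 10.0)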

  • Ground Target Recognition Using Rectangle Estimation

    Page(s): 3400 - 3408
    PDF (1176 KB) | HTML

    We propose a ground target recognition method based on 3-D laser radar data. The method handles general 3-D scattered data and is based on the fact that man-made objects of complex shape can be decomposed into a set of rectangles. It consists of four steps: 3-D size and orientation estimation, target segmentation into parts of approximately rectangular shape, identification of the segments that represent the target's functional/main parts, and target matching with CAD models. The core of this approach is rectangle estimation, whose performance is evaluated statistically using Monte Carlo simulations. A case study on tank recognition is shown, where 3-D data from four fundamentally different types of laser radar systems are used. Although the approach is tested on rather few examples, we believe it is promising.

  • The Bayesian Operating Point of the Canny Edge Detector

    Page(s): 3409 - 3416
    PDF (358 KB) | HTML

    We have investigated the operating point of the Canny edge detector that minimizes the Bayes risk of misclassification. By considering each of the sequential stages constituting the Canny algorithm, we conclude that the linear filtering stage of Canny, without postprocessing, performs very poorly by any standard in pattern recognition, achieving error rates almost indistinguishable from a priori classification. We demonstrate that the edge detection performance of the Canny detector is due almost entirely to the postprocessing stages of nonmaximal suppression and hysteresis thresholding.

  • Recognition of Dynamic Video Contents With Global Probabilistic Models of Visual Motion

    Page(s): 3417 - 3430
    PDF (3534 KB) | HTML

    The exploitation of video data requires methods able to extract high-level information from the images; video summarization, video retrieval, and video surveillance are example applications. In this paper, we tackle the challenging problem of recognizing dynamic video content from low-level motion features. We adopt a statistical approach involving modeling, (supervised) learning, and classification issues. Because of the diversity of video content (even for a given class of events), we have to design appropriate models of visual motion and learn them from videos. We have defined original parsimonious global probabilistic motion models, both for the dominant image motion (assumed to be due to the camera motion) and the residual image motion (related to scene motion). Motion measurements include affine motion models to capture the camera motion and low-level local motion features to account for the scene motion. Motion learning and recognition are solved using maximum-likelihood criteria. To validate the proposed motion modeling and recognition framework, we report dynamic content recognition results on sports videos.

  • Unsupervised Variational Image Segmentation/Classification Using a Weibull Observation Model

    Page(s): 3431 - 3439
    PDF (3099 KB) | HTML

    Studies have shown that the Weibull distribution can accurately model a wide variety of images. Its parameters index a family of distributions that includes the exponential distribution and approximations of the Gaussian and Rayleigh models widely used in image segmentation. This study investigates the Weibull distribution in unsupervised image segmentation and classification by a variational method. The data term of the segmentation functional measures the conformity of the image intensity in each region to a Weibull distribution whose parameters are determined jointly with the segmentation. Minimization of the functional is implemented by active curves via level sets and consists of iterating two consecutive steps: curve evolution via Euler-Lagrange descent equations and evaluation of the Weibull distribution parameters. Experiments with synthetic and real images verify the validity of the method and its implementation.
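
    The Weibull data term described here has a direct closed form; a short sketch of a region's negative log-likelihood under assumed parameters (the names lam and k are illustrative):

        import numpy as np

        def weibull_neg_loglik(intensities, lam, k):
            # -log p(I | lam, k) summed over a region, with
            # p(I) = (k/lam) * (I/lam)**(k-1) * exp(-(I/lam)**k).
            I = np.clip(np.asarray(intensities, dtype=float), 1e-8, None)
            z = I / lam
            return -np.sum(np.log(k / lam) + (k - 1.0) * np.log(z) - z ** k)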

  • A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms

    Page(s): 3440 - 3451
    PDF (1234 KB) | HTML

    Measurement of visual quality is of fundamental importance for numerous image and video processing applications, where the goal of quality assessment (QA) algorithms is to automatically assess the quality of images or videos in agreement with human quality judgments. Over the years, many researchers have taken different approaches to the problem, contributing significant research in this area and making progress in their respective domains. It is important to evaluate the performance of these algorithms in a comparative setting and to analyze their strengths and weaknesses. In this paper, we present results of an extensive subjective quality assessment study in which a total of 779 distorted images were evaluated by about two dozen human subjects. The "ground truth" image quality data obtained from about 25 000 individual human quality judgments is used to evaluate the performance of several prominent full-reference image quality assessment algorithms. To the best of our knowledge, apart from the video quality studies conducted by the Video Quality Experts Group, the study presented in this paper is the largest subjective image quality study in the literature in terms of number of images, distortion types, and number of human judgments per image. Moreover, we have made the data from the study freely available to the research community, allowing other researchers to easily report comparative results in the future.
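
    Studies of this kind typically report how well objective scores track subjective ones; a minimal sketch of the two most common criteria (linear correlation for prediction accuracy, rank correlation for prediction monotonicity), assuming SciPy:

        from scipy.stats import pearsonr, spearmanr

        def evaluate_metric(predicted_scores, subjective_scores):
            plcc, _ = pearsonr(predicted_scores, subjective_scores)
            srocc, _ = spearmanr(predicted_scores, subjective_scores)
            return plcc, srocc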

  • Perceptual Image Hashing Via Feature Points: Performance Evaluation and Tradeoffs

    Page(s): 3452 - 3465
    PDF (4361 KB)

    We propose an image hashing paradigm using visually significant feature points. The feature points should be largely invariant under perceptually insignificant distortions; to satisfy this, we propose an iterative feature detector that extracts significant geometry-preserving feature points. We apply probabilistic quantization on the derived features to introduce randomness, which, in turn, reduces vulnerability to adversarial attacks. The proposed hash algorithm withstands standard benchmark (e.g., Stirmark) attacks, including compression, geometric distortions of scaling and small-angle rotation, and common signal-processing operations. Content-changing (malicious) manipulations of image data are also accurately detected. Detailed statistical analysis in the form of receiver operating characteristic (ROC) curves is presented and reveals the success of the proposed scheme in achieving perceptual robustness while avoiding misclassification.

  • Structure From Planar Motion

    Page(s): 3466 - 3477
    PDF (2648 KB) | HTML

    Planar motion is arguably the most dominant type of motion in surveillance videos. The constraints on motion lead to a simplified factorization method for structure from planar motion when using a stationary perspective camera. Compared with methods for general motion, our approach has two major advantages: a measurement matrix that fully exploits the motion constraints is formed, such that the new measurement matrix has a rank of at most 3 instead of 4; and while the measurement matrix needs similar scalings, the estimation of fundamental matrices or epipoles is not needed. Experimental results show that the algorithm is accurate and fairly robust to noise and inaccurate calibration. As the new measurement matrix is a nonlinear function of the observed variables, a different method is introduced to deal with the directional uncertainty in the observed variables. Differences and the dual relationship between planar motion and planar objects are also clarified. Based on our method, a fully automated vehicle reconstruction system has been designed.
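
    The rank-3 property makes the factorization itself a truncated SVD; a generic sketch of that step (the paper's construction of the measurement matrix and the subsequent metric upgrade are not shown):

        import numpy as np

        def rank3_factorize(W):
            # Factor the measurement matrix into motion and structure by
            # truncating the SVD at rank 3.
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            root = np.sqrt(s[:3])
            M = U[:, :3] * root            # motion factor
            S = root[:, None] * Vt[:3]     # structure factor, W ~ M @ S
            return M, S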

  • On the Orientability of Shapes

    Page(s): 3478 - 3487
    PDF (733 KB) | HTML

    The orientation of a shape is a useful quantity and has been shown to affect the performance of object recognition in the human visual system. Shape orientation has also been used in computer vision to provide a properly oriented frame of reference, which can aid recognition. However, for certain shapes, the standard moment-based method of orientation estimation fails. We introduce a new shape feature, orientability, which measures the degree to which a shape has a distinct (but not necessarily unique) orientation. A new method for measuring shape orientability is described that has several desirable properties. In particular, unlike the standard moment-based measure of elongation, it can differentiate between the varying levels of orientability of n-fold rotationally symmetric shapes. Moreover, the new orientability measure is simple and efficient to compute (for an n-gon we describe an O(n) algorithm).
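
    The standard moment-based orientation the paper contrasts against is θ = ½·atan2(2µ11, µ20 − µ02); for n-fold rotationally symmetric shapes, µ20 ≈ µ02 and µ11 ≈ 0, so the estimate degenerates, which is what motivates an orientability measure. A compact sketch:

        import numpy as np

        def moment_orientation(mask):
            # Second-central-moment orientation of a binary shape mask.
            ys, xs = np.nonzero(mask)
            x = xs - xs.mean()
            y = ys - ys.mean()
            mu20, mu02, mu11 = (x * x).sum(), (y * y).sum(), (x * y).sum()
            return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)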

  • A Wavelet-Based Two-Stage Near-Lossless Coder

    Page(s): 3488 - 3500
    PDF (1314 KB) | HTML

    In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual, which guarantees a given L∞ error bound in the pixel domain. We focus on selecting the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy-plus-lossless approaches using a wavelet-based lossy layer, the proposed method does not require iterating decoding and the inverse discrete wavelet transform to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical-rate argument from rate-distortion theory and the independence of the residual error.
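
    The near-lossless guarantee in this class of coders comes from uniformly quantizing the integer residual with step 2δ + 1, which bounds every pixel error by δ; a minimal sketch:

        import numpy as np

        def quantize_residual(residual, delta):
            # For integer residuals, rounding to multiples of (2*delta + 1)
            # keeps |residual - dequantized| <= delta at every pixel.
            step = 2 * delta + 1
            q = np.round(np.asarray(residual) / step).astype(np.int64)
            return q, q * step  # indices to entropy-code, reconstruction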

  • An Optimal Nonorthogonal Separation of the Anisotropic Gaussian Convolution Filter

    Page(s): 3501 - 3513
    PDF (1465 KB) | HTML

    We give an analytical and geometrical treatment of what it means to separate a Gaussian kernel along arbitrary axes in ℝⁿ, and we present a separation scheme that allows us to efficiently implement anisotropic Gaussian convolution filters for data of arbitrary dimensionality. Based on our analysis, we show that this scheme is optimal with regard to the number of memory accesses and interpolation operations needed. The proposed method relies on nonorthogonal convolution axes and works completely in image space, thus avoiding the need for a fast Fourier transform (FFT) subroutine. Depending on the accuracy and speed requirements, different interpolation schemes and methods to implement the one-dimensional Gaussian (finite impulse response and infinite impulse response) can be integrated. Special emphasis is put on analyzing the performance and accuracy of the new method. In particular, we show that, without any special optimization of the source code, it can perform anisotropic Gaussian filtering faster than methods relying on the FFT.

  • Embedding Motion in Model-Based Stochastic Tracking

    Page(s): 3514 - 3530
    PDF (2094 KB) | HTML

    Particle filtering is now established as one of the most popular methods for visual tracking. Within this framework, there are two important considerations. The first is the generic assumption that the observations are temporally independent given the sequence of object states. The second, often made in the literature, is the use of the transition prior as the proposal distribution; the current observations are thus not taken into account, requiring the noise process of this prior to be large enough to handle abrupt trajectory changes. As a result, many particles are either wasted in low-likelihood regions of the state space, resulting in low sampling efficiency, or, more importantly, propagated to distractor regions of the image, resulting in tracking failures. In this paper, we propose to handle both considerations using motion. We first argue that, in general, observations are conditionally correlated and propose a new model to account for this correlation, allowing for the natural introduction of implicit and/or explicit motion measurements in the likelihood term. Second, explicit motion measurements are used to drive the sampling process towards the most likely regions of the state space. Overall, the proposed model handles abrupt motion changes and filters out visual distractors when tracking objects with generic models based on shape or color distribution. Results were obtained on head-tracking experiments using several sequences with a moving camera involving large dynamics. Compared against the Condensation algorithm, they demonstrate the superior tracking performance of our approach.


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003