
IEEE Transactions on Image Processing

Issue 2 • Feb. 2007


Displaying Results 1 - 25 of 34
  • Table of contents

    Publication Year: 2007, Page(s): C1 - C4
    Cited by:  Papers (2)
  • IEEE Transactions on Image Processing publication information

    Publication Year: 2007, Page(s): C2
  • The Undecimated Wavelet Decomposition and its Reconstruction

    Publication Year: 2007, Page(s): 297 - 309
    Cited by:  Papers (43)  |  Patents (2)

    This paper describes the undecimated wavelet transform and its reconstruction. In the first part, we show the relation between two well-known undecimated wavelet transforms: the standard undecimated wavelet transform and the isotropic undecimated wavelet transform. Then we present new filter banks specially designed for undecimated wavelet decompositions, which have some useful properties, such as robustness to the ringing artifacts that generally appear in wavelet-based denoising methods. A range of examples illustrates the results.

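    As a concrete illustration, here is a minimal sketch of undecimated wavelet denoising using PyWavelets; the stock 'db2' filter bank stands in for the paper's specially designed filters, which are not available in pywt.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        img = rng.normal(size=(256, 256))            # stand-in noisy image
        coeffs = pywt.swt2(img, 'db2', level=3)      # undecimated (stationary) decomposition
        thr = 1.5 * img.std()                        # illustrative threshold
        den = [(a, tuple(pywt.threshold(d, thr, 'soft') for d in ds))
               for a, ds in coeffs]                  # shrink detail subbands only
        rec = pywt.iswt2(den, 'db2')                 # reconstruction
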
  • Accurate Centerline Detection and Line Width Estimation of Thick Lines Using the Radon Transform

    Publication Year: 2007, Page(s): 310 - 316
    Cited by:  Papers (16)

    Centerline detection and line width estimation are important for many computer vision applications, e.g., road network extraction from high-resolution remotely sensed imagery. Radon transform-based linear feature detection has many advantages over other approaches, for example, its robustness in noisy images. However, it usually fails to detect the centerline of a thick line due to the peak selection problem. In this paper, several key issues that affect centerline detection using the Radon transform are investigated. A mean filter is proposed to locate the true peak in the Radon image, and a profile analysis technique is used to further refine the line parameters. The θ-boundary problem of the Radon transform is also discussed, and the erroneous line parameters are corrected. Extensive experiments have shown that the proposed methodology is effective in finding the centerline and estimating the line width of thick lines.

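    A minimal sketch of the peak-selection idea, assuming scikit-image and SciPy: mean-filtering the Radon image turns the flat-topped response of a thick line into a single central peak at the centerline parameters.

        import numpy as np
        from scipy.ndimage import uniform_filter
        from skimage.transform import radon

        img = np.zeros((128, 128))
        img[60:68, :] = 1.0                          # a thick horizontal line
        theta = np.arange(180.0)
        sino = radon(img, theta=theta, circle=False) # rows: offset, cols: angle
        smooth = uniform_filter(sino, size=5)        # mean filter on the sinogram
        off_i, th_i = np.unravel_index(np.argmax(smooth), smooth.shape)
        print('centerline angle (deg):', theta[th_i], 'offset bin:', off_i)
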
  • Flexible Skew-Symmetric Shape Model for Shape Representation, Classification, and Sampling

    Publication Year: 2007, Page(s): 317 - 328
    Cited by:  Papers (1)

    Skewness of shape data often arises in applications (e.g., medical image analysis) and is usually overlooked in statistical shape models. In such cases, a Gaussian assumption is unrealistic, and a formulation of a general shape model which accounts for skewness is in order. In this paper, we present a novel statistical method for shape modeling, which we refer to as the flexible skew-symmetric shape model (FSSM). The model is sufficiently flexible to accommodate a departure from Gaussianity of the data and general enough to learn a "mean shape" (template), with a potential for classification and random generation of new realizations of a given shape. Robustness to skewness results from deriving the FSSM from an extended class of flexible skew-symmetric distributions. In addition, we demonstrate that the model allows us to extract principal curves in a point cloud. The idea is to view a shape as a realization of a spatial random process and to subsequently learn a shape distribution which captures the inherent variability of realizations, provided they remain, with high probability, within a certain neighborhood of a mean. Specifically, given shape realizations, the FSSM is formulated as a joint bimodal distribution of angle and distance from the centroid of an aggregate of random points. The mean shape is recovered from the modes of the distribution, while the maximum likelihood criterion is employed for classification.

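    The angle/distance formulation is easy to prototype. In this sketch a Gaussian kernel density estimate stands in for the paper's flexible skew-symmetric family: shape points become (angle, radius) pairs about the centroid, and the ridge of the joint density traces a template.

        import numpy as np
        from scipy.stats import gaussian_kde

        pts = np.random.default_rng(1).normal(size=(500, 2)) + [5.0, 5.0]
        c = pts.mean(axis=0)                         # centroid of the point cloud
        ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
        rad = np.linalg.norm(pts - c, axis=1)
        kde = gaussian_kde(np.vstack([ang, rad]))    # joint (angle, distance) density
        grid_r = np.linspace(rad.min(), rad.max(), 100)
        # "mean shape": the modal radius at each angle (the density's ridge)
        mean_r = [grid_r[np.argmax(kde(np.vstack([np.full(100, a), grid_r])))]
                  for a in np.linspace(-np.pi, np.pi, 36)]
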
  • Example-Based Color Transformation of Image and Video Using Basic Color Categories

    Publication Year: 2007, Page(s): 329 - 336
    Cited by:  Papers (12)

    Color transformation is the most effective method to improve the mood of an image, because color has a large influence in forming the mood. However, conventional color transformation tools involve a tradeoff between the quality of the resultant image and the amount of manual operation. To achieve a more detailed and natural result with less labor, we previously suggested a method that performs example-based color stylization of images using perceptual color categories. In this paper, we extend this method to make the algorithm more robust and to stylize the colors of video frame sequences. We present a variety of results, arguing that these images and videos convey a different, but coherent, mood.

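    For intuition, a minimal sketch of example-based color transfer in its simplest global form; the paper refines this by applying such mappings per basic color category rather than over the whole image.

        import numpy as np

        def match_stats(src, ref):
            """Move src's channel-wise mean/std to ref's (global transfer).

            src, ref: float arrays of shape (H, W, 3).
            """
            out = (src - src.mean((0, 1))) / (src.std((0, 1)) + 1e-8)
            return out * ref.std((0, 1)) + ref.mean((0, 1))
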
  • Morphological Decomposition of 2-D Binary Shapes Into Modestly Overlapped Octagonal and Disk Components

    Publication Year: 2007, Page(s): 337 - 348
    Cited by:  Papers (4)

    One problem with several leading morphological shape representation algorithms is heavy overlapping among representative disks of the same size. A shape component formed by grouping connected disk centers may use many heavily overlapping disks to represent a simple shape part. Sometimes, these representative disks form complicated structures. A generalized skeleton transform was recently introduced which allows a shape to be represented as a collection of modestly overlapped octagonal shape parts. However, the generalized skeleton transform needs to be applied many times, and an octagonal component is not easily matched with another octagonal component. In this paper, we describe an octagon-fitting algorithm which identifies a special maximal octagon for each image point in a given shape. This transform leads to the development of two new shape decomposition algorithms. These algorithms are more efficient to implement; the octagon-fitting algorithm only needs to be applied once. The components generated are better characterized mathematically. The disk components used in the second decomposition algorithm are more primitive than octagons and are easily matched with disk components from another shape. The experiments show that the new decomposition algorithms produce representations as efficient as those of the old algorithm in both the exact and approximate cases. A simple shape-matching algorithm using disk components is also demonstrated.

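    A minimal sketch of the maximal-disk idea that underlies such decompositions, assuming SciPy: the Euclidean distance transform gives, at each interior point, the radius of the largest inscribed disk (the paper's octagon fitting uses a chamfer-style analogue of this).

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        shape = np.zeros((64, 64), bool)
        shape[16:48, 8:56] = True                    # a simple binary shape
        r = distance_transform_edt(shape)            # max inscribed disk radius per pixel
        cy, cx = np.unravel_index(np.argmax(r), r.shape)
        # a greedy first component: the largest inscribed disk, centered at (cy, cx)
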
  • Kernel Regression for Image Processing and Reconstruction

    Publication Year: 2007, Page(s): 349 - 366
    Cited by:  Papers (226)  |  Patents (9)

    In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples.

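    A minimal sketch of zeroth-order (Nadaraya-Watson) kernel regression for denoising; with the photometric term included, the weights reduce to a bilateral filter, the special case noted in the abstract.

        import numpy as np

        def kernel_regress(img, radius=3, h_s=2.0, h_r=0.1):
            """img: 2-D float array; returns the kernel-weighted local average."""
            H, W = img.shape
            pad = np.pad(img, radius, mode='reflect')
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            w_s = np.exp(-(xs**2 + ys**2) / (2 * h_s**2))      # spatial kernel
            out = np.zeros_like(img, dtype=float)
            for i in range(H):
                for j in range(W):
                    patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
                    w_r = np.exp(-(patch - img[i, j])**2 / (2 * h_r**2))
                    w = w_s * w_r                              # bilateral weights
                    out[i, j] = (w * patch).sum() / w.sum()
            return out
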
  • Novel Cooperative Neural Fusion Algorithms for Image Restoration and Image Fusion

    Publication Year: 2007, Page(s): 367 - 381
    Cited by:  Papers (12)

    To deal with the problem of restoring degraded images with non-Gaussian noise, this paper proposes a novel cooperative neural fusion regularization (CNFR) algorithm for image restoration. Compared with conventional regularization algorithms, the proposed CNFR algorithm relaxes the need to estimate the optimal regularization parameter. Furthermore, to enhance the quality of restored images, this paper presents a cooperative neural fusion (CNF) algorithm for image fusion. Compared with existing signal-level image fusion algorithms, the proposed CNF algorithm can greatly reduce the loss of contrast information under blind Gaussian noise environments. The performance analysis shows that the two proposed neural fusion algorithms converge globally to the robust and optimal image estimate. Simulation results confirm that, in different noise environments, the two proposed algorithms obtain a better image estimate than several well-known image restoration and image fusion methods.

  • Multiresolution 3-D Reconstruction From Side-Scan Sonar Images

    Publication Year: 2007, Page(s): 382 - 390
    Cited by:  Papers (9)  |  Patents (1)

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.

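    A minimal sketch of the forward Lambertian model the paper inverts, under simplified 1-D geometry with the sensor at the origin: intensity is reflectivity times the cosine of the local incidence angle (the beam pattern and the EM-style inversion loop are omitted).

        import numpy as np

        def lambertian(elev, refl, sensor_h=30.0, dx=1.0):
            """Predict side-scan intensity from a 1-D seabed elevation profile."""
            x = np.arange(elev.size) * dx
            slope = np.gradient(elev, dx)
            n = np.stack([-slope, np.ones_like(slope)]) / np.hypot(slope, 1)
            s = np.stack([-x, sensor_h - elev])      # rays toward the sensor
            s = s / np.linalg.norm(s, axis=0)
            return refl * np.clip((n * s).sum(0), 0, None)   # cos(incidence)

        elev = 2.0 * np.sin(np.linspace(0, 6, 200))  # toy seabed profile
        intensity = lambertian(elev, refl=0.8)
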
  • Optimal Signature Design for Spread-Spectrum Steganography

    Publication Year: 2007, Page(s): 391 - 405
    Cited by:  Papers (13)

    For any given host image or group of host images and any (block) transform domain of interest, we find the signature vector that, when used for spread-spectrum (SS) message embedding, maximizes the signal-to-interference-plus-noise ratio (SINR) at the output of the corresponding maximum-SINR linear filter. We establish that, under a (colored) Gaussian assumption on the transform domain host data, the same derived signature minimizes host distortion for any target message recovery error rate and maximizes the Shannon capacity of the covert steganographic link. Then, we derive jointly optimal signature and linear processor designs for SS embedding in linearly modified transform domain host data and demonstrate orders of magnitude improvement over current SS steganographic practices. Optimized multisignature/multimessage embedding in the same host data is studied as well.

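    A minimal sketch of the signature-design intuition under a Gaussian host model (an illustration drawn from the abstract, not a reproduction of the paper's derivation): embed along the host-covariance eigenvector of least eigenvalue, where host interference is weakest, and detect with a whitened correlator.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(1000, 64))              # host transform-domain blocks
        R = np.cov(X, rowvar=False)                  # host covariance estimate
        vals, vecs = np.linalg.eigh(R)
        s = vecs[:, 0]                               # min-eigenvalue direction
        A, bit = 1.0, +1
        stego = X[0] + A * bit * s                   # SS embedding of one bit
        w = np.linalg.solve(R, s)                    # max-SINR linear filter ~ R^{-1}s
        bit_hat = np.sign(w @ stego)
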
  • Matching Pursuit-Based Region-of-Interest Image Coding

    Publication Year: 2007, Page(s): 406 - 415
    Cited by:  Papers (5)

    Matching pursuit (MP) is a multiresolution signal analysis method that can be used to render a selected region of an image with a specific quality. A novel, scalable, and progressive MP-based region-of-interest image-coding scheme is presented. The method is capable of providing a tradeoff between rate, distortion, and complexity, and it provides an interactive way of refining information in image regions that have higher priority for the receiver. By selecting a proper subset of the huge initial MP dictionary, using the method described in this paper, the complexity burden of MP analysis can be adapted to the computational power of the image encoder.

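    For reference, the core MP loop is tiny. In this sketch the dictionary is a generic matrix of unit-norm columns; atoms are picked greedily, so allotting more atoms to ROI blocks refines them first.

        import numpy as np

        def matching_pursuit(x, D, n_atoms):
            """D: (dim, total_atoms) with unit-norm columns; returns [(index, coeff)]."""
            r = x.astype(float)                      # residual
            picks = []
            for _ in range(n_atoms):
                k = np.argmax(np.abs(D.T @ r))       # best-correlated atom
                c = D[:, k] @ r
                r = r - c * D[:, k]                  # peel it off the residual
                picks.append((k, c))
            return picks
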
  • Adaptive Directional Lifting-Based Wavelet Transform for Image Coding

    Publication Year: 2007, Page(s): 416 - 427
    Cited by:  Papers (78)  |  Patents (2)

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) for image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with improvements of up to 2.0 dB on images with rich orientation features.

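    A minimal sketch of one lifting stage (the 5/3 predict/update pair, with circular boundaries and even-length input); ADL would form the same prediction along the locally dominant image direction instead of strictly horizontally or vertically.

        import numpy as np

        def lift_53(x):
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            odd = odd - 0.5 * (even + np.roll(even, -1))    # predict: highpass
            even = even + 0.25 * (odd + np.roll(odd, 1))    # update: lowpass
            return even, odd

        def unlift_53(even, odd):
            even = even - 0.25 * (odd + np.roll(odd, 1))    # undo update
            odd = odd + 0.5 * (even + np.roll(even, -1))    # undo predict
            x = np.empty(even.size + odd.size)
            x[0::2], x[1::2] = even, odd
            return x                                        # perfect reconstruction
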
  • Undersampled Boundary Pre-/Postfilters for Low Bit-Rate DCT-Based Block Coders

    Publication Year: 2007, Page(s): 428 - 441
    Cited by:  Papers (2)

    It has been well established that critically sampled boundary pre-/postfiltering operators can improve coding efficiency and mitigate blocking artifacts in traditional discrete cosine transform-based block coders at low bit rates. In these systems, both the prefilter and the postfilter are square matrices. This paper proposes undersampled boundary pre- and postfiltering modules, where the pre-/postfilters are rectangular matrices. Specifically, the prefilter is a "fat" matrix, while the postfilter is a "tall" one. In this way, the prefiltered image is smaller than the original input image, which leads to improved compression performance and reduced computational complexity at low bit rates. The design and VLSI-friendly implementation of the undersampled pre-/postfilters are derived, and their relations to lapped transforms and filter banks are presented. Two design examples demonstrate the validity of the theory. Furthermore, image coding results indicate that the proposed undersampled pre-/postfiltering systems yield excellent and stable performance in low bit-rate image coding.

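    A minimal sketch of the fat/tall structure, with the pseudoinverse standing in for the paper's optimized postfilter: k < n prefiltered samples replace n boundary samples, and the tall postfilter recovers the least-squares reconstruction.

        import numpy as np

        n, k = 8, 6
        rng = np.random.default_rng(4)
        P = rng.normal(size=(k, n))                  # "fat" prefilter, k x n
        Q = np.linalg.pinv(P)                        # "tall" postfilter, n x k
        x = rng.normal(size=n)
        x_hat = Q @ (P @ x)                          # projection onto P's row space
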
  • Robust Coding Over Noisy Overcomplete Channels

    Publication Year: 2007, Page(s): 442 - 452

    We address the problem of robust coding, in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for the 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, thus including both under- and overcomplete representations, and provides insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions in order to achieve robustness. We also present numerical solutions of robust coding for high-dimensional image data, demonstrating that these codes are substantially more robust than other linear image coding methods such as PCA, ICA, and wavelets.

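    A minimal sketch of the setting for a fixed encoder: the representation is corrupted by channel noise, and the MSE-optimal linear decoder is the Wiener solution (the paper additionally optimizes the encoder).

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, sigma2 = 8, 16, 0.1                    # m > n: overcomplete code
        Cx = np.eye(n)                               # assumed signal covariance
        W = rng.normal(size=(m, n)) / np.sqrt(n)     # a fixed linear encoder
        # Wiener decoder: D = Cx W^T (W Cx W^T + sigma^2 I)^{-1}
        D = Cx @ W.T @ np.linalg.inv(W @ Cx @ W.T + sigma2 * np.eye(m))
        x = rng.normal(size=n)
        r = W @ x + np.sqrt(sigma2) * rng.normal(size=m)   # noisy representation
        x_hat = D @ r
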
  • An Exact Algorithm for Optimal MAE Stack Filter Design

    Publication Year: 2007, Page(s): 453 - 462
    Cited by:  Papers (2)

    We propose a new algorithm for optimal MAE stack filter design. It is based on three main ingredients. First, we show that the dual of the integer programming formulation of the filter design problem is a minimum cost network flow problem. Next, we present a decomposition principle that can be used to break this dual problem into smaller subproblems. Finally, we propose a specialization of the network simplex algorithm based on column generation to solve these smaller subproblems. Using our method, we were able to efficiently solve instances of the filter design problem with window sizes of up to 25 pixels. To the best of our knowledge, this is the largest dimension for which this problem has ever been solved exactly.

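    For readers unfamiliar with stack filters, a minimal sketch of what one computes: threshold-decompose the signal, apply the same positive Boolean function at every level, and re-stack. A window majority is used here, which yields the 3-sample median.

        import numpy as np

        def stack_filter(x, levels=256):
            """x: 1-D int array with values in [0, levels); circular window of size 3."""
            out = np.zeros_like(x, dtype=int)
            for t in range(1, levels):
                b = (x >= t).astype(int)             # binary slice at level t
                maj = (b + np.roll(b, 1) + np.roll(b, -1)) >= 2   # positive Boolean fn
                out += maj.astype(int)               # stacking property
            return out
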
  • The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyperspectral Data

    Publication Year: 2007, Page(s): 463 - 478
    Cited by:  Papers (50)

    This paper describes new extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery. Much like boosting methods often applied in data mining work, the iteratively reweighted (IR) MAD method in a series of iterations places increasing focus on "difficult" observations, here observations whose change status over time is uncertain. The MAD method is based on the established technique of canonical correlation analysis: for the multivariate data acquired at two points in time and covering the same geographical region, we calculate the canonical variates and subtract them from each other. These orthogonal differences contain maximum information on joint change in all variables (spectral bands). The change detected in this fashion is invariant to separate linear (affine) transformations in the originally measured variables at the two points in time, such as 1) changes in gain and offset in the measuring device used to acquire the data, 2) data normalization or calibration schemes that are linear (affine) in the gray values of the original variables, or 3) orthogonal or other affine transformations, such as principal component (PC) or maximum autocorrelation factor (MAF) transformations. The IR-MAD method first calculates ordinary canonical and original MAD variates. In the following iterations, we apply different weights to the observations: large weights are assigned to observations that show little change, i.e., for which the sum of squared, standardized MAD variates is small, and small weights are assigned to observations for which the sum is large. Like the original MAD method, the iterative extension is invariant to linear (affine) transformations of the original variables. To stabilize solutions to the (IR-)MAD problem, some form of regularization may be needed; this is especially useful for work on hyperspectral data. This paper describes ordinary two-set canonical correlation analysis, the MAD transformation, the iterative extension, and three regularization schemes. A simple case with real Landsat Thematic Mapper (TM) data at one point in time and (partly) constructed data at the other point in time is shown, demonstrating the superiority of the iterative scheme over the original MAD method. Also, examples with SPOT High Resolution Visible data from an agricultural region in Kenya and hyperspectral airborne HyMap data from a small rural area in southeastern Germany are given. The latter case demonstrates the need for regularization.

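    A compact sketch of the IR-MAD iteration itself, assuming NumPy/SciPy and omitting the paper's regularization: canonical vectors come from an SVD of the whitened cross-covariance, MAD variates are differences of canonical variates, and observations are reweighted by their chi-squared no-change probability.

        import numpy as np
        from scipy.stats import chi2

        def isqrt(S):                                # inverse matrix square root
            d, U = np.linalg.eigh(S)
            return U @ np.diag(d ** -0.5) @ U.T

        def irmad(X, Y, n_iter=10):
            """X, Y: (pixels, bands) arrays from the two acquisition dates."""
            N, p = X.shape
            w = np.ones(N)
            for _ in range(n_iter):
                Xc = X - np.average(X, axis=0, weights=w)
                Yc = Y - np.average(Y, axis=0, weights=w)
                S = np.cov(np.hstack([Xc, Yc]).T, aweights=w)
                Sxx, Sxy, Syy = S[:p, :p], S[:p, p:], S[p:, p:]
                U, rho, Vt = np.linalg.svd(isqrt(Sxx) @ Sxy @ isqrt(Syy))
                A, B = isqrt(Sxx) @ U, isqrt(Syy) @ Vt.T   # canonical vectors
                mads = Xc @ A - Yc @ B                     # MAD variates
                z = (mads**2 / (2.0 * (1.0 - rho))).sum(axis=1)
                w = 1.0 - chi2.cdf(z, df=p)          # small weight where changed
            return mads, w
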
  • A MAP Approach for Joint Motion Estimation, Segmentation, and Super Resolution

    Publication Year: 2007, Page(s): 479 - 490
    Cited by:  Papers (59)  |  Patents (2)

    Super-resolution image reconstruction allows the recovery of a high-resolution (HR) image from several low-resolution images that are noisy, blurred, and downsampled. In this paper, we present a joint formulation for a complex super-resolution problem in which the scenes contain multiple independently moving objects. This formulation is built upon the maximum a posteriori (MAP) framework, which judiciously combines motion estimation, segmentation, and super resolution together. A cyclic coordinate descent optimization procedure is used to solve the MAP formulation, in which the motion fields, segmentation fields, and HR images are found in an alternating manner, each given the other two. Specifically, gradient-based methods are employed to solve for the HR image and motion fields, and an iterated conditional modes method is used to obtain the segmentation fields. The proposed algorithm has been tested on a synthetic image sequence, the "Mobile and Calendar" sequence, and the original "Motorcycle and Car" sequence. The experimental results and error analyses verify the efficacy of the algorithm.

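    A minimal sketch of the data-fidelity gradient step on the HR image, assuming a known Gaussian blur and integer motion per frame; the full method alternates such updates with motion-field and segmentation-field updates.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sr_gradient_step(x, ys, shifts, scale=2, lr=0.5, sigma=1.0):
            """x: HR estimate; ys: LR frames; shifts: integer (dy, dx) per frame."""
            g = np.zeros_like(x)
            for y, (dy, dx) in zip(ys, shifts):
                warped = np.roll(x, (dy, dx), axis=(0, 1))       # motion
                r = gaussian_filter(warped, sigma)[::scale, ::scale] - y
                up = np.zeros_like(x)
                up[::scale, ::scale] = r                         # adjoint of decimation
                g += np.roll(gaussian_filter(up, sigma),         # adjoint of blur/motion
                             (-dy, -dx), axis=(0, 1))
            return x - lr * g
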
  • Wiener Filter-Based Error Resilient Time-Domain Lapped Transform

    Publication Year: 2007, Page(s): 491 - 502
    Cited by:  Papers (4)

    In this paper, the design of the error resilient time-domain lapped transform is formulated as a linear minimal mean-squared error problem. The optimal Wiener solution and several simplifications with different tradeoffs between complexity and performance are developed. We also prove the persymmetric structure of these Wiener filters. The existing mean reconstruction method is proven to be a special case of the proposed framework. Our method also includes as a special case the linear interpolation method used in DCT-based systems when there is no pre-/postfiltering and the quantization noise is ignored. The design criteria in our previous results are scrutinized and improved solutions are obtained. Various design examples and multiple description image coding experiments are reported to demonstrate the performance of the proposed method.

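    A minimal sketch of the linear MMSE principle the design builds on: estimate a lost coefficient block x from received neighbors y as x_hat = R_xy R_yy^{-1} (y - m_y) + m_x, with the statistics here taken from training data (the paper derives the filter analytically for the lapped transform).

        import numpy as np

        def wiener_recover(X_train, Y_train, y):
            """Rows of X_train/Y_train are paired samples of lost/received blocks."""
            mx, my = X_train.mean(0), Y_train.mean(0)
            Ryy = np.cov(Y_train, rowvar=False)
            Rxy = (X_train - mx).T @ (Y_train - my) / (len(Y_train) - 1)
            W = Rxy @ np.linalg.inv(Ryy)             # the Wiener filter matrix
            return mx + W @ (y - my)
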
  • Detection of Gait Characteristics for Scene Registration in Video Surveillance System

    Publication Year: 2007, Page(s): 503 - 510
    Cited by:  Papers (8)

    This paper presents a robust walk-detection algorithm based on our symmetry approach, which can be used to extract gait characteristics from video image sequences. To obtain a useful descriptor of a walking person, we temporally track the symmetries of the person's legs. Our method is suitable for use in indoor or outdoor surveillance scenes. Determining the leading leg of the walking subject is important, and the presented method can identify it from two successive walk steps (one walk cycle). We tested the accuracy of the presented walk-detection method in a possible application: image registration methods are presented that are applicable to multicamera systems viewing human subjects in motion.

  • Correction of Simple Contrast Loss in Color Images

    Publication Year: 2007, Page(s): 511 - 522
    Cited by:  Papers (17)  |  Patents (5)

    This paper is concerned with the mitigation of simple contrast loss due to added lightness in an image. This added lightness has been referred to as "airlight" in the literature, since it is often caused by optical scattering due to fog or mist. A statistical model for scene content is formulated that gives a way of detecting the presence of airlight in an arbitrary image. An algorithm is described for estimating the level of this airlight under the assumption that it is constant throughout the image. This algorithm is based on finding the minimum of a global cost function and is applicable to both monochrome and color images. The method is robust and insensitive to scaling. Once an estimate of the airlight is obtained, image correction is straightforward. The performance of the algorithm is explored using Monte Carlo simulation with synthetic images under different statistical assumptions. Several before-and-after examples with color images are given. Results with real video data obtained in poor visibility conditions indicate frame-to-frame consistency of better than 1% of the maximum level.

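    A minimal sketch of the correction step under the constant-airlight model I = (1 - a)J + a; the percentile-based estimate of a below is a crude stand-in for the paper's cost-function minimization.

        import numpy as np

        def correct_airlight(img):
            """img: float array scaled to [0, 1]; returns the corrected estimate."""
            a = np.percentile(img, 1)                # assume darkest pixels ~ airlight
            return np.clip((img - a) / (1.0 - a), 0.0, 1.0)
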
  • Image Denoising by Averaging of Piecewise Constant Simulations of Image Partitions

    Publication Year: 2007, Page(s): 523 - 533
    Cited by:  Papers (5)

    This paper investigates the problem of image denoising when the image is corrupted by additive white Gaussian noise. We propose a spatially adaptive denoising method based on an averaging process performed on a set of Markov chain Monte Carlo simulations of region partition maps constrained to be spatially piecewise uniform (i.e., constant in the gray-level sense) within each estimated region. For the estimation of these region partition maps, we adopt the unsupervised Markovian framework, in which parameters are automatically estimated in the least squares sense. This sequential averaging allows us to obtain, under our image model, an approximation of the image to be recovered in the minimal mean-square-error sense. The experiments reported in this paper demonstrate that the discussed method performs competitively with, and sometimes better than, the best existing state-of-the-art wavelet-based denoising methods in benchmark tests.

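    A minimal sketch of the averaging principle: draw several piecewise constant approximations of the noisy image (here from randomly seeded gray-level quantizations, not the paper's Markovian partition sampler) and average them into the final estimate.

        import numpy as np

        def denoise_by_averaging(img, n_sims=20, n_levels=8, seed=0):
            rng = np.random.default_rng(seed)
            acc = np.zeros(img.shape)
            for _ in range(n_sims):
                edges = np.sort(rng.uniform(img.min(), img.max(), n_levels - 1))
                labels = np.digitize(img, edges)     # one random partition map
                sim = np.zeros(img.shape)
                for l in np.unique(labels):
                    sim[labels == l] = img[labels == l].mean()  # constant regions
                acc += sim
            return acc / n_sims
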
  • Iterative Regularization and Nonlinear Inverse Scale Space Applied to Wavelet-Based Denoising

    Publication Year: 2007, Page(s): 534 - 544
    Cited by:  Papers (19)

    In this paper, we generalize the iterative regularization method and the inverse scale space method, recently developed for total-variation (TV) based image restoration, to wavelet-based image restoration. This continues our earlier joint work with others, where we applied these techniques to variational-based image restoration, obtaining significant improvement over the Rudin-Osher-Fatemi TV-based restoration. Here, we apply these techniques to soft shrinkage and obtain the somewhat surprising results that a) the iterative procedure applied to soft shrinkage gives firm shrinkage and converges to hard shrinkage, and b) these procedures enhance the noise-removal capability both theoretically, in the sense of generalized Bregman distance, and, for some examples, experimentally, in terms of the signal-to-noise ratio, leaving less signal in the residual.

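    The iterative regularization step is short enough to sketch directly: shrink, then add the lost signal back and shrink again. Iterating plain soft shrinkage this way tightens it toward firm/hard shrinkage, as the abstract states (db4 and the threshold below are arbitrary illustrative choices).

        import numpy as np
        import pywt

        def iterative_shrink(img, wavelet='db4', thr=0.2, n_iter=5):
            """img: 2-D float array with even, preferably dyadic, side lengths."""
            v = np.zeros_like(img)
            for _ in range(n_iter):
                c = pywt.wavedec2(img + v, wavelet)
                c = [c[0]] + [tuple(pywt.threshold(d, thr, 'soft') for d in ds)
                              for ds in c[1:]]       # shrink detail subbands
                u = pywt.waverec2(c, wavelet)[:img.shape[0], :img.shape[1]]
                v += img - u                         # add the residual back
            return u
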
  • Video Inpainting Under Constrained Camera Motion

    Publication Year: 2007, Page(s): 545 - 553
    Cited by:  Papers (42)  |  Patents (3)

    A framework for inpainting missing parts of a video sequence recorded with a moving or stationary camera is presented in this work. The region to be inpainted is general: it may be still or moving, in the background or in the foreground, and it may occlude one object and be occluded by some other object. The algorithm consists of a simple preprocessing stage and two steps of video inpainting. In the preprocessing stage, we roughly segment each frame into foreground and background. We use this segmentation to build three image mosaics that help to produce time-consistent results and also improve the performance of the algorithm by reducing the search space. In the first video inpainting step, we reconstruct moving objects in the foreground that are "occluded" by the region to be inpainted. To this end, we fill the gap as much as possible by copying information from the moving foreground in other frames, using a priority-based scheme. In the second step, we inpaint the remaining hole with the background. To accomplish this, we first align the frames and directly copy when possible. The remaining pixels are filled in by extending spatial texture synthesis techniques to the spatiotemporal domain. The proposed framework has several advantages over state-of-the-art algorithms that deal with similar types of data and constraints: it permits some camera motion, is simple to implement and fast, does not require statistical models of the background or foreground, and works well in the presence of rich and cluttered backgrounds, with no visible blurring or motion artifacts in the results. A number of real examples taken with a consumer hand-held camera are shown in support of these findings.

  • A Successive Approximation Technique for Displaying Gray Shades in Liquid Crystal Displays (LCDs)

    Publication Year: 2007, Page(s): 554 - 561
    Cited by:  Papers (9)  |  Patents (3)

    A successive approximation technique based on conventional line-by-line addressing is proposed. A large number of gray shades can be displayed without flicker by using low-cost liquid crystal display drivers that are designed to drive the pixels to either ON or OFF states in bilevel displays.

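    A minimal sketch of the bit-weighted idea behind such schemes (an illustration, not the paper's drive waveforms): each subframe drives pixels fully ON or OFF according to one bit of the gray value, with dwell time proportional to the bit weight, so the time average renders the gray shade.

        import numpy as np

        def subframes(gray, bits=4):
            """gray: uint array with values in [0, 2**bits); yields (plane, dwell)."""
            for b in reversed(range(bits)):          # most significant bit first
                plane = (gray >> b) & 1              # bilevel drive pattern
                yield plane, 2 ** b                  # time slice ~ bit weight

        gray = np.arange(16).reshape(4, 4)
        frames = list(subframes(gray))               # four bilevel patterns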

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003