IEEE Transactions on Image Processing

Issue 8 • Aug. 2007

Displaying Results 1 - 25 of 32
  • Table of contents

    Publication Year: 2007 , Page(s): C1 - C4
  • IEEE Transactions on Image Processing publication information

    Publication Year: 2007 , Page(s): C2
  • Reduced-Complexity Delayed-Decision Algorithm for Context-Based Image Processing Systems

    Publication Year: 2007 , Page(s): 1937 - 1945
    Cited by:  Papers (1)  |  Patents (3)

    It is well known that the performance of context-based image processing systems can be improved by allowing the processor (e.g., an encoder or a denoiser) a delay of several samples before making a processing decision. Often, however, for such systems, traditional delayed-decision algorithms can become computationally prohibitive due to the growth in the size of the space of possible solutions. In this paper, we propose a reduced-complexity, one-pass, delayed-decision algorithm that systematically reduces the size of the search space, while also preserving its structure. In particular, we apply the proposed algorithm to two examples of adaptive context-based image processing systems: an image coding system that employs a context-based entropy coder, and a spatially adaptive image-denoising system. For these two types of widely used systems, we show that the proposed delayed-decision search algorithm outperforms instantaneous-decision algorithms with only a small increase in complexity. We also show that the performance of the proposed algorithm is better than that of other, higher complexity, delayed-decision algorithms.

  • Interconversion Between Truncated Cartesian and Polar Expansions of Images

    Publication Year: 2007 , Page(s): 1946 - 1955
    Cited by:  Papers (5)

    In this paper, we propose an algorithm for lossless conversion of data between Cartesian and polar coordinates, when the data is sampled from a 2-D real-valued function (a mapping ℝ² → ℝ) expressed as a particular kind of truncated expansion. We use Laguerre functions and the Fourier basis for the polar coordinate expression. Hermite functions are used for the Cartesian coordinate expression. A finite number of coefficients for the truncated expansion specifies the function in each coordinate system. We derive the relationship between the coefficients for the two coordinate systems. Based on this relationship, we propose an algorithm for lossless conversion between the two coordinate systems. Resampling can be used to evaluate a truncated expansion on the complementary coordinate system without computing a new set of coefficients. The resampled data is used to compute the new set of coefficients to avoid the numerical instability associated with direct conversion of the coefficients. In order to apply our algorithm to discrete image data, we propose a method to optimally fit a truncated expansion to a given image. We also quantify the error that this filtering process can produce. Finally, the algorithm is applied to solve the polar-Cartesian interpolation problem.

  • Robust Image Watermarking Based on Multiband Wavelets and Empirical Mode Decomposition

    Publication Year: 2007 , Page(s): 1956 - 1966
    Cited by:  Papers (29)

    In this paper, we propose a blind image watermarking algorithm based on the multiband wavelet transformation and the empirical mode decomposition. Unlike watermarking algorithms based on the traditional two-band wavelet transform, where the watermark bits are embedded directly on the wavelet coefficients, the proposed scheme embeds the watermark bits in the mean trend of some middle-frequency subimages in the wavelet domain. We further select an appropriate dilation factor and filters in the multiband wavelet transform to achieve better performance in terms of perceptual invisibility and robustness of the watermark. The experimental results show that the proposed blind watermarking scheme is robust against JPEG compression, Gaussian noise, salt-and-pepper noise, median filtering, and ConvFilter attacks. The comparative analysis demonstrates that our scheme performs better than recently reported watermarking schemes.

  • Time-Reversal MUSIC Imaging of Extended Targets

    Publication Year: 2007 , Page(s): 1967 - 1984
    Cited by:  Papers (32)

    This paper develops, within a general framework applicable to rather arbitrary electromagnetic and acoustic remote sensing systems, a theory of time-reversal "multiple signal classification" (MUSIC)-based imaging of extended (nonpoint-like) scatterers (targets). The general analysis applies to arbitrary remote sensing geometry and sheds light on how the singular system of the scattering matrix relates to the geometrical and propagation characteristics of the entire transmitter-target-receiver system, and how to use this effect for imaging. All the developments are derived within exact scattering theory, which includes multiple scattering effects. The derived time-reversal MUSIC methods include both interior sampling and exterior sampling (or enclosure) approaches. For presentation simplicity, particular attention is given to the time-harmonic case, where the informational wave modes employed for target interrogation are purely spatial, but the corresponding generalization to broadband fields is also given. This paper includes computer simulations illustrating the derived theory and algorithms.

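
    The core of a time-reversal MUSIC imager is compact: take the SVD of the multistatic response matrix, keep the trailing singular vectors as a noise subspace, and image the reciprocal of the projected steering-vector norm. The sketch below (numpy; the linear array, rank-1 Born-type response, and scalar Green's-function vector are illustrative assumptions, not the paper's exact setup) peaks at a synthetic point target:

    ```python
    import numpy as np

    def tr_music_pseudospectrum(K, array_pos, test_pts, k, n_sig):
        """Time-reversal MUSIC: project steering vectors onto the noise
        subspace of the multistatic response matrix K; the reciprocal of
        the projected norm peaks at scatterer locations."""
        U, s, Vh = np.linalg.svd(K)
        noise = U[:, n_sig:]                       # noise-subspace basis
        spec = np.empty(len(test_pts))
        for i, r in enumerate(test_pts):
            d = np.linalg.norm(array_pos - r, axis=1)
            g = np.exp(1j * k * d) / d             # scalar Green's-function vector
            g /= np.linalg.norm(g)
            spec[i] = 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-12)
        return spec

    k = 2 * np.pi                                  # wavenumber (unit wavelength)
    array_pos = np.column_stack([np.linspace(-2, 2, 16), np.zeros(16)])
    target = np.array([0.3, 3.0])
    d = np.linalg.norm(array_pos - target, axis=1)
    g_t = np.exp(1j * k * d) / d
    K = np.outer(g_t, g_t)                         # rank-1 Born-type response
    spec = tr_music_pseudospectrum(
        K, array_pos, [target, np.array([1.5, 4.0])], k, n_sig=1)
    ```

    For this single-scatterer example the pseudospectrum at the true target location dominates the value at an off-target point by many orders of magnitude.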
  • Statistical Reconstruction for Cosmic Ray Muon Tomography

    Publication Year: 2007 , Page(s): 1985 - 1993
    Cited by:  Papers (14)  |  Patents (3)

    Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work, we described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictate differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and to events departing from the statistical model.

  • Segmentation and Quantification of Human Vessels Using a 3-D Cylindrical Intensity Model

    Publication Year: 2007 , Page(s): 1994 - 2004
    Cited by:  Papers (11)

    We introduce a new approach for 3-D segmentation and quantification of vessels. The approach is based on a 3-D cylindrical parametric intensity model, which is directly fitted to the image intensities through an incremental process based on a Kalman filter. Segmentation results are the vessel centerline and shape, i.e., we estimate the local vessel radius, the 3-D position and 3-D orientation, the contrast, as well as the fitting error. We carried out an extensive validation using 3-D synthetic images and also compared the new approach with an approach based on a Gaussian model. In addition, the new model has been successfully applied to segment vessels from 3-D MRA and computed tomography angiography image data. In particular, we compared our approach with an approach based on the randomized Hough transform. Moreover, a validation of the segmentation results based on ground truth provided by a radiologist confirms the accuracy of the new approach. Our experiments show that the new model yields superior results in estimating the vessel radius compared to previous approaches based on a Gaussian model as well as the Hough transform.

  • Hierarchical Dynamic Range Coding of Wavelet Subbands for Fast and Efficient Image Decompression

    Publication Year: 2007 , Page(s): 2005 - 2015
    Cited by:  Papers (5)

    An image coding algorithm, progressive resolution coding (PROGRES), for a high-speed resolution scalable decoding is proposed. The algorithm is designed based on a prediction of the decaying dynamic ranges of wavelet subbands. Most interestingly, because of the syntactic relationship between the two coders, the proposed method costs an amount of bits very similar to that used by uncoded (i.e., not entropy coded) SPIHT. The algorithm bypasses bit-plane coding and complicated list processing of SPIHT in order to obtain a considerable speed improvement, giving up quality scalability, but without compromising coding efficiency. Since each tree of coefficients is separately coded, where the root of the tree corresponds to the coefficient in the LL subband, the algorithm is easily extensible to random access decoding. The algorithm is designed and implemented for both 2-D and 3-D wavelet subbands. Experiments show that the decoding speeds of the proposed coding model are four times and nine times faster than uncoded 2D-SPIHT and 3D-SPIHT, respectively, with almost the same decoded quality. The higher decoding speed gain in a larger image source validates the suitability of the proposed method to very large scale image encoding and decoding. In the Appendix, we explain the syntactic relationship of the proposed PROGRES method to uncoded SPIHT, and demonstrate that, in the lossless case, the bits sent to the codestream for each algorithm are identical, except that they are sent in different order.

  • Joint Source-Channel Rate Allocation in Parallel Channels

    Publication Year: 2007 , Page(s): 2016 - 2022
    Cited by:  Papers (5)

    A fast rate-optimal rate allocation algorithm is proposed for parallel transmission of scalable images in multichannel systems. Scalable images are transmitted via fixed-length packets. The proposed algorithm selects a subchannel, as well as a channel code rate for each packet, based on the signal-to-noise ratios (SNRs) of the subchannels. The resulting scheme provides unequal error protection of source bits, and significant gains are obtained over equal error protection schemes. An application of the proposed algorithm to JPEG2000 transmission shows the advantages of exploiting differences in SNRs between subchannels. Multiplexing of multiple sources is also considered, and additional gains are achieved by exploiting information diversity among the sources.

  • Spatially Variant Apodization for Squinted Synthetic Aperture Radar Images

    Publication Year: 2007 , Page(s): 2023 - 2027
    Cited by:  Papers (1)

    Spatially variant apodization (SVA) is a nonlinear sidelobe-reduction technique that lowers sidelobe levels while preserving resolution. The method implements a two-dimensional finite impulse response filter whose taps adapt to the image content. Previously published papers analyze SVA at the Nyquist rate, or at higher rates, focused on stripmap synthetic aperture radar (SAR). This paper shows that traditional SVA techniques fail when the sensor operates with a squint angle. The reasons for this behavior are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images to demonstrate the quality achieved along with efficient computation.

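
    For reference, classical (non-squinted) SVA at the Nyquist rate reduces to a per-sample choice of raised-cosine weight that minimizes the output magnitude. A minimal 1-D real-valued sketch (a simplification for illustration, not the paper's squint-mode algorithm; for complex data it is applied to the I and Q channels separately):

    ```python
    import numpy as np

    def sva_1d(x):
        """Spatially variant apodization at the Nyquist rate: per sample,
        pick the weight w in [0, 0.5] minimizing |x[n] + w*(x[n-1]+x[n+1])|."""
        y = x.copy().astype(float)
        for n in range(1, len(x) - 1):
            s = x[n - 1] + x[n + 1]
            if s == 0:
                continue
            w = -x[n] / s
            if w <= 0:
                y[n] = x[n]                 # unweighted sample already minimal
            elif w >= 0.5:
                y[n] = x[n] + 0.5 * s       # Hanning-weighted end of the range
            else:
                y[n] = 0.0                  # sidelobe nulled exactly
        return y

    n = np.arange(64)
    x = np.sinc(n - 31.4)                   # Nyquist-sampled point response
    y = sva_1d(x)
    ```

    Because the per-sample weight is chosen to minimize the output magnitude over a bounded range, no sample can grow: the mainlobe peak is kept while sidelobe samples are attenuated or nulled.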
  • Accurate Calculation of Image Moments

    Publication Year: 2007 , Page(s): 2028 - 2037
    Cited by:  Papers (8)

    Image moments have been extensively used as feature descriptors. However, the quantization error introduced when sampling discrete signals causes problems, especially for small images, and compromises the invariance properties of the moments. In this paper, we present a technique for calculating moments from a continuous signal, derived by piecewise polynomial interpolation of the corresponding discrete one. The computed moments exhibit significantly increased accuracy while requiring trivial computational effort. Zernike moments are then computed using the proposed scheme and are shown to display increased stability under geometric transformations.

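
    The discrete geometric moments that the paper takes as its starting point are a short computation. The sketch below uses the usual piecewise-constant approximation, which is exactly the source of the quantization error the paper addresses by interpolating the image first:

    ```python
    import numpy as np

    def moments(img, order=2):
        """Geometric moments m_pq = sum_x sum_y x^p y^q f(x, y), with f
        treated as piecewise constant on the pixel grid."""
        h, w = img.shape
        ygrid, xgrid = np.mgrid[0:h, 0:w].astype(float)
        m = {}
        for p in range(order + 1):
            for q in range(order + 1 - p):
                m[(p, q)] = np.sum(xgrid**p * ygrid**q * img)
        return m

    img = np.zeros((32, 32))
    img[10:20, 12:22] = 1.0                  # a 10x10 square of ones
    m = moments(img)
    cx, cy = m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)]  # centroid
    ```

    For this square, m_00 recovers the area (100) and the first-order moments recover the centroid, which is the usual sanity check before building central or Zernike moments on top.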
  • Motion Estimation in the 3-D Gabor Domain

    Publication Year: 2007 , Page(s): 2038 - 2047
    Cited by:  Patents (1)

    Motion estimation methods can be broadly classified as being spatiotemporal or frequency domain in nature. The Gabor representation is an analysis framework providing localized frequency information. When applied to image sequences, the 3-D Gabor representation displays spatiotemporal/spatiotemporal-frequency (st/stf) information, enabling the application of robust frequency domain methods with adjustable spatiotemporal resolution. In this work, the 3-D Gabor representation is applied to motion analysis. We demonstrate that piecewise uniform translational motion can be estimated by using a uniform translation motion model in the st/stf domain. The resulting motion estimation method exhibits both good spatiotemporal resolution and substantial noise resistance compared to existing spatiotemporal methods. To form the basis of this model, we derive the signature of the translational motion in the 3-D Gabor domain. Finally, to obtain higher spatiotemporal resolution for more complex motions, a dense motion field estimation method is developed to find a motion estimate for every pixel in the sequence.

  • Optimal Approach for Fast Object-Template Matching

    Publication Year: 2007 , Page(s): 2048 - 2057
    Cited by:  Papers (6)  |  Patents (1)

    This paper proposes a novel algorithm for an optimal reduction of object description for object matching purposes. Our aim is to decrease the computation needs by considering simplified objects, thus reducing the number of pixels involved in the matching process. We develop the appropriate theoretical background based on centroidal Voronoi tessellations. Its use within the chamfer matching framework is also discussed. We present experimental results regarding the performance of this approach for 2-D contour and region-like object matching. As a special case, we investigate how the snake-based representation of target objects can be employed in chamfer matching. The experimental results concern the use of object-part matching for recognizing humans and show how the proposed simplification leads to valid replacements of the original templates.

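
    Chamfer matching itself is compact: distance-transform the scene's edge map and average it over the translated template points. A minimal sketch (numpy/scipy; the square scene and the exhaustive template point set are illustrative, and the paper's centroidal-Voronoi simplification would simply shrink that point set):

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_score(scene_edges, template_pts, offset):
        """Chamfer score: mean distance-transform value of the scene edge
        map sampled at the translated template points (lower is better)."""
        dt = distance_transform_edt(scene_edges == 0)  # distance to nearest edge
        pts = template_pts + np.asarray(offset)
        return dt[pts[:, 0], pts[:, 1]].mean()

    # scene: the outline of a 20x20 square inside a 64x64 image
    scene = np.zeros((64, 64), dtype=int)
    scene[20, 20:40] = scene[39, 20:40] = 1
    scene[20:40, 20] = scene[20:40, 39] = 1

    # template: the same outline as (row, col) points relative to its corner
    tpl = np.argwhere(scene) - np.array([20, 20])
    good = chamfer_score(scene, tpl, (20, 20))   # aligned placement
    bad = chamfer_score(scene, tpl, (25, 25))    # shifted by (5, 5)
    ```

    The aligned placement scores exactly zero (every template point lands on an edge pixel), while the shifted placement accumulates positive distances; reducing the template point set trades a little score resolution for a proportional drop in per-position cost.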
  • A Novel Fast and Reduced Redundancy Structure for Multiscale Directional Filter Banks

    Publication Year: 2007 , Page(s): 2058 - 2068
    Cited by:  Papers (3)

    The multiscale directional filter bank (MDFB) improves the radial frequency resolution of the contourlet transform by introducing an additional decomposition in the high-frequency band. The increase in frequency resolution is particularly useful for texture description because of the quasi-periodic property of textures. However, the MDFB needs an extra set of scale and directional decomposition, which is performed on the full image size. The rise in computational complexity is, thus, prominent. In this paper, we develop an efficient implementation framework for the MDFB. In the new framework, directional decomposition on the first two scales is performed prior to the scale decomposition. This allows sharing of directional decomposition among the two scales and, hence, reduces the computational complexity significantly. Based on this framework, two fast implementations of the MDFB are proposed. The first one can maintain the same flexibility in directional selectivity in the first two scales, while the other has the same redundancy ratio as the contourlet transform. Experimental results show that the first and the second schemes can reduce the computational time by 33.3%-34.6% and 37.1%-37.5%, respectively, compared to the original MDFB algorithm. Meanwhile, the texture retrieval performance of the proposed algorithms is more or less the same as the original MDFB approach, which outperforms the steerable pyramid and the contourlet transform approaches.

  • A New Approach to Image Copy Detection Based on Extended Feature Sets

    Publication Year: 2007 , Page(s): 2069 - 2079
    Cited by:  Papers (10)

    Conventional image copy detection research concentrates on finding features that are robust enough to resist various kinds of image attacks. However, finding a globally effective feature is difficult and, in many cases, domain dependent. Instead of simply extracting features from copyrighted images directly, we propose a new framework called the extended feature set for detecting copies of images. In our approach, virtual prior attacks are applied to copyrighted images to generate novel features, which serve as training data. The copy-detection problem can then be solved by learning classifiers from the training data thus generated. Our approach can be integrated into existing copy detectors to further improve their performance. Experimental results demonstrate that the proposed approach substantially enhances the accuracy of copy detection.

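
    The framework can be illustrated end to end in a toy setting: attack each registered image virtually, keep the attacked features as labeled training data, and classify queries against them. Everything below (the quadrant-mean feature, the gain/noise attacks, and 1-NN in place of a learned classifier) is a deliberately simplified stand-in for the paper's components:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def feature(img):
        """Toy global feature: the four quadrant means (a stand-in for the
        robust features a real copy detector would extract)."""
        h, w = img.shape
        return np.array([img[:h//2, :w//2].mean(), img[:h//2, w//2:].mean(),
                         img[h//2:, :w//2].mean(), img[h//2:, w//2:].mean()])

    def extended_feature_set(images, n_attacks=20):
        """Virtual prior attacks (random gain and additive noise) applied to
        each copyrighted image; the attacked features become training data."""
        feats, labels = [], []
        for label, img in enumerate(images):
            for _ in range(n_attacks):
                attacked = img * rng.uniform(0.8, 1.2) \
                           + rng.normal(0, 0.02, img.shape)
                feats.append(feature(attacked))
                labels.append(label)
        return np.array(feats), np.array(labels)

    def detect(query, feats, labels):
        """Nearest-neighbour classification over the extended feature set."""
        d = np.linalg.norm(feats - feature(query), axis=1)
        return labels[np.argmin(d)]

    # three 'copyrighted' images with distinct brightness levels
    images = [np.full((16, 16), v) for v in (0.2, 0.5, 0.8)]
    feats, labels = extended_feature_set(images)
    query = images[1] + rng.normal(0, 0.02, (16, 16))  # a noisy copy of image 1
    ```

    Because the training set already contains attacked variants, a perturbed query lands nearest to its own source's attacked features rather than to the clean feature of a different image.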
  • Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering

    Publication Year: 2007 , Page(s): 2080 - 2095
    Cited by:  Papers (485)  |  Patents (23)

    We propose a novel image denoising strategy based on an enhanced sparse representation in the transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it in three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel we obtain many different estimates, which need to be combined. Aggregation is a particular averaging procedure exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.

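
    The grouping / 3-D transform / shrinkage chain can be sketched for a single reference block as follows (a toy sketch using a 3-D DCT and exhaustive block matching; the paper's full method adds aggregation of the overlapping estimates, a collaborative Wiener stage, and far more careful parameters):

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def collaborative_filter(noisy, ref_yx, block=8, n_match=8, thr=0.5):
        """One group of the collaborative-filtering step: find the blocks
        most similar to the reference block, stack them into a 3-D array,
        hard-threshold the 3-D DCT spectrum, and invert."""
        h, w = noisy.shape
        ry, rx = ref_yx
        ref = noisy[ry:ry + block, rx:rx + block]
        # exhaustive block matching by squared L2 distance
        cands = []
        for y in range(h - block + 1):
            for x in range(w - block + 1):
                b = noisy[y:y + block, x:x + block]
                cands.append((float(np.sum((b - ref) ** 2)), y, x))
        cands.sort()
        group = np.stack([noisy[y:y + block, x:x + block]
                          for _, y, x in cands[:n_match]])
        spec = dctn(group, norm='ortho')       # 3-D transform of the group
        spec[np.abs(spec) < thr] = 0.0         # shrinkage by hard thresholding
        return idctn(spec, norm='ortho')       # jointly filtered blocks

    noisy = 1.0 + np.random.default_rng(1).normal(0, 0.1, (16, 16))
    filtered = collaborative_filter(noisy, ref_yx=(4, 4))
    ```

    On this flat test patch the only large 3-D coefficient is the DC term, so thresholding removes essentially all of the noise energy while the group mean is preserved; on textured patches the jointly sparse structure shared by the matched blocks survives the shrinkage instead.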
  • Active Contour External Force Using Vector Field Convolution for Image Segmentation

    Publication Year: 2007 , Page(s): 2096 - 2106
    Cited by:  Papers (77)

    Snakes, or active contours, have been widely used in image processing applications. Typical roadblocks to consistent performance include limited capture range, noise sensitivity, and poor convergence to concavities. This paper proposes a new external force for active contours, called vector field convolution (VFC), to address these problems. VFC is calculated by convolving the edge map generated from the image with the user-defined vector field kernel. We propose two structures for the magnitude function of the vector field kernel, and we provide an analytical method to estimate the parameter of the magnitude function. Mixed VFC is introduced to alleviate the possible leakage problem caused by choosing inappropriate parameters. We also demonstrate that the standard external force and the gradient vector flow (GVF) external force are special cases of VFC in certain scenarios. Examples and comparisons with GVF are presented in this paper to show the advantages of this innovation, including superior noise robustness, reduced computational cost, and the flexibility of tailoring the force field.

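
    The VFC field itself is one convolution per component: build a kernel of unit vectors pointing at the kernel centre, scale them by a decaying magnitude function, and convolve with the edge map. A minimal sketch (the inverse-power magnitude is one of the two forms the abstract mentions; the radius and exponent values here are illustrative):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def vfc_field(edge_map, radius=15, gamma=1.7):
        """Vector field convolution: convolve the edge map with a kernel
        of unit vectors pointing at the kernel centre, with magnitude
        decaying as r^-gamma."""
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
        r = np.hypot(x, y)
        r[radius, radius] = 1.0               # avoid division by zero at origin
        m = r ** -gamma                       # magnitude function
        fx = fftconvolve(edge_map, m * (-x / r), mode='same')
        fy = fftconvolve(edge_map, m * (-y / r), mode='same')
        return fx, fy

    edges = np.zeros((64, 64))
    edges[32, 40] = 1.0                       # a single edge point
    fx, fy = vfc_field(edges)
    ```

    At any pixel within the kernel radius, the resulting force points toward the edge point, which is what pulls a snake across homogeneous regions and into concavities.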
  • A Topdown Algorithm for Computation of Level Line Trees

    Publication Year: 2007 , Page(s): 2107 - 2116
    Cited by:  Papers (1)

    We introduce an optimal topdown algorithm for computing and representing level line trees of 2-D intensity images. The running time of the algorithm depends on the size of the input image and the total length of all level lines. The properties of level line trees are also investigated. The efficiency of the algorithm is illustrated by experiments on images of different sizes and scenes.

  • Text Extraction and Document Image Segmentation Using Matched Wavelets and MRF Model

    Publication Year: 2007 , Page(s): 2117 - 2128
    Cited by:  Papers (26)

    In this paper, we propose a novel scheme for extracting the textual areas of an image using globally matched wavelet filters. A clustering-based technique is devised for estimating globally matched wavelet filters from a collection of ground-truth images. We extend our text extraction scheme to the segmentation of document images into text, background, and picture components (which include graphics and continuous-tone images). Multiple two-class Fisher classifiers are used for this purpose. We also exploit contextual information through a Markov random field formulation-based pixel-labeling scheme that refines the segmentation results. Experimental results establish the effectiveness of our approach.

  • Real-Time Decentralized Articulated Motion Analysis and Object Tracking From Videos

    Publication Year: 2007 , Page(s): 2129 - 2138
    Cited by:  Papers (8)

    In this paper, we present two new articulated motion analysis and object tracking approaches: the decentralized articulated object tracking method and the hierarchical articulated object tracking method. The first approach avoids the common practice of using a high-dimensional joint state representation for articulated object tracking. Instead, we introduce a decentralized scheme and model the interpart interaction within an innovative Bayesian framework. Specifically, we estimate the interaction density by an efficient decomposed interpart interaction model. To handle severe self-occlusions, we further extend the first approach by modeling high-level interunit interaction and develop the second algorithm within a consistent hierarchical framework. Preliminary experimental results have demonstrated the superior performance of the proposed approaches on real-world videos in both robustness and speed compared with other articulated object tracking methods.

  • Fast Template Matching With Polynomials

    Publication Year: 2007 , Page(s): 2139 - 2149
    Cited by:  Papers (11)

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image itself. The proposed algorithm is especially effective when the width and height of the template image differ from those of the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.

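
    The idea of matching against a polynomial surrogate can be illustrated by fitting a template with a 2-D Legendre basis via least squares (a sketch of the approximation step only, not the paper's matching algorithm; the degree and the Gaussian-blob template are illustrative choices):

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    def legendre_fit_2d(template, deg=6):
        """Least-squares fit of a template image by a 2-D Legendre
        polynomial of degree `deg` per axis; returns the coefficient
        matrix and the reconstructed approximation."""
        h, w = template.shape
        vy = L.legvander(np.linspace(-1, 1, h), deg)   # (h, deg+1)
        vx = L.legvander(np.linspace(-1, 1, w), deg)   # (w, deg+1)
        A = np.einsum('ip,jq->ijpq', vy, vx).reshape(h * w, -1)
        coef, *_ = np.linalg.lstsq(A, template.ravel(), rcond=None)
        approx = (A @ coef).reshape(h, w)
        return coef.reshape(deg + 1, deg + 1), approx

    yy, xx = np.mgrid[-1:1:32j, -1:1:32j]
    template = np.exp(-(xx**2 + yy**2) * 3)            # smooth blob template
    coef, approx = legendre_fit_2d(template)
    err = np.abs(template - approx).max()
    ```

    A smooth template is captured to a small residual by a low-order fit, so the (deg+1)² coefficients can stand in for the full pixel array during matching.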
  • Iterative Cross Section Sequence Graph for Handwritten Character Segmentation

    Publication Year: 2007 , Page(s): 2150 - 2154
    Cited by:  Papers (3)

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.

  • Estimating Planar Surface Orientation Using Bispectral Analysis

    Publication Year: 2007 , Page(s): 2154 - 2160
    Cited by:  Papers (3)

    In this correspondence, we propose a direct method for estimating the orientation of a plane from a single view under perspective projection. Assuming that the underlying planar texture has random phase, we show that the nonlinearities introduced by perspective projection lead to higher order correlations in the frequency domain. We also empirically show that these correlations are proportional to the orientation of the plane. Minimization of these correlations, using tools from polyspectral analysis, yields the orientation of the plane. We show the efficacy of this technique on synthetic and natural images.

  • IEEE Transactions on Image Processing EDICS

    Publication Year: 2007 , Page(s): 2161

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003