
IEEE Transactions on Image Processing

Issue 7 • July 2002


Displaying Results 1 - 11 of 11
  • Curvature of n-dimensional space curves in grey-value images

    Page(s): 738 - 745

    Local curvature is an important shape parameter of space curves, which are well described by differential geometry. We have developed an estimator for the local curvature of space curves embedded in n-dimensional (n-D) grey-value images. Neither a segmentation of the curve nor a parametric model is required. Our estimator works on the orientation field of the space curve. This orientation field, together with a description of the local structure, is obtained from the gradient structure tensor. The orientation field has discontinuities: walking around a closed contour yields two such discontinuities in orientation. The field is therefore mapped via the Knutsson (1985) mapping to a continuous representation, from an n-D vector field to a symmetric n²-D tensor field. The curvature of a space curve, a coordinate-invariant property, is computed in this tensor-field representation. An extensive evaluation shows that our curvature estimate is unbiased even in the presence of noise and independent of the scale of the object, and that the relative error stays small.
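
    As a rough, hypothetical illustration of the ingredients named above (the sketch below is 2-D only, whereas the paper treats the general n-D case): the gradient structure tensor gives a per-pixel orientation, a Knutsson-style mapping takes the orientation vector v to the sign-invariant tensor v v^T, and curvature follows from the rate of change of that tensor along the curve. The test image, smoothing scales, and NumPy/SciPy calls are choices made for this sketch, not taken from the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Synthetic test image: a blurred ring of radius R (true curvature = 1/R).
    R, size = 40.0, 128
    y, x = np.mgrid[:size, :size] - size / 2.0
    r = np.hypot(x, y)
    image = np.exp(-((r - R) ** 2) / (2 * 2.0 ** 2))

    # Gradient structure tensor, averaged over a local neighbourhood.
    gy, gx = np.gradient(gaussian_filter(image, 1.0))
    Jxx = gaussian_filter(gx * gx, 3.0)
    Jxy = gaussian_filter(gx * gy, 3.0)
    Jyy = gaussian_filter(gy * gy, 3.0)

    # Dominant orientation (gradient direction, defined only up to sign).
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    vx, vy = np.cos(theta), np.sin(theta)

    # Continuous, sign-invariant representation: the symmetric tensor M = v v^T.
    M = np.stack([vx * vx, vx * vy, vx * vy, vy * vy], axis=-1)

    # Curvature = rate of change of M along the curve tangent t = (-vy, vx),
    # divided by sqrt(2) (the Frobenius norm of dM/dtheta for a unit vector v).
    dMdy, dMdx = np.gradient(M, axis=(0, 1))
    dMds = -vy[..., None] * dMdx + vx[..., None] * dMdy
    kappa = np.linalg.norm(dMds, axis=-1) / np.sqrt(2.0)

    ring = np.abs(r - R) < 1.0
    print("estimated curvature on the ring:", kappa[ring].mean(), " true:", 1.0 / R)
    ```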

  • Automatic image orientation detection

    Page(s): 746 - 755

    We present an algorithm for automatic image orientation estimation using a Bayesian learning framework. We demonstrate that a small codebook (whose optimal size is selected using a modified MDL criterion) extracted from a learning vector quantizer (LVQ) can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. We further show how principal component analysis (PCA) and linear discriminant analysis (LDA) can be used as a feature extraction mechanism to remove redundancies in the high-dimensional feature vectors used for classification. The proposed method is compared with four commonly used classifiers: k-nearest neighbor, support vector machine (SVM), a mixture of Gaussians, and the hierarchical discriminating regression (HDR) tree. Experiments on a database of 16 344 images show that our algorithm achieves an accuracy of approximately 98% on the training set and over 97% on an independent test set. A slight further improvement in classification accuracy is achieved by employing classifier combination techniques.
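
    A hedged sketch of the overall pipeline, not the authors' code: PCA followed by LDA for feature reduction, a small per-class codebook (scikit-learn's KMeans is used here as a stand-in for LVQ), and Bayesian classification with class-conditional densities approximated by equal-weight Gaussian mixtures centred on the codewords. The synthetic features and all parameter choices are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_classes, dim = 4, 60                       # four orientations: 0/90/180/270 degrees
    X = rng.normal(size=(2000, dim))             # placeholder colour/edge features
    y = rng.integers(0, n_classes, size=2000)
    X += y[:, None] * 0.5                        # make the toy classes separable

    # Feature reduction: PCA removes redundancy, LDA maximises class separation.
    Z = PCA(n_components=20).fit_transform(X)
    lda = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit(Z, y)
    F = lda.transform(Z)

    # Small per-class codebook; the paper selects the codebook size via an MDL criterion.
    codebooks = [KMeans(n_clusters=8, n_init=10, random_state=0).fit(F[y == c]).cluster_centers_
                 for c in range(n_classes)]

    def log_likelihood(f, centers, sigma=1.0):
        """Class-conditional density as an equal-weight Gaussian mixture
        centred on the codewords (an assumption made for this sketch)."""
        d2 = ((f - centers) ** 2).sum(axis=1)
        return np.log(np.exp(-d2 / (2 * sigma ** 2)).mean() + 1e-300)

    def classify(f):
        # Uniform priors; pick the class maximising the posterior.
        return int(np.argmax([log_likelihood(f, cb) for cb in codebooks]))

    pred = np.array([classify(f) for f in F[:200]])
    print("training-sample accuracy on this toy data:", (pred == y[:200]).mean())
    ```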

  • Adaptive wavelet graph model for Bayesian tomographic reconstruction

    Page(s): 756 - 770

    We introduce an adaptive wavelet graph image model applicable to Bayesian tomographic reconstruction and other problems with nonlocal observations. The proposed model captures coarse-to-fine scale dependencies in the wavelet tree by modeling the conditional distribution of wavelet coefficients given overlapping windows of scaling coefficients that contain coarse-scale information. This results in a graph dependency structure more general than a quadtree, enabling the model to produce smooth estimates even for simple wavelet bases such as the Haar basis. The inter-scale dependencies of the wavelet graph model are specified by a spatially nonhomogeneous Gaussian distribution with parameters at each scale and location; these parameters are selected adaptively by nonlinear classification of coarse-scale data, with the adaptation mechanism trained on a set of example images. In conjunction with the wavelet graph model, we present a computationally efficient multiresolution reconstruction algorithm based on iterative Bayesian space-domain optimization with scale-recursive updates of the wavelet graph prior. In contrast to optimizing over the wavelet coefficients, the space-domain formulation facilitates enforcement of pixel positivity constraints. Results indicate that the proposed framework can improve reconstruction quality over fixed-resolution Bayesian methods.
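
    A toy, hypothetical version of the coarse-to-fine dependency described above: each Haar detail coefficient is modelled as conditionally Gaussian given an overlapping 3x3 window of scaling coefficients, with the conditional mean fitted by least squares. The paper's nonlinear, classification-based adaptation and the full graph structure are omitted; PyWavelets and the test image are choices made for this sketch.

    ```python
    import numpy as np
    import pywt
    from numpy.lib.stride_tricks import sliding_window_view

    rng = np.random.default_rng(1)
    # Smooth-ish random test image (cumulative sums of white noise).
    image = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)

    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")      # one Haar decomposition level

    # Overlapping 3x3 windows of scaling coefficients, one per interior detail coefficient.
    windows = sliding_window_view(cA, (3, 3)).reshape(-1, 9)
    targets = cH[1:-1, 1:-1].reshape(-1)

    # Linear-Gaussian conditional model: detail = A . window + Gaussian noise.
    A, *_ = np.linalg.lstsq(windows, targets, rcond=None)
    residual = targets - windows @ A
    print("conditional (residual) variance:", residual.var(),
          " raw detail variance:", targets.var())
    ```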

  • Forward-and-backward diffusion processes for adaptive image enhancement and denoising

    Page(s): 689 - 703

    Signal and image enhancement is considered in the context of a new type of diffusion process that simultaneously enhances, sharpens, and denoises images. The nonlinear diffusion coefficient is locally adjusted according to image features such as edges, textures, and moments, and can switch the diffusion from a forward to a backward (inverse) mode according to a given set of criteria. This results in a forward-and-backward (FAB) adaptive diffusion process that enhances features while denoising smoother segments of the signal or image. The proposed FAB process is applied in a super-resolution scheme and is further generalized to color processing via the Beltrami flow, by adaptively modifying the structure tensor that controls the nonlinear diffusion. The proposed structure tensor is neither positive nor negative definite; it switches between these states according to image features, so that different regions of the image are either forward or backward diffused according to the local geometry of their neighborhoods.
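
    A minimal sketch of a forward-and-backward diffusion step, assuming a FAB-type conductance that is positive for weak gradients (smoothing) and negative over a band of stronger gradients (sharpening). The conductance form, the parameters, and the explicit Euler scheme below are illustrative choices, not the paper's exact formulation.

    ```python
    import numpy as np

    def fab_conductance(g, kf=0.1, kb=0.3, w=0.1, alpha=0.8):
        forward = 1.0 / (1.0 + (g / kf) ** 2)           # Perona-Malik-like smoothing term
        backward = alpha / (1.0 + ((g - kb) / w) ** 2)  # negative band around gradient kb
        return forward - backward

    def fab_step(image, dt=0.05):
        gy, gx = np.gradient(image)
        c = fab_conductance(np.hypot(gx, gy))
        # Divergence of c * grad(I); dt must stay small because the backward
        # part of the flow is locally unstable by design.
        div = np.gradient(c * gx, axis=1) + np.gradient(c * gy, axis=0)
        return image + dt * div

    # Noisy step edge: FAB smooths the flat regions and steepens the edge.
    rng = np.random.default_rng(0)
    img = np.tile(np.linspace(0, 1, 64) > 0.5, (64, 1)).astype(float)
    img += 0.05 * rng.normal(size=img.shape)
    for _ in range(20):
        img = fab_step(img)
    ```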

  • Enhancing image watermarking methods with/without reference images by optimization on second-order statistics

    Page(s): 771 - 782

    Watermarking has emerged as an important tool for content tracing, authentication, and data hiding in multimedia applications. We propose a watermarking strategy in which the watermark of a host is selected from the robust features of estimated forged versions of the host, obtained from Monte Carlo simulations of potential pirate attacks on the host image. Applying an optimization technique to the second-order statistics of the features of the forged images yields two orthogonal subspaces: one captures most of the variation caused by modifications of the host, and the watermark is embedded in the other, which most potential pirate attacks leave untouched. The embedded watermark is therefore robust. Our method uses the same framework for watermark detection with a reference image and for blind detection. We demonstrate its performance under various levels of attack.
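
    A hypothetical sketch of the core idea only: simulate pirate attacks by Monte Carlo, take the covariance (second-order statistics) of the resulting feature changes, and embed a spread-spectrum mark along the eigen-directions the attacks perturb least. The feature vector, the attack model, and the embedding and detection rules are stand-ins invented for this illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    host = gaussian_filter(rng.normal(size=(64, 64)), 4.0)   # stand-in host image

    def features(img):
        # Simple feature vector: 8x8 block means (a low-dimensional summary).
        return img.reshape(8, 8, 8, 8).mean(axis=(1, 3)).ravel()

    def random_attack(img):
        # Monte Carlo pirate attack: random blur plus additive noise.
        sigma = rng.uniform(0.5, 2.0)
        return gaussian_filter(img, sigma) + 0.02 * rng.normal(size=img.shape)

    # Covariance of feature changes under simulated attacks.
    deltas = np.stack([features(random_attack(host)) - features(host) for _ in range(500)])
    eigvals, eigvecs = np.linalg.eigh(np.cov(deltas.T))

    # Embed a spread-spectrum mark along the least-perturbed directions.
    quiet = eigvecs[:, :8]                    # eight smallest-variance eigenvectors
    mark = rng.choice([-1.0, 1.0], size=8)
    marked_feat = features(host) + 0.05 * quiet @ mark

    # Detection after a fresh attack: correlate in the quiet subspace.
    attacked_feat = marked_feat + (features(random_attack(host)) - features(host))
    score = (quiet.T @ (attacked_feat - features(host))) @ mark
    print("detection score (positive indicates the mark):", score)
    ```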

  • Automatic segmentation and skeletonization of neurons from confocal microscopy images based on the 3-D wavelet transform

    Page(s): 790 - 801

    We focus on methods for preprocessing neurons in three-dimensional (3-D) confocal microscopy images, a prerequisite for subsequent detailed morphologic analysis. Because of the specific image properties of confocal microscopy scans, several heuristic approaches based on multiscale edges are needed to guarantee meaningful results: (1) a reliable segmentation of objects of different sizes, independent of image contrast; (2) the computation of skeleton points along the branch central axes; and (3) the reliable detection of branching points and of problematic regions. These preprocessing steps gather the information required for the subsequent construction of a graph representing the geometry of the neuron and for a final surface reconstruction.
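
    A hedged preprocessing sketch in the spirit of the pipeline above, on a synthetic volume: a multiscale difference-of-Gaussians response stands in for the multiscale-edge segmentation, scikit-image's skeletonize extracts the centreline, and branching points are flagged as skeleton voxels with more than two neighbours. Thresholds, scales, and the test data are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.morphology import skeletonize   # handles 3-D volumes in recent scikit-image

    # Synthetic "neurite": a bright bent tube in a noisy 3-D volume.
    z, y, x = np.mgrid[:64, :64, :64]
    tube = (np.hypot(y - 32, x - (20 + 0.3 * z)) < 3).astype(float)
    volume = tube + 0.2 * np.random.default_rng(0).normal(size=tube.shape)

    # Contrast-independent, multiscale edge-like response: difference of Gaussians
    # at several scales, combined by taking the maximum response.
    scales = [1.0, 2.0, 4.0]
    response = np.max([ndi.gaussian_filter(volume, s) - ndi.gaussian_filter(volume, 2 * s)
                       for s in scales], axis=0)
    mask = response > 0.1                        # illustrative threshold

    # Keep the largest connected component and skeletonize it.
    labels, n = ndi.label(mask)
    sizes = ndi.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (1 + int(np.argmax(sizes)))
    skeleton = skeletonize(mask).astype(bool)

    # Branch points: skeleton voxels with more than two 26-connected neighbours.
    neighbours = ndi.convolve(skeleton.astype(int), np.ones((3, 3, 3)), mode="constant") - 1
    branch_points = skeleton & (neighbours > 2)
    print("skeleton voxels:", int(skeleton.sum()), " branch points:", int(branch_points.sum()))
    ```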

  • Cooperation of color pixel classification schemes and color watershed: a study for microscopic images

    Page(s): 783 - 789

    We study the cooperation of two color pixel classification schemes (Bayesian and K-means classification) with a color watershed. Because color pixel classification alone does not extract color regions accurately enough, we propose a strategy based on three steps: simplification, classification, and color watershed. The color watershed relies on a new aggregation function that combines local and global criteria. The strategy is applied to microscopic images, and quantitative measures are used to evaluate the resulting segmentations against a learning set of reference images.
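
    A minimal sketch of the three-step strategy (simplification, classification, watershed) on a synthetic two-class colour image. K-means from scikit-learn stands in for the pixel classifier (the paper also studies a Bayesian classifier), and scikit-image's plain gradient-based watershed replaces the paper's colour watershed with its new aggregation function.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.cluster import KMeans
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    rng = np.random.default_rng(0)
    # Synthetic "microscopic" image: a dark-purple blob on a pink background.
    img = np.ones((128, 128, 3)) * np.array([0.9, 0.6, 0.7])
    yy, xx = np.mgrid[:128, :128]
    img[(yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2] = [0.4, 0.2, 0.5]
    img += 0.05 * rng.normal(size=img.shape)

    # 1) Simplification: light smoothing in each colour channel.
    smooth = np.stack([ndi.gaussian_filter(img[..., c], 1.5) for c in range(3)], axis=-1)

    # 2) Pixel classification: K-means in colour space.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        smooth.reshape(-1, 3)).reshape(128, 128)

    # 3) Watershed: markers from eroded class regions, flooded on a colour gradient.
    gradient = sum(sobel(smooth[..., c]) for c in range(3))
    markers = np.zeros((128, 128), dtype=int)
    for k in range(2):
        markers[ndi.binary_erosion(labels == k, iterations=5)] = k + 1
    segmentation = watershed(gradient, markers)
    print("segment sizes:", np.bincount(segmentation.ravel())[1:])
    ```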

  • On the reconstruction of height functions and terrain maps from dense range data

    Page(s): 704 - 716

    This paper describes a method for combining multiple dense range images to create surface reconstructions of height functions. Height functions are a special class of three-dimensional (3-D) surfaces in which one 3-D coordinate is a function of the other two; they are relevant to application domains such as terrain modeling and two-and-a-half-dimensional surface reconstruction. Dense range maps are produced either by a range-measuring device combined with a scanning mechanism or by a triangulation scheme such as active or passive stereo. The proposed method follows from a statistical formulation that characterizes the optimal surface estimate as the one maximizing the posterior probability conditioned on the input data and on prior information about the application domain. Because the domain of the reconstruction is a two-dimensional (2-D) scalar function, the optimal surface can be expressed as an image, and the variational form of the optimization produces a 2-D partial differential equation (PDE) consisting of a first-order data term and a second-order smoothing term. Optimal surface reconstruction is thus formulated as the solution of a second-order, nonlinear PDE on an image, related to the family of PDE-based image processing algorithms in the literature. The paper presents the theory for reconstruction and particular aspects of the numerical implementation, and analyzes results on both synthetic and real data sets, which show a 75%-95% reduction of the RMS sensor error.
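
    A much-simplified, hypothetical sketch of the reconstruction idea: fuse several noisy range maps of a height function and refine the estimate by explicit iterations of a PDE with a data term and a second-order (Laplacian) smoothing term. The paper's actual data and smoothing terms are more sophisticated; the terrain, noise level, and parameters below are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[:128, :128] / 128.0
    truth = np.sin(4 * xx) * np.cos(3 * yy)                 # ground-truth terrain
    scans = [truth + 0.1 * rng.normal(size=truth.shape) for _ in range(4)]

    h = np.mean(scans, axis=0)                              # initial fused estimate
    z = h.copy()
    lam, dt = 2.0, 0.1                                      # dt*lam kept below the stability limit
    for _ in range(200):
        h += dt * ((z - h) + lam * laplace(h))              # explicit PDE iteration

    rms = lambda e: np.sqrt(np.mean(e ** 2))
    print("sensor RMS error:", rms(scans[0] - truth),
          " reconstruction RMS error:", rms(h - truth))
    ```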

  • On optimal linear filtering for edge detection

    Page(s): 728 - 737

    In this paper, we revisit the analytical expressions of Canny's (1983) three criteria for edge detection quality: good detection, good localization, and low multiplicity of false detections. Our work differs from Canny's on two essential points. First, the criteria are given for discrete sampled signals, i.e., for the real, implemented filters. Second, instead of a single step edge as the input signal, we use pulses of various widths, so that the proximity of other edges, which affects the quality of the detection process, is taken into account in the new expressions of the criteria. We derive optimal filters for each criterion and for any combination of them; in particular, we define an original filter that maximizes detection and localization, and a simple approximation of the optimal filter for the simultaneous maximization of all three criteria. The upper bounds of the criteria are computed, which allows users to measure the absolute and relative performance of any filter (exponential, Deriche (1987), and first-derivative-of-Gaussian filters are evaluated). Our criteria can also be used to compute the optimal value of the scale parameter of a given filter when the resolution of the detection is fixed.
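
    For orientation only, the sketch below evaluates naive discretisations of Canny's detection and localization criteria for a sampled first-derivative-of-Gaussian filter on a step edge in unit-variance white noise; the paper derives properly discrete criteria and also handles pulses of finite width, which this toy version does not.

    ```python
    import numpy as np

    def fdog(sigma):
        """Sampled first derivative of a Gaussian (a common edge filter)."""
        radius = int(4 * sigma)
        x = np.arange(-radius, radius + 1, dtype=float)
        return -x * np.exp(-x ** 2 / (2 * sigma ** 2))

    def canny_criteria(f):
        r = len(f) // 2                              # index of x = 0
        # Detection (SNR): response to a step edge over the output noise level.
        detection = abs(f[:r + 1].sum()) / np.sqrt((f ** 2).sum())
        # Localization: based on the discrete derivative of the filter at the edge.
        df = np.gradient(f)
        localization = abs(df[r]) / np.sqrt((df ** 2).sum())
        return detection, localization

    for sigma in (1.0, 2.0, 4.0):
        d, l = canny_criteria(fdog(sigma))
        print(f"sigma={sigma}: detection={d:.3f}  localization={l:.3f}  product={d * l:.3f}")
    ```

    As expected, enlarging the filter scale improves detection while degrading localization, which is the trade-off the paper's optimal filters are designed to balance.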

  • 3-D wavelet compression and progressive inverse wavelet synthesis rendering of concentric mosaic

    Page(s): 802 - 816

    Built from an array of photo shots, the concentric mosaic offers a quick way to capture and model a realistic three-dimensional (3-D) environment. We compress the concentric mosaic image array with a 3-D wavelet transform and coding scheme. Our compression algorithm and bitstream syntax are designed so that rendering a local view of the environment requires only a partial bitstream, eliminating the need to decompress the entire compressed bitstream before rendering. By exploiting the ladder-like structure of the wavelet lifting scheme, the proposed progressive inverse wavelet synthesis (PIWS) algorithm minimizes the computational cost of selective data access on such wavelet-compressed datasets. Experimental results show that the 3-D wavelet coder achieves high compression performance, and that, with the PIWS algorithm, a 3-D environment can be rendered in real time from the compressed dataset.
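
    A hypothetical sketch of the compression side only: treat the mosaic as a 3-D array (frame, row, column), apply a 3-D wavelet transform with PyWavelets, and keep the largest coefficients as a crude stand-in for the paper's coder. The bitstream syntax and the PIWS selective-rendering algorithm are not reproduced; the synthetic mosaic, the wavelet, and the retention rate are assumptions.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    # Stand-in mosaic: 32 "camera shots" that drift slowly from frame to frame.
    base = np.cumsum(rng.normal(size=(64, 80)), axis=1)
    mosaic = np.stack([np.roll(base, k, axis=1) for k in range(32)])

    coeffs = pywt.wavedecn(mosaic, "bior2.2", level=2)        # 3-D wavelet transform
    arr, slices = pywt.coeffs_to_array(coeffs)

    keep = 0.05                                               # keep the top 5% of coefficients
    threshold = np.quantile(np.abs(arr), 1 - keep)
    arr_compressed = np.where(np.abs(arr) >= threshold, arr, 0.0)

    recon = pywt.waverecn(
        pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedecn"),
        "bior2.2")[:32, :64, :80]
    mse = np.mean((recon - mosaic) ** 2)
    psnr = 10 * np.log10(np.ptp(mosaic) ** 2 / mse)
    print(f"kept {keep:.0%} of coefficients, PSNR = {psnr:.1f} dB")
    ```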

  • Weighted median image sharpeners for the World Wide Web

    Page(s): 717 - 727

    A class of robust weighted median (WM) sharpening algorithms is developed in this paper. Unlike traditional linear sharpening methods, weighted median sharpeners are shown to be less sensitive to background random noise and to image artifacts introduced by JPEG and other compression algorithms. These concepts are extended to include data-dependent weights under the framework of permutation weighted medians, leading to tunable sharpeners that are essentially insensitive to noise and compression artifacts. Permutation WM sharpeners are then generalized to smoother/sharpener structures that sharpen edges and image details while simultaneously filtering out background random noise. A statistical analysis of the various algorithms is presented, theoretically validating the characteristics of the proposed sharpening structures. Experiments are shown for the sharpening of JPEG-compressed images and of images with background film-grain noise. These algorithms can prove useful for enhancing compressed or noisy images posted on the World Wide Web (WWW), as well as in other applications where the underlying images are unavoidably acquired with noise.
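
    A minimal sketch of the underlying idea, assuming an unsharp-masking structure in which the linear lowpass is replaced by a weighted median so that impulsive artifacts are not spread into ringing; the paper's permutation-weighted and smoother/sharpener structures are more elaborate. The weights, the gain, and the test signal below are illustrative.

    ```python
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    def weighted_median(window, weights):
        """Weighted median via sample replication (integer weights)."""
        return np.median(np.repeat(window, weights))

    def wm_sharpen(signal, weights=(1, 2, 3, 2, 1), lam=1.5):
        weights = np.asarray(weights)
        pad = len(weights) // 2
        padded = np.pad(signal, pad, mode="edge")
        windows = sliding_window_view(padded, len(weights))
        lowpass = np.array([weighted_median(w, weights) for w in windows])
        return signal + lam * (signal - lowpass)   # unsharp masking with a WM lowpass

    # Blurred step edge with an impulsive outlier: both sharpeners steepen the
    # edge, but only the linear one spreads ringing around the impulse.
    x = np.convolve(np.r_[np.zeros(20), np.ones(20)], np.ones(5) / 5, mode="same")
    x[10] += 0.8                                   # impulsive "compression artifact"
    linear = x + 1.5 * (x - np.convolve(x, np.ones(5) / 5, mode="same"))
    robust = wm_sharpen(x)
    print("ringing next to the outlier  linear:", round(linear[12], 2),
          " weighted median:", round(robust[12], 2))
    ```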


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003