IEEE Transactions on Image Processing

Issue 8 • August 1999


Displaying Results 1-18 of 18
  • Convergence index filter for vector fields

    Publication Year: 1999, Page(s): 1029-1038
    Cited by: Papers (26) | Patents (4)

    This paper proposes a unique filter called an iris filter, which evaluates the degree of convergence of the gradient vectors within its region of support toward a pixel of interest. The degree of convergence is related to the distribution of the directions of the gradient vectors, not to their magnitudes. The convergence index of a gradient vector at a given pixel is defined as the cosine of its orientation with respect to the line connecting that pixel and the pixel of interest. The output of the iris filter is the average of the convergence indices within its region of support and lies within the range [-1,1]. The region of support of the iris filter changes so that the degree of convergence of the gradient vectors within it is maximized, i.e., the size and shape of the region of support at each pixel of interest adapt to the distribution pattern of the surrounding gradient vectors. A theoretical analysis using models of a rounded convex region and a semi-cylindrical one shows that rounded convex regions are generally enhanced, even when their contrast against the background is weak, while elongated objects are suppressed. The filter output is 1/π at the boundaries of both rounded convex regions and semi-cylindrical ones, independent of their contrast against the background. This indicates that boundaries of rounded or slender objects with weak contrast are enhanced by the iris filter, and that the value 1/π can be used to detect these boundaries. These theoretical characteristics are confirmed by experiments using X-ray images. (An illustrative code sketch follows this entry.)

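    A minimal sketch of the convergence index at one pixel of interest, assuming a fixed circular region of support (the iris filter itself adapts the region's size and shape, which is omitted here); the image, radius, and coordinates are illustrative.

    import numpy as np

    def convergence_index(image, y0, x0, radius=8):
        """Average cosine between each gradient vector in a circular support
        region and the line from that pixel toward the pixel of interest."""
        gy, gx = np.gradient(image.astype(float))
        h, w = image.shape
        indices = []
        for y in range(max(0, y0 - radius), min(h, y0 + radius + 1)):
            for x in range(max(0, x0 - radius), min(w, x0 + radius + 1)):
                if (y, x) == (y0, x0) or (y - y0) ** 2 + (x - x0) ** 2 > radius ** 2:
                    continue
                mag = np.hypot(gy[y, x], gx[y, x])
                if mag == 0:
                    indices.append(0.0)      # flat pixel: no gradient direction
                    continue
                dy, dx = y0 - y, x0 - x      # direction toward the pixel of interest
                indices.append((gy[y, x] * dy + gx[y, x] * dx) / (mag * np.hypot(dy, dx)))
        return float(np.mean(indices))       # lies in [-1, 1]
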
  • A genetic algorithm for the estimation of ridges in fingerprints

    Publication Year: 1999, Page(s): 1134-1139
    Cited by: Papers (2)

    A genetic algorithm is developed to find the ridges in paper fingerprints. It is based on the fact that the ridges of a fingerprint are parallel. When the fingerprint is scanned line by line, the ideal noise-free gray-level distribution yields alternating lines of black and white whose widths are not constant. The proposed genetic algorithm generates black and white lines of different widths, which are adjusted until the best match with the original fingerprint is obtained. (An illustrative code sketch follows this entry.)

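    A minimal sketch of the representation and fitness step on a single scanline, assuming ridges are encoded as alternating black/white run lengths; the encoding, mutation rule, and all parameters are illustrative assumptions, not the paper's exact operators.

    import random

    def render(runs, length, start_black=True):
        """Expand alternating run widths into a 0/255 scanline (0 = ridge)."""
        line, black = [], start_black
        for width in runs:
            line.extend([0 if black else 255] * width)
            black = not black
        return line[:length] + [255] * max(0, length - len(line))

    def fitness(runs, observed):
        """Negative absolute error between the rendered and observed scanlines."""
        line = render(runs, len(observed))
        return -sum(abs(a - b) for a, b in zip(line, observed))

    def mutate(runs):
        """Widen or narrow one randomly chosen run by one pixel."""
        out = list(runs)
        i = random.randrange(len(out))
        out[i] = max(1, out[i] + random.choice((-1, 1)))
        return out
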
  • Postprocessing for very low bit-rate video compression

    Publication Year: 1999, Page(s): 1125-1129
    Cited by: Papers (22) | Patents (14)

    This paper presents a novel postprocessing algorithm developed specifically for very low bit-rate MC-DCT video coders operating at low spatial resolution. Postprocessing is difficult in this situation because the low sampling rate (relative to the image feature size) makes it very easy to overfilter, producing excessive blurring. The proposed algorithm uses pixel-by-pixel processing to identify and reduce both blocking artifacts and mosquito noise while attempting to preserve the sharpness and naturalness of the reconstructed video signal and minimize system complexity. Experimental results show that the algorithm successfully reduces artifacts in a 16 kb/s scene-adaptive coder for video signals sampled at 80×112 pixels per frame and 5-10 frames/s. Furthermore, the portability of the proposed algorithm to other block-DCT-based compression systems is shown by applying it, without modification, to successfully postprocess a JPEG-compressed image.

  • Postprocessing of images by filtering the unmasked coding noise

    Publication Year: 1999, Page(s): 1050-1062
    Cited by: Papers (5) | Patents (3)

    This paper presents a methodology for restoring the visual quality of still images affected by coding noise. The restoration considers only the additive coding noise and is therefore limited to adaptive postprocessing filtering. It is based on a model of the human visual system that considers the relationship between visual stimuli and their visibility. This phenomenon, known as masking, is used as the criterion for the locally adaptive filter design. An image transformation that yields visual stimuli tuned in frequency and orientation according to the perceptual model is proposed. It allows a local measure of the masking of each perceptual stimulus, considering the contrast between the signal and the estimated noise; this measure is obtained by analytic filtering. Processing schemes are presented with applications to discrete cosine transform (DCT) and subband coded images. One proposed solution exploits the characteristics of DCT coding noise to estimate the noise. Another is based on a “blind” neural estimation of the noise characteristics. Experimental results of the proposed approaches show significant improvements in visual quality, which validates our perceptual model and filtering.

  • An efficient motion vector coding scheme based on minimum bitrate prediction

    Publication Year: 1999, Page(s): 1117-1120
    Cited by: Papers (23) | Patents (8)

    Motion vector coding efficiency is becoming an important issue in low bitrate video coding because motion vectors consume an increasing share of the bits. This work presents a new motion vector coding technique based on minimum bitrate prediction. In the proposed scheme, the predicted motion vector is chosen from the three causal neighboring motion vectors as the one that yields the minimum bitrate for motion vector difference coding. The prediction error, or motion vector difference (MVD), and the mode information (MODE) that lets the decoder determine the predicted motion vector are then coded and transmitted in that order. By sending the bits for the MVD ahead of the bits for the MODE, the scheme minimizes the bit amount for the MODE, exploiting the fact that the minimum bitrate predictor is used for motion vector prediction. By adaptively combining this minimum bitrate prediction scheme with the conventional model-based prediction scheme, still more efficient motion vector coding is achieved. The proposed scheme improves coding efficiency noticeably for various video sequences. (An illustrative code sketch follows this entry.)

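    A minimal sketch of minimum-bitrate predictor selection, assuming a signed exponential-Golomb code as the bit-cost model for MVD components (the paper's actual VLC table may differ); the neighbor values in the example are illustrative.

    import math

    def mvd_bits(v):
        """Bit cost of one signed MVD component under an exp-Golomb model."""
        k = 2 * abs(v) - (1 if v > 0 else 0)   # maps 0, +1, -1, +2, ... to 0, 1, 2, 3, ...
        return 2 * int(math.log2(k + 1)) + 1

    def predict_mv(neighbors, mv):
        """Choose, among the causal neighbor MVs, the predictor whose motion
        vector difference costs the fewest bits. Returns (mode, mvd)."""
        costs = [mvd_bits(mv[0] - nx) + mvd_bits(mv[1] - ny) for nx, ny in neighbors]
        mode = costs.index(min(costs))
        nx, ny = neighbors[mode]
        return mode, (mv[0] - nx, mv[1] - ny)

    # Example: the second neighbor predicts (2, 1) exactly, so MVD = (0, 0).
    print(predict_mv([(0, 0), (2, 1), (2, 0)], (2, 1)))   # -> (1, (0, 0))
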
  • A fast algorithm for designing stack filters

    Publication Year: 1999, Page(s): 1014-1028
    Cited by: Papers (9)

    Stack filters are a class of nonlinear filters with excellent properties for signal restoration. Unfortunately, present algorithms for designing stack filters can only be used for small window sizes because of either their computational overhead or their serial nature. This paper presents a new adaptive algorithm for determining a stack filter that minimizes the mean absolute error criterion. The new algorithm retains the iterative nature of many current adaptive stack filtering algorithms but significantly reduces the number of iterations required to converge to an optimal filter. It is simple to implement and is shown in this paper to always converge to an optimal stack filter. Extensive comparisons between this new algorithm and all existing algorithms are provided, based on both the performance of the resulting filters and the time and space complexity of the algorithms. They demonstrate that the new algorithm has three advantages: it is faster than all other available algorithms; it can be used on standard workstations (a SPARC 5 with 48 MB) to design filters with windows containing 20 or more points; and its highly parallel structure allows very fast implementations on parallel machines. The new algorithm also allows cascades of stack filters to be designed; stack filters with windows containing 72 points have been designed in a matter of minutes under this approach. (An illustrative code sketch follows this entry.)

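    A minimal sketch of running a stack filter by threshold decomposition, using the window majority (the binary median) as the positive Boolean function; the paper's contribution is the adaptive design of that Boolean function, which is not reproduced here.

    import numpy as np

    def stack_filter(signal, window=3, levels=256):
        """Apply a stack filter: threshold at each level, apply a positive
        Boolean function to each binary window, and stack (sum) the results."""
        signal = np.asarray(signal, dtype=int)
        pad = window // 2
        padded = np.pad(signal, pad, mode='edge')
        out = np.zeros(len(signal), dtype=int)
        for t in range(1, levels):
            binary = (padded >= t).astype(int)          # threshold decomposition
            for i in range(len(signal)):
                # Positive Boolean function: majority vote over the window.
                out[i] += int(binary[i:i + window].sum() > window // 2)
        return out

    # With the majority PBF this reproduces the running median:
    print(stack_filter([10, 200, 12, 11, 13], window=3))   # [10 12 12 12 13]
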
  • Nonuniformity correction of infrared image sequences using the constant-statistics constraint

    Publication Year: 1999, Page(s): 1148-1151
    Cited by: Papers (40) | Patents (4)

    Using clues from neurobiological adaptation, we have developed the constant-statistics (CS) algorithm for nonuniformity correction of infrared focal-plane arrays (IRFPAs) and other imaging arrays. The CS model admits an efficient implementation that can also eliminate much of the ghosting artifact that plagues all scene-based nonuniformity correction (NUC) algorithms. The CS algorithm with deghosting is demonstrated on synthetic and real infrared (IR) sequences and shown to improve the overall accuracy of the correction procedure. (An illustrative code sketch follows this entry.)

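    A minimal sketch of constant-statistics correction, assuming each detector observes the same temporal mean and standard deviation over the sequence; the batch (whole-sequence) estimate and rescaling used here are illustrative, and the deghosting step is omitted.

    import numpy as np

    def cs_correct(frames):
        """frames: (T, H, W) raw IR sequence. Returns the corrected sequence."""
        frames = frames.astype(float)
        offset = frames.mean(axis=0)          # per-detector offset estimate
        gain = frames.std(axis=0) + 1e-8      # per-detector gain estimate
        z = (frames - offset) / gain          # equalize detector statistics
        # Rescale to a common global range for display.
        return z * gain.mean() + offset.mean()
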
  • Feedback control strategies for object recognition

    Publication Year: 1999, Page(s): 1084-1101
    Cited by: Papers (11)

    We present a paradigm for feedback strategies that find instances of a generic class of objects, improving on established single-pass hypothesis generation and verification approaches. We improve upon the mechanisms of traditional image processing systems by introducing control strategies at low, intermediate, and high levels of analysis. We produce optimal sets of low-level features to reduce the number of hypotheses generated. The feedback further enables updated sets of features to be extracted so that the target object may be located even in very noisy data. The use of an interest operator in the feedback directs the search through the hypotheses in an optimal manner, minimizing the amount of feedback due to false alarms. Furthermore, we aim to obtain detailed information about a complex object, not just its location; thus, following top-down recognition of the object, our feedback control directs the search for missing information. The system can extract complex objects in a scale- and rotation-independent manner even when the objects are partially occluded. The method is illustrated using box-shaped objects and noisy IR images of a number of bridges.

  • Adaptive approximation bounds for vertex based contour encoding

    Publication Year: 1999, Page(s): 1142-1147
    Cited by: Papers (4)

    When approximating the shape of a region, a fixed bound on the tolerable distortion is usually set for approximating its contour points. Here, an adaptive approximation bound for lossy coding of the contour points is proposed. A function representing the relative significance of the contour points is defined to adjust the distortion bound along the region contour, allowing an adaptive approximation of the region shape. The effectiveness of the adaptive contour coding approach for a region-based coding system is verified through experiments.

  • Image enhancement based on signal subspace approach

    Publication Year: 1999, Page(s): 1129-1134
    Cited by: Papers (4) | Patents (1)

    This paper describes an image enhancement algorithm that operates on a block-by-block basis, using the signal subspace method to enhance images corrupted by uncorrelated additive noise. The enhancement is performed by eliminating the noise components in the noise subspace and estimating the clean image from the remaining components in the signal subspace. (An illustrative code sketch follows this entry.)

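    A minimal sketch of block-wise signal-subspace enhancement, assuming known noise variance, non-overlapping 8x8 blocks treated as vectors, and an eigenvalue threshold as the subspace rule; all of these are illustrative simplifications of the paper's method.

    import numpy as np

    def subspace_denoise(image, block=8, noise_var=25.0):
        """Keep only principal components whose energy exceeds the noise floor."""
        h, w = image.shape
        h, w = h - h % block, w - w % block
        # Each block becomes one column vector; PCA is taken across blocks.
        cols = (image[:h, :w].astype(float)
                .reshape(h // block, block, w // block, block)
                .transpose(0, 2, 1, 3)
                .reshape(-1, block * block).T)
        mu = cols.mean(axis=1, keepdims=True)
        u, s, vt = np.linalg.svd(cols - mu, full_matrices=False)
        eig = s ** 2 / cols.shape[1]                   # component energies
        rank = max(1, int((eig > noise_var).sum()))    # signal subspace size
        clean = u[:, :rank] @ (s[:rank, None] * vt[:rank]) + mu
        return (clean.T.reshape(h // block, w // block, block, block)
                .transpose(0, 2, 1, 3).reshape(h, w))
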
  • Conditional entropy coding of VQ indexes for image compression

    Publication Year: 1999, Page(s): 1005-1013
    Cited by: Papers (3)

    Block sizes of practical vector quantization (VQ) image coders are not large enough to exploit all high-order statistical dependencies among pixels. Therefore, adaptive entropy coding of VQ indexes via statistical context modeling can significantly reduce the bit rate of VQ coders for a given distortion; address VQ was pioneering work in this direction. In this paper we develop a framework of conditional entropy coding of VQ indexes (CECOVI) based on a simple Bayesian-type method of estimating probabilities conditioned on causal contexts. CECOVI is conceptually cleaner and algorithmically more efficient than address VQ, with the address-VQ technique as a special case. It reduces the bit rate of address VQ by more than 20% at the same distortion, and does so at a tiny fraction of address VQ's computational cost. (An illustrative code sketch follows this entry.)

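    A minimal sketch of context-conditioned probability estimation for a VQ index map, using add-one (Laplace) smoothing as a stand-in for the paper's simple Bayesian-type estimator and a two-neighbor causal context; both choices are illustrative.

    import math
    from collections import defaultdict

    def conditional_code_length(index_map, codebook_size):
        """Ideal adaptive code length (bits) of a 2-D VQ index map when each
        index is coded conditioned on its left and upper causal neighbors."""
        counts = defaultdict(lambda: defaultdict(int))
        bits = 0.0
        for i, row in enumerate(index_map):
            for j, idx in enumerate(row):
                ctx = (row[j - 1] if j > 0 else -1,
                       index_map[i - 1][j] if i > 0 else -1)
                seen = counts[ctx]
                total = sum(seen.values())
                p = (seen[idx] + 1) / (total + codebook_size)  # Laplace estimate
                bits -= math.log2(p)
                seen[idx] += 1        # causal update: the decoder can do the same
        return bits
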
  • Spread spectrum image steganography

    Publication Year: 1999, Page(s): 1075-1083
    Cited by: Papers (144) | Patents (2)

    We present a new method of digital steganography, entitled spread spectrum image steganography (SSIS). Steganography, which means “covered writing” in Greek, is the science of communicating in a hidden manner. Following a discussion of steganographic communication theory and a review of existing techniques, the new method, SSIS, is introduced. This system hides and recovers a message of substantial length within digital imagery while maintaining the original image size and dynamic range. The hidden message can be recovered using the appropriate keys without any knowledge of the original image. Image restoration, error-control coding, and techniques similar to spread spectrum are described, and the performance of the system is illustrated. A message embedded by this method can be in the form of text, imagery, or any other digital signal. Applications for such a data-hiding scheme include in-band captioning, covert communication, image tamperproofing, authentication, embedded control, and revision tracking. (An illustrative code sketch follows this entry.)

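    A minimal sketch of spread-spectrum embedding and blind recovery, assuming one message bit per fixed-length chip of keyed Gaussian noise and a 3x3 local mean as the restoration step; the strength, chip length, and predictor are illustrative, and SSIS's interleaving and error-control coding are omitted.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def embed(image, bits, key, strength=3.0, chip=1024):
        """Add keyed noise chips, sign-modulated by the message bits.
        Assumes the image has at least len(bits) * chip pixels."""
        rng = np.random.default_rng(key)
        flat = image.astype(float).ravel()
        for i, b in enumerate(bits):
            sign = 1.0 if b else -1.0
            flat[i * chip:(i + 1) * chip] += strength * sign * rng.standard_normal(chip)
        return flat.reshape(image.shape)

    def recover(stego, nbits, key, chip=1024):
        """Estimate the noise residual, then correlate with the keyed chips."""
        rng = np.random.default_rng(key)
        restored = uniform_filter(stego.astype(float), size=3)  # crude restoration
        residual = (stego - restored).ravel()
        bits = []
        for i in range(nbits):
            chip_noise = rng.standard_normal(chip)
            bits.append(int(residual[i * chip:(i + 1) * chip] @ chip_noise > 0))
        return bits
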
  • Least-squares model-based halftoning

    Publication Year: 1999, Page(s): 1102-1116
    Cited by: Papers (36) | Patents (4)

    A least-squares model-based (LSMB) approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an optimal halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. It has been shown that the one-dimensional (1-D) least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm to obtain the globally optimal solution. Unfortunately, the Viterbi algorithm cannot be used in two dimensions. In this paper, the two-dimensional (2-D) least-squares solution is obtained by iterative techniques, which are only guaranteed to produce a local optimum. Experiments show that LSMB halftoning produces better textures and higher spatial and gray-scale resolution than conventional techniques. We also show that the least-squares approach eliminates most of the problems associated with error diffusion. We investigate the performance of the LSMB algorithms over a range of viewing distances or, equivalently, printer resolutions, and show that the LSMB approach gives precise control of image sharpness. (An illustrative code sketch follows this entry.)

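    A minimal sketch of least-squares halftoning by greedy pixel toggling, assuming an identity printer model and a small Gaussian as the visual low-pass filter; it is brute force (the full filtered error is recomputed per trial) and, like the paper's iterations, converges only to a local optimum.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lsmb_halftone(gray, sigma=1.5, sweeps=5):
        x = gray.astype(float) / 255.0
        b = (x > 0.5).astype(float)              # initial binary image

        def perceived_error(bimg):
            # Squared error between the eye-filtered halftone and original.
            return float(np.sum(gaussian_filter(bimg - x, sigma) ** 2))

        err = perceived_error(b)
        for _ in range(sweeps):
            changed = 0
            for i in range(b.shape[0]):
                for j in range(b.shape[1]):
                    b[i, j] = 1.0 - b[i, j]      # trial toggle
                    trial = perceived_error(b)
                    if trial < err:
                        err, changed = trial, changed + 1
                    else:
                        b[i, j] = 1.0 - b[i, j]  # revert
            if changed == 0:                     # reached a local optimum
                break
        return (b * 255).astype(np.uint8)
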
  • Minimum error rate training for PHMM-based text recognition

    Publication Year: 1999, Page(s): 1120-1124
    Cited by: Papers (5)

    Discriminative training is studied to improve the performance of our pseudo two-dimensional (2-D) hidden Markov model (PHMM) based text recognition system. The aim of this discriminative training is to adjust the model parameters to directly minimize the classification error rate. Experimental results show a large reduction in recognition error rate, even for PHMMs already well trained using conventional maximum likelihood (ML) approaches.

  • Singular value decomposition-based reconstruction algorithm for seismic traveltime tomography

    Publication Year: 1999, Page(s): 1152-1154
    Cited by: Papers (8)

    A reconstruction method is given for seismic transmission traveltime tomography. The method combines singular value decomposition, appropriate weighting matrices, and a variable regularization parameter. The problem is scaled through the weighting matrices so that the singular spectrum is normalized. Matched to the normalized singular values, the regularization parameter varies within the interval [0, 1], increasing linearly with the singular value index from a small initial value rather than being held fixed, so as to suppress the contributions of the smaller singular values. The experimental results show that the proposed method is superior to ordinary singular value decomposition (SVD) methods such as truncated SVD and Tikhonov (1977) regularization. (An illustrative code sketch follows this entry.)

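    A minimal sketch of the variable-regularization SVD solution described in the abstract: the parameter grows linearly with the singular value index inside [0, 1] after the spectrum is normalized; the Tikhonov-style filter form and the scaling are illustrative assumptions, and the paper's weighting matrices are omitted.

    import numpy as np

    def svd_reconstruct(A, b, mu0=1e-3):
        """Solve A x ~= b with index-dependent regularization."""
        u, s, vt = np.linalg.svd(A, full_matrices=False)
        n = len(s)
        # Regularization parameter: linear ramp from mu0 toward 1 over the
        # normalized spectrum, damping small-singular-value components.
        mu = (mu0 + (1.0 - mu0) * np.arange(n) / max(n - 1, 1)) * s[0]
        filter_factors = s / (s ** 2 + mu ** 2)      # Tikhonov-like filtering
        return vt.T @ (filter_factors * (u.T @ b))
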
  • On the computational aspects of Gibbs-Markov random field modeling of missing-data in image sequences

    Publication Year: 1999, Page(s): 1139-1142
    Cited by: Papers (2)

    Gibbs-Markov random field (GMRF) modeling has been shown to be a robust method for detecting missing data in image sequences for video restoration. However, maximum a posteriori probability (MAP) estimation of the GMRF model requires computationally expensive optimization algorithms to achieve an optimal solution. Continuous relaxation labeling (RL) is explored in this paper as an efficient approach to solving the optimization problem. The conversion of the original combinatorial optimization into a continuous RL formulation is presented. The performance of the RL formulation is analyzed and compared with that of other optimization methods, such as stochastic simulated annealing, iterated conditional modes, and mean field annealing. The results show that RL holds promise as an optimization algorithm for problems in image sequence processing.

  • A surprising Radon transform result and its application to motion detection

    Publication Year: 1999, Page(s): 1039-1049
    Cited by: Papers (5) | Patents (18)

    An elliptical region of the plane supports a positive-valued function whose Radon transform depends only on the slope of the integrating line: any two parallel lines that intersect the ellipse generate equal line integrals of the function. We prove that this peculiar property is unique to the ellipse; no other convex, compact region of the plane supports a nonzero-valued function whose Radon transform depends only on slope. We motivate this problem by considering the detection of a constant-velocity moving object in a sequence of images in the presence of additive white Gaussian noise. The intensity distribution of the object is known, but its velocity is only assumed to lie in some known set, for example an ellipse or a rectangle. The objective is to find a space-time linear filter, operating on the image sequence, whose minimum output signal-to-noise ratio (SNR) over any velocity in the set is maximized. For an ellipse (and its special cases, the disk and the line segment) the special Radon transform property of the ellipse enables us to obtain a closed-form, analytical solution for the minimax filter, which significantly outperforms the conventional three-dimensional (3-D) matched filter. This analytical solution also suggests a constrained minimax filter for other velocity sets, obtainable in closed form, whose SNR can be very close to the minimax SNR. (An illustrative numerical check follows this entry.)

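    A minimal numerical check of the disk case behind the result: for f(x, y) = 1/sqrt(1 - x^2 - y^2) on the unit disk, the integral along any chord equals pi, independent of the chord's offset; the sample count and offsets are illustrative.

    import numpy as np

    def chord_integral(offset, samples=200001):
        """Integrate f along the horizontal chord y = offset of the unit disk."""
        half = np.sqrt(1.0 - offset ** 2)             # chord half-length
        x = np.linspace(-half, half, samples)[1:-1]   # trim the endpoint poles
        f = 1.0 / np.sqrt(1.0 - x ** 2 - offset ** 2)
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoid rule

    for d in (0.0, 0.3, 0.6, 0.9):
        print(d, chord_integral(d))   # each value is close to pi = 3.14159...
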
  • Enhancement by image-dependent warping

    Publication Year: 1999, Page(s): 1063-1074
    Cited by: Papers (11) | Patents (1)

    All image warping algorithms to date are image-independent; that is, they operate only on the geometry of the image plane, ignoring the content of the image. We show that taking the image content into account yields elaborate warping schemes that may be used to enhance, sharpen, and scale images. Sharpening is achieved by “squashing” the pixels in edge areas and “stretching” the pixels in flat areas. Since image pixels are only moved, not modified, some drawbacks of classical linear filtering methods are avoided. We also lay the mathematical foundation for the use of an image-dependent warping scheme in traditional warping applications, such as distortion minimization. (An illustrative code sketch follows this entry.)

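    A minimal 1-D sketch of image-dependent warping for sharpening: samples slide toward nearby edge centers (squashing pixels in edge areas) and the signal is resampled, so pixel values move rather than change; the displacement rule, clipping, and strength are illustrative assumptions, not the paper's variational scheme.

    import numpy as np

    def warp_sharpen_1d(signal, strength=2.0):
        s = np.asarray(signal, dtype=float)
        grad = np.gradient(s)
        energy = grad ** 2                       # edge-energy profile
        # Samples are displaced uphill in edge energy, toward the edge center;
        # clipping keeps the warped coordinates strictly increasing.
        disp = strength * np.gradient(energy) / (energy.max() + 1e-8)
        disp = np.clip(disp, -0.45, 0.45)
        coords = np.arange(len(s)) + disp        # where each sample moves to
        # Resample on the regular grid: pixels are moved, not modified.
        return np.interp(np.arange(len(s)), coords, s)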

Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003