IEEE Transactions on Image Processing

Volume 4, Issue 4 • April 1995

  • Layered image coding using the DCT pyramid

    Publication Year: 1995 , Page(s): 512 - 516
    Cited by:  Papers (10)  |  Patents (1)

    A block-based subband image coder that exploits the ability to perform decimation in the discrete cosine transform (DCT) domain to effect a pyramidal data structure is described. The proposed “DCT pyramid” has the distinct feature of improved image rendition without the associated blocking artifacts at low bit rates.

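The decimation-in-the-DCT-domain idea the abstract refers to can be made concrete with a short sketch: the low-frequency corner of a block's orthonormal DCT, inverted with a smaller inverse DCT, yields a 2:1 downsampled block. This is only an illustration of DCT-domain decimation, not the authors' pyramid coder; the 8x8/4x4 block sizes and the 0.5 scale factor are assumptions tied to the orthonormal transform.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_decimate_block(block):
    """2:1 decimation of an 8x8 block performed in the DCT domain.

    Keeps the 4x4 low-frequency corner of the orthonormal 2-D DCT and
    inverts it with a 4x4 inverse DCT; the 0.5 factor compensates for
    the change in orthonormal transform size.
    """
    coeffs = dctn(block, norm='ortho')          # 8x8 DCT-II
    low = 0.5 * coeffs[:4, :4]                  # retain low frequencies
    return idctn(low, norm='ortho')             # 4x4 spatial block

# usage: a smooth ramp block decimates to a coarser ramp
block = np.add.outer(np.arange(8.0), np.arange(8.0))
print(dct_decimate_block(block).shape)          # (4, 4)
```
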
  • Inverse halftoning and kernel estimation for error diffusion

    Publication Year: 1995 , Page(s): 486 - 498
    Cited by:  Papers (56)  |  Patents (6)

    Two different approaches to the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing to reconstruct a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known and finds a gray-scale image that will be halftoned into the same binary image. Two projection algorithms, viz., minimum mean square error (MMSE) projection and maximum a posteriori probability (MAP) projection, which differ in the way the inverse quantization step is performed, are developed. Among the filtering and the two projection algorithms, MAP projection provides the best performance for inverse halftoning. Using techniques from adaptive signal processing, we suggest a method for estimating the error diffusion kernel from the given halftone. This means that the projection algorithms can be applied to the inverse halftoning of any error-diffused image without requiring any a priori information on the error diffusion kernel. It is shown that the kernel estimation algorithm combined with MAP projection provides essentially the same inverse halftoning performance as the case where the error diffusion kernel is known.

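As context for the abstract above, the sketch below shows the forward error-diffusion model (Floyd-Steinberg weights, an assumption; the paper treats a general kernel) and the simplest linear-filtering reconstruction, i.e., a Gaussian lowpass of the halftone. The MMSE/MAP projection algorithms and the kernel estimation are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def floyd_steinberg(gray):
    """Error-diffuse a gray-scale image (values in [0, 1]) to a binary halftone."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

def inverse_halftone_lowpass(halftone, sigma=1.5):
    """Crude linear-filtering reconstruction of a gray-scale image."""
    return gaussian_filter(halftone, sigma)

gray = np.tile(np.linspace(0, 1, 64), (64, 1))    # horizontal ramp test image
recon = inverse_halftone_lowpass(floyd_steinberg(gray))
print(float(np.abs(recon - gray).mean()))         # mean absolute reconstruction error
```
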
  • Adaptive median filters: new algorithms and results

    Publication Year: 1995 , Page(s): 499 - 502
    Cited by:  Papers (167)  |  Patents (1)

    Based on two types of image models corrupted by impulse noise, we propose two new algorithms for adaptive median filters. They use a variable window size to remove impulses while preserving sharpness. The first, called the ranked-order based adaptive median filter (RAMF), is based on a test for the presence of an impulse at the center pixel itself, followed by a test for the presence of residual impulses in the median filter output. The second, called the impulse size based adaptive median filter (SAMF), is based on the detection of the size of the impulse noise. It is shown that the RAMF is superior to the nonlinear mean Lp filter in removing positive and negative impulses while simultaneously preserving sharpness; the SAMF is superior to Lin's (1988) adaptive scheme because it is simpler and performs better in removing high-density impulsive as well as nonimpulsive noise and in preserving fine details. Simulations on standard images confirm that these algorithms are superior to standard median filters.

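The ranked-order based adaptive median filter is commonly formulated as below: Stage A grows the window until the window median is not itself an impulse, and Stage B replaces the center pixel only if it lies at the extremes of the window. This is a sketch of that common formulation, not necessarily the authors' exact specification (the fallback when the maximum window size is reached is an assumption).

```python
import numpy as np

def ramf(img, max_window=7):
    """Ranked-order based adaptive median filter (common RAMF formulation)."""
    img = np.asarray(img, dtype=float)
    pad = max_window // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = 3
            while True:
                r = win // 2
                region = padded[y + pad - r:y + pad + r + 1,
                                x + pad - r:x + pad + r + 1]
                zmin, zmed, zmax = region.min(), np.median(region), region.max()
                if zmin < zmed < zmax:                 # Stage B: median is not an impulse
                    z = img[y, x]
                    out[y, x] = z if zmin < z < zmax else zmed
                    break
                win += 2                               # Stage A: grow the window
                if win > max_window:                   # assumed fallback: output the median
                    out[y, x] = zmed
                    break
    return out
```
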
  • A perceptually motivated three-component image model-Part II: applications to image compression

    Publication Year: 1995 , Page(s): 430 - 447
    Cited by:  Papers (17)  |  Patents (5)

    For Part I, see ibid., vol. 4, no. 4, pp. 401-415 (1995). The use of the image model of Part I is investigated in the context of image compression. The model decomposes the image into a primary component that contains the strong edge information, a smooth component that represents the background slow-intensity variations, and a texture component that contains the textures. The primary component, which is known to be perceptually important, is encoded separately by encoding the intensity and geometric information of the strong edge brim contours. Two alternatives for coding the smooth and texture components are studied: entropy-coded adaptive DCT and entropy-coded subband coding. It is shown via simulations that the proposed schemes, which can be thought of as a hybrid of waveform coding and feature-based coding techniques, yield both subjective and objective performance improvements over several other image coding schemes and, in particular, over the JPEG continuous-tone image compression standard. These improvements are especially noticeable at low bit rates. Furthermore, it is shown that perceptual tuning based on the contrast sensitivity of the human visual system can be used in the DCT-based scheme, which, in conjunction with the three-component model, leads to additional subjective performance improvements.

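The "perceptual tuning" mentioned at the end of the abstract amounts to frequency-dependent quantization of DCT coefficients. The sketch below shows that generic mechanism only; the weighting matrix is a made-up stand-in, not the contrast-sensitivity model used in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def perceptual_dct_quantize(block, base_step=8.0):
    """Quantize an 8x8 DCT block with a frequency-dependent step size.

    The weighting below (coarser steps at higher spatial frequencies) is a
    hypothetical stand-in for a contrast-sensitivity-derived matrix.
    """
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    steps = base_step * (1.0 + 0.5 * (u + v))       # assumed CSF-style weighting
    coeffs = dctn(block, norm='ortho')
    quantized = np.round(coeffs / steps)
    return idctn(quantized * steps, norm='ortho')   # dequantized reconstruction

block = np.random.default_rng(0).uniform(0, 255, (8, 8))
print(float(np.abs(perceptual_dct_quantize(block) - block).mean()))  # quantization error
```
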
  • High-order moment computation of gray-level images

    Publication Year: 1995 , Page(s): 502 - 505
    Cited by:  Papers (4)

    An efficient approach to calculating the geometric moments of a 2-D gray-level image is described. It is shown both theoretically and experimentally that the new method compares favorably with previous techniques, especially for high-order moments.

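For reference, the geometric moments in question are m_pq = sum_x sum_y x^p y^q f(x, y). The direct computation below is the baseline definition, not the authors' efficient algorithm.

```python
import numpy as np

def geometric_moment(img, p, q):
    """Direct evaluation of m_pq = sum over x, y of x**p * y**q * f(x, y)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    return float(np.sum((x ** p) * (y ** q) * img))

img = np.random.default_rng(1).uniform(0, 255, (32, 32))
m00 = geometric_moment(img, 0, 0)
xc = geometric_moment(img, 1, 0) / m00     # centroid x
yc = geometric_moment(img, 0, 1) / m00     # centroid y
print(xc, yc)                              # both near the image center for a roughly uniform image
```
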
  • Analysis of optical flow constraints

    Publication Year: 1995 , Page(s): 460 - 469
    Cited by:  Papers (6)  |  Patents (5)

    Different constraint equations have been proposed in the literature for the derivation of optical flow. Despite the large number of papers dealing with computational techniques for estimating optical flow, only a few authors have investigated the conditions under which these constraints exactly model the velocity field, that is, the perspective projection onto the image plane of the true 3-D velocity. These conditions are analyzed under different hypotheses, and the departures of the constraint equations in modeling the velocity field are derived for different motion conditions. Experiments are also presented that measure these departures and the induced errors in the estimation of the velocity field.

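The constraint equations under analysis include the classical brightness-constancy equation I_x u + I_y v + I_t = 0, which is underdetermined at a single pixel. A standard way to use it (not a method from this paper, which studies when such constraints are valid) is a least-squares solution over a small window, as sketched below.

```python
import numpy as np

def window_flow(frame0, frame1, y, x, r=2):
    """Least-squares solution of I_x*u + I_y*v + I_t = 0 over a (2r+1)^2 window."""
    Iy, Ix = np.gradient(frame0.astype(float))            # spatial derivatives
    It = frame1.astype(float) - frame0.astype(float)      # temporal derivative
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# synthetic example: frame1 is frame0 shifted one pixel to the right
frame0 = np.add.outer(np.zeros(32), np.arange(32.0))      # intensity ramp in x
frame1 = np.roll(frame0, 1, axis=1)
print(window_flow(frame0, frame1, 16, 16))                # approximately (1.0, 0.0)
```
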
  • A comparison of rotational representations in structure and motion estimation for manoeuvring objects

    Publication Year: 1995 , Page(s): 516 - 520
    Cited by:  Papers (2)

    Alternative methods are examined for estimating the motion and structure of objects undergoing smooth maneuvers from measurements of feature positions in long, multiple-camera image sequences. The performance of extended Kalman filters is compared for Euler angle-axis, roll-pitch-yaw, and quaternion parameterizations of rotational motion. The angle-axis method was found to give the best overall performance with a computationally efficient implementation.

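Two of the rotational parameterizations being compared can be illustrated directly: the angle-axis representation maps to a rotation matrix through Rodrigues' formula, and a unit quaternion maps to the same matrix through the standard quadratic form. The sketch below only checks that the two parameterizations describe the same rotation; the filtering comparison itself is not reproduced.

```python
import numpy as np

def angle_axis_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix from a unit axis and an angle."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def quaternion_to_matrix(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# the two parameterizations agree for the same rotation
axis, angle = np.array([0.0, 0.0, 1.0]), np.pi / 3
q = np.array([np.cos(angle / 2), *(np.sin(angle / 2) * axis)])
print(np.allclose(angle_axis_to_matrix(axis, angle), quaternion_to_matrix(q)))  # True
```
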
  • A perceptually motivated three-component image model-Part I: description of the model

    Publication Year: 1995 , Page(s): 401 - 415
    Cited by:  Papers (49)  |  Patents (3)

    Some psychovisual properties of the human visual system are discussed and interpreted in a mathematical framework. The formation of perception is described by appropriate minimization problems, and the edge information is found to be of primary importance in visual perception. Having introduced the concept of edge strength, it is demonstrated that strong edges are of higher perceptual importance than weaker edges (textures). We have also found that smooth areas of an image influence our perception together with the edge information, and that this influence can be described mathematically via a minimization problem. Based on this study, we propose to decompose the image into three components: (i) primary, (ii) smooth, and (iii) texture, which contain, respectively, the strong edges, the background, and the textures. An algorithm is developed to generate the three-component image model, and an example is provided in which the resulting three components exhibit the expected properties. Finally, it is shown that the primary component provides a superior representation of the strong edge information compared with the popular Laplacian-Gaussian edge extraction scheme.

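To make the three-component idea concrete, the toy split below separates an image into a strong-edge (primary), smooth, and texture part using a gradient threshold and a Gaussian lowpass. The paper derives its decomposition from perceptually motivated minimization problems; the threshold and filter here are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def three_component_split(img, edge_thresh=30.0, sigma=3.0):
    """Toy three-component split: primary (strong edges), smooth, texture."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    edge_strength = np.hypot(gx, gy)
    smooth = gaussian_filter(img, sigma)                          # background component
    detail = img - smooth
    primary = np.where(edge_strength > edge_thresh, detail, 0.0)  # strong-edge component
    texture = detail - primary                                    # remaining detail
    return primary, smooth, texture

img = np.zeros((64, 64)); img[:, 32:] = 255.0                     # a single strong edge
p, s, t = three_component_split(img)
print(np.allclose(p + s + t, img))                                # the components sum to the image
```
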
  • The study of logarithmic image processing model and its application to image enhancement

    Publication Year: 1995 , Page(s): 506 - 512
    Cited by:  Papers (22)  |  Patents (18)

    A new implementation of Lee's (1980) image enhancement algorithm is described. This approach, based on the logarithmic image processing (LIP) model, can simultaneously enhance the overall contrast and the sharpness of an image. A normalized complement transform is proposed to simplify the analysis and the implementation of LIP model-based algorithms. The new implementation is compared with histogram equalization and with Lee's original algorithm.

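The LIP model replaces ordinary addition and scalar multiplication on gray tones with operations that stay within a bounded gray-scale range; an isomorphism maps them back to ordinary arithmetic. The sketch below shows these standard LIP operations (with an assumed gray-tone range M = 256); it does not reproduce the paper's implementation of Lee's algorithm.

```python
import numpy as np

M = 256.0   # assumed gray-tone range for the LIP model

def lip_add(f, g):
    """LIP addition:  f (+) g = f + g - f*g/M."""
    return f + g - f * g / M

def lip_scalar(lmbda, f):
    """LIP scalar multiplication:  lambda (x) f = M - M*(1 - f/M)**lambda."""
    return M - M * (1.0 - f / M) ** lmbda

def lip_isomorphism(f):
    """phi(f) = -M*ln(1 - f/M): maps LIP arithmetic onto ordinary arithmetic."""
    return -M * np.log(1.0 - f / M)

f, g = 100.0, 50.0
# phi turns LIP addition into ordinary addition
print(np.isclose(lip_isomorphism(lip_add(f, g)),
                 lip_isomorphism(f) + lip_isomorphism(g)))    # True
```
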
  • Subband coding of images using asymmetrical filter banks

    Publication Year: 1995 , Page(s): 478 - 485
    Cited by:  Papers (15)  |  Patents (6)

    We address the problem of the choice of subband filters in the context of image coding. The ringing effects that occur in subband-based compression schemes are the major unpleasant distortions. A new set of two-band filter banks suitable for image coding applications is presented. The basic properties of these filters are linear phase, perfect reconstruction, asymmetric length, and maximum regularity. The better overall performance compared with the classical QMF subband filters is explained. The asymmetry of the filter lengths results in better compaction of the energy, especially in the highpass subbands. Moreover, the quantization error is reduced due to the short lowpass synthesis filter, and the undesirable ringing effect is considerably reduced due to the good step response of the synthesis lowpass filter. The proposed design takes into account the statistics of natural images and the effect of quantization errors on the reconstructed images, which explains the better coding performance.

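The paper designs its own linear-phase, perfect-reconstruction filter pairs of unequal (asymmetric) lengths. As a generic illustration of the two-band analysis/synthesis structure involved, the sketch below uses a standard linear-phase biorthogonal pair of unequal lengths from PyWavelets ('bior2.2'); these are not the authors' filters.

```python
import numpy as np
import pywt

# One row of an image (any 1-D signal); the 2-D case applies this
# separably along rows and columns.
x = np.cos(np.linspace(0, 6 * np.pi, 128)) + 0.1 * np.arange(128)

# Two-band analysis: lowpass and highpass subbands, each at half rate.
low, high = pywt.dwt(x, 'bior2.2')

# Synthesis: perfect reconstruction up to floating-point error.
x_rec = pywt.idwt(low, high, 'bior2.2')
print(np.max(np.abs(x_rec - x)) < 1e-10)    # True
```
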
  • A recursive nonstationary MAP displacement vector field estimation algorithm

    Publication Year: 1995 , Page(s): 416 - 429
    Cited by:  Papers (15)

    A recursive model-based algorithm for obtaining the maximum a posteriori (MAP) estimate of the displacement vector field (DVF) from successive frames of an image sequence is presented. To model the DVF, we develop a nonstationary vector field model called the vector coupled Gauss-Markov (VCGM) model. The VCGM model consists of two levels: an upper level, which is made up of several submodels with various characteristics, and a lower level, or line process, which governs the transitions between the submodels. A detailed line process is proposed. The VCGM model is well suited to estimating the DVF, since the resulting estimates preserve the boundaries between differently moving areas in an image sequence. A Kalman-type estimator results, followed by a decision criterion for choosing the appropriate line process. Several experiments demonstrate the superior performance of the proposed algorithm with respect to prediction error, interpolation error, and robustness to noise.

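The Kalman-type estimator mentioned in the abstract reduces, at each site, to a measurement update of the predicted displacement vector. The sketch below shows that generic update for a 2-D displacement with an assumed identity observation model; the VCGM model, the line process, and the submodel selection are not reproduced.

```python
import numpy as np

def kalman_update(d_pred, P_pred, z, R):
    """One Kalman measurement update for a 2-D displacement vector.

    d_pred, P_pred: predicted displacement and its covariance (the prior).
    z, R:           observed displacement and observation noise covariance.
    """
    H = np.eye(2)                              # assumed: displacement observed directly
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    d_new = d_pred + K @ (z - H @ d_pred)      # updated estimate
    P_new = (np.eye(2) - K @ H) @ P_pred       # updated covariance
    return d_new, P_new

d, P = kalman_update(np.array([0.5, 0.0]), np.eye(2),
                     np.array([1.0, 0.2]), 0.5 * np.eye(2))
print(d)   # estimate pulled toward the observation
```
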
  • Restoration of spatially varying blurred images using multiple model-based extended Kalman filters

    Publication Year: 1995 , Page(s): 520 - 523
    Cited by:  Papers (8)

    Image restoration based on unrealistic homogeneous image and blur models can result in highly inaccurate estimates with excessive ringing. It is therefore important, at each pixel location, to restore the image using the image and blur parameters characteristic of the immediate local neighborhood. Toward this goal, a multiple-model extended Kalman filter (EKF) procedure was developed and tested for spatially varying parameterized blurs. Results show the procedure to be very useful for restoring representative images with significant simulated variations of the blur parameter.

  • Concealment of damaged block transform coded images using projections onto convex sets

    Publication Year: 1995 , Page(s): 470 - 477
    Cited by:  Papers (160)  |  Patents (12)

    An algorithm for lost signal restoration in block-based still image and video sequence coding is presented. Imperfect transmission of block-coded images results in lost blocks, so the received image is flawed by the absence of square pixel regions that are readily perceived by human vision, even in real-time video sequences. Error concealment aims to mask the effect of these missing blocks by using temporal or spatial interpolation to create a subjectively acceptable approximation to the true, error-free image. This paper presents a spatial interpolation algorithm that conceals lost image blocks using only intra-frame information. It exploits spatially correlated edge information from a large local neighborhood of surrounding pixels to restore missing blocks. The algorithm is a Gerchberg (1974) type spatial-domain/spectral-domain constraint-satisfying iterative process and may be viewed as an alternating projections onto convex sets method.

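A minimal Gerchberg-style alternation of the two projections the abstract describes (hold the correctly received pixels, then impose a spectral constraint) can be sketched as below. The simple band-limiting projection used here is an assumption for illustration; the paper's spectral constraint is adapted to local edge information.

```python
import numpy as np

def conceal_block(img, top, left, size=8, iters=50, keep=6):
    """Fill a lost block by alternating two projections:
    (1) band-limit the image in the DFT domain (spectral constraint),
    (2) restore all correctly received pixels (spatial constraint)."""
    img = img.astype(float)
    known = np.ones(img.shape, dtype=bool)
    known[top:top + size, left:left + size] = False
    estimate = img.copy()
    estimate[~known] = img[known].mean()            # initial guess for the lost pixels
    for _ in range(iters):
        spectrum = np.fft.fft2(estimate)
        mask = np.zeros(img.shape)
        mask[:keep, :keep] = 1                      # keep only low frequencies
        mask[:keep, -keep:] = 1
        mask[-keep:, :keep] = 1
        mask[-keep:, -keep:] = 1
        estimate = np.real(np.fft.ifft2(spectrum * mask))   # spectral projection
        estimate[known] = img[known]                         # spatial projection
    return estimate

# band-limited test image with one lost 8x8 block
row = np.cos(2 * np.pi * 2 * np.arange(32) / 32)
truth = np.outer(row, row)
damaged = truth.copy()
damaged[12:20, 12:20] = 0.0
restored = conceal_block(damaged, 12, 12)
err_before = np.abs(damaged[12:20, 12:20] - truth[12:20, 12:20]).mean()
err_after = np.abs(restored[12:20, 12:20] - truth[12:20, 12:20]).mean()
print(err_before, err_after)    # the concealment error should drop markedly
```
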
  • Filters involving derivatives with application to reconstruction from scanned halftone images

    Publication Year: 1995 , Page(s): 448 - 459

    This paper presents a method for designing finite impulse response (FIR) filters for samples of a 2-D signal, e.g., an image, and its gradient. The filters, which are called blended filters, are decomposable into three filters, each separable into 1-D filters on subsets of the data set. Optimality of blended filtering in the minimum mean square error (MMSE) sense is shown for signals with a separable autocorrelation function. Relations between the correlation functions of signals and their gradients are derived, and blended filters may be composed from FIR Wiener filters using these relations. Simple blended filters are developed and applied to the problem of gray-value image reconstruction from bilevel (scanned) clustered-dot halftone images, an application useful in the graphic arts. Reconstruction results are given, showing that reconstruction at higher resolution than the halftone grid is achievable with blended filters.


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003