
IEEE Transactions on Image Processing

Issue 2 • February 1999

Contents: 14 articles
  • Motion-compensated 3-D subband coding of video

    Publication Year: 1999 , Page(s): 155 - 167
    Cited by:  Papers (189)  |  Patents (82)

    This paper describes a video coding system based on motion-compensated three-dimensional (3-D) subband/wavelet coding (MC-3DSBC), which can overcome the limits of both 3-D SBC and MC prediction-based coding. In this new system, spatio-temporal subbands are generated by MC temporal analysis and a spatial wavelet transform, and then encoded by 3-D subband-finite state scalar quantization (3DSB-FSSQ). The rate allocation from the GOP level to each class of subbands is optimized by utilizing the structural property of MC-3DSBC that additive superposition approximately holds for both rate and distortion. The proposed video coding system is applied to several test video clips. Its performance exceeds that of both a known MPEG-1 implementation and a similar subband MC predictive coder while maintaining modest computational complexity and memory size.

  • Shape preserving local histogram modification

    Publication Year: 1999 , Page(s): 220 - 230
    Cited by:  Papers (53)  |  Patents (1)

    A novel approach to shape-preserving contrast enhancement is presented in this paper. Contrast enhancement is achieved by means of a local histogram equalization algorithm that preserves the level-sets of the image. Common local schemes violate this basic property, thereby introducing spurious objects and modifying the image information. The scheme equalizes the histogram within the connected components of the image, which are defined by both the grey values and the spatial relations between pixels and, following mathematical morphology, constitute the basic objects in the scene. We give examples for both grey-value and color images.
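For contrast, the classical global histogram equalization that such local schemes refine can be sketched in a few lines (our own illustration, not the paper's algorithm; the tiny test image is made up):

```python
# Global histogram equalization baseline: map each grey value through the
# normalized cumulative histogram. The paper's method instead equalizes
# within connected components so that level-sets are preserved.

def equalize(image, levels=256):
    pixels = [p for row in image for p in row]
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution, then scale to the full grey range.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    lut = [round(c / n * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in image]

img = [[50, 50, 51], [52, 200, 201], [202, 203, 203]]
out = equalize(img)   # grey values spread over the full 0-255 range
```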

  • Cross Burg entropy maximization and its application to ringing suppression in image reconstruction

    Publication Year: 1999 , Page(s): 286 - 292
    Cited by:  Papers (5)

    We present a multiplicative algorithm for image reconstruction, together with a partial convergence proof. The iterative scheme aims to maximize cross Burg entropy between modeled and measured data. Its application to infrared astronomical satellite (IRAS) data shows reduced ringing around point sources, compared to the EM (Richardson-Lucy) algorithm.
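As a point of comparison, the EM (Richardson-Lucy) baseline mentioned above can be sketched in one dimension (the signal, kernel, and iteration count are illustrative, not from the paper):

```python
# 1-D Richardson-Lucy (EM) deconvolution: multiplicative updates
# estimate <- estimate * correlate(h, measured / convolve(estimate, h)).

def convolve(x, h):
    """Full linear convolution of two sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def richardson_lucy(measured, h, iterations=50):
    estimate = [1.0] * (len(measured) - len(h) + 1)
    h_rev = h[::-1]
    for _ in range(iterations):
        model = convolve(estimate, h)
        ratio = [m / max(p, 1e-12) for m, p in zip(measured, model)]
        # Adjoint step: correlate the ratio with the kernel ("valid" part).
        corr = convolve(ratio, h_rev)[len(h) - 1 : len(h) - 1 + len(estimate)]
        estimate = [e * c for e, c in zip(estimate, corr)]
    return estimate

truth = [0.0, 0.0, 5.0, 0.0, 0.0]   # a point source
psf = [0.25, 0.5, 0.25]             # normalized blur kernel
measured = convolve(truth, psf)
restored = richardson_lucy(measured, psf)   # peak re-concentrates at index 2
```

Since the kernel sums to one, each iteration preserves the total flux, which is why the ringing behavior around point sources is a natural thing to compare.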

  • Bayesian and regularization methods for hyperparameter estimation in image restoration

    Publication Year: 1999 , Page(s): 231 - 246
    Cited by:  Papers (69)

    In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters applying the evidence and maximum a posteriori (MAP) analysis within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence and an iterative approach resulting from the set theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally, the proposed algorithms are tested experimentally.

  • Biorthogonal filterbanks and energy preservation property in image compression

    Publication Year: 1999 , Page(s): 168 - 178
    Cited by:  Papers (18)

    The energy preservation property is among the most widely used properties of orthogonal transforms in image compression because the reconstruction error can be computed as the sum of the subband distortions. Thus, this is a key point in the use of efficient bit allocation techniques such as rate-distortion algorithms. Therefore, we study the nonorthogonality of biorthogonal filterbanks with reference to energy preservation from both theoretical and practical points of view. We calculate the Riesz bounds as energy preservation bounds for filterbanks and discrete wavelet transforms, and then connect these results with the Riesz bounds of the related continuous wavelet transform. The simultaneous use of biorthogonal filterbanks and rate-distortion algorithms is then discussed as the issue of estimating the reconstruction error as an additive function of the subband distortion. We propose a weighted sum of the subband distortions as an estimate, whose accuracy is calculated by a wide range of experiments. This accuracy is shown to be correlated to the Riesz bounds of the filterbanks. We conclude that, from this point of view, most of the usual biorthogonal filterbanks may be considered nearly orthogonal.
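The orthogonal reference case is easy to verify numerically. With a normalized Haar analysis stage (our toy example, not a filterbank from the paper), the subband energies sum exactly to the signal energy, the Parseval property that biorthogonal banks satisfy only within Riesz bounds:

```python
import math

def haar_analysis(x):
    """One level of orthonormal Haar analysis: lowpass and highpass subbands."""
    low = [(a + b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    high = [(a - b) / math.sqrt(2) for a, b in zip(x[::2], x[1::2])]
    return low, high

x = [4.0, 2.0, 5.0, 7.0]
low, high = haar_analysis(x)
energy_in = sum(v * v for v in x)             # signal energy
energy_sub = sum(v * v for v in low + high)   # equals energy_in exactly
```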

  • Rotation-invariant texture classification using a complete space-frequency model

    Publication Year: 1999 , Page(s): 255 - 269
    Cited by:  Papers (74)  |  Patents (4)

    A method of rotation-invariant texture classification based on a complete space-frequency model is introduced. A polar, analytic form of a two-dimensional (2-D) Gabor wavelet is developed, and a multiresolution family of these wavelets is used to compute information-conserving microfeatures. From these microfeatures a micromodel, which characterizes spatially localized amplitude, frequency, and directional behavior of the texture, is formed. The essential characteristics of a texture sample, its macrofeatures, are derived from selected estimated parameters of the micromodel. Classification of texture samples is based on the macromodel derived from a rotation-invariant subset of macrofeatures. In experiments, comparatively high correct classification rates were obtained using large sample sets.

  • Blind image deconvolution using a robust GCD approach

    Publication Year: 1999 , Page(s): 295 - 301
    Cited by:  Papers (19)

    In this correspondence, a new viewpoint is proposed for estimating an image from its distorted versions in the presence of noise, without a priori knowledge of the distortion functions. In the z-domain, the desired image can be regarded as the greatest common polynomial divisor of the distorted versions. Under the assumption that the distortion filters are finite impulse response (FIR) and relatively coprime, in the absence of noise this becomes the problem of taking the greatest common divisor (GCD) of two or more two-dimensional (2-D) polynomials. Exact GCD is not desirable because even extremely small perturbations due to quantization error or additive noise can destroy the integrity of the polynomial system and lead to a trivial solution. Our approach to this blind deconvolution approximation problem introduces a new robust interpolative 2-D GCD method based on a one-dimensional (1-D) Sylvester-type GCD algorithm. Experimental results with both synthetically blurred images and real motion-blurred pictures show that it is computationally efficient and moderately robust to noise.
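In one dimension and without noise, the underlying idea can be demonstrated with an exact Euclidean polynomial GCD (toy coefficients, ours; the paper's contribution is precisely a robust replacement for this exact, noise-fragile procedure):

```python
# Two differently blurred versions of a signal are, in the z-domain,
# products of a common "image" polynomial with coprime FIR blur
# polynomials; their GCD recovers the image up to a scale factor.
# Coefficient lists are ordered constant term first.

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_divmod(num, den):
    num = num[:]
    q = [0.0] * max(len(num) - len(den) + 1, 1)
    for i in range(len(num) - len(den), -1, -1):
        q[i] = num[i + len(den) - 1] / den[-1]
        for j, dj in enumerate(den):
            num[i + j] -= q[i] * dj
    while len(num) > 1 and abs(num[-1]) < 1e-9:  # strip zero leading terms
        num.pop()
    return q, num

def poly_gcd(a, b):
    """Euclidean algorithm; returns the monic GCD."""
    while max(abs(c) for c in b) > 1e-9:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[-1] for c in a]

image = [1.0, 2.0, 3.0]                   # true "image" polynomial
blur1, blur2 = [1.0, 1.0], [1.0, -1.0]    # coprime FIR blurs
g = poly_gcd(poly_mul(image, blur1), poly_mul(image, blur2))
# g is monic and proportional to the image polynomial
```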

  • A confidence-based approach to enhancing underwater acoustic image formation

    Publication Year: 1999 , Page(s): 270 - 285
    Cited by:  Papers (5)

    This paper describes a flexible technique to enhance the formation of short-range acoustic images, so as to improve image quality and facilitate subsequent postprocessing. The proposed methodology operates as an interface between the signals formed by a focused beamforming technique (i.e., the beam signals) and the related image, whether two-dimensional (2-D) or three-dimensional (3-D). To this end, a reliability measure called confidence has been introduced, which allows a rapid examination of the beam signals and is aimed at accurately detecting echoes backscattered from a scene. The confidence-based approach exploits the physics of the image formation process and generic a priori knowledge of a scene to synthesize model-based signals to be compared with actual backscattered echoes, giving at the same time a measure of the reliability of their similarity. The benefits of the method include a reduction in artifacts due to a lower side-lobe level, better lateral resolution, greater accuracy in range determination, and a direct estimate of the reliability of the acquired information, leading to higher image quality and better scene understanding. Tests on both simulated and real data (for both 2-D and 3-D images) show the higher efficiency of the proposed confidence-based approach compared with more traditional techniques.

  • Multiplication-free approximate algorithms for compressed-domain linear operations on images

    Publication Year: 1999 , Page(s): 247 - 254
    Cited by:  Papers (6)

    We propose a method for devising approximate multiplication-free algorithms for compressed-domain linear operations on images, e.g., downsampling, translation, filtering, etc. We demonstrate that the approximate algorithms give output images that are perceptually nearly equivalent to those of the exact processing, while the computational complexity is significantly reduced.
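A generic sketch of the shift-and-add idea behind such approximations (our own illustration; the paper derives its approximations in the compressed DCT domain, which is not reproduced here):

```python
import math

def to_pow2_terms(c, terms=2):
    """Greedily approximate coefficient c by a sum of signed powers of two."""
    out, r = [], c
    for _ in range(terms):
        if r == 0:
            break
        e = math.floor(math.log2(abs(r)))
        # Pick the nearer of the two bracketing powers of two.
        p = min(2.0 ** e, 2.0 ** (e + 1), key=lambda v: abs(abs(r) - v))
        out.append(math.copysign(p, r))
        r -= out[-1]
    return out

def approx_mul(x, terms):
    # For integer x each power-of-two term is a bit shift; here we sum
    # the shifted values instead of performing a true multiplication.
    return sum(x * t for t in terms)

terms = to_pow2_terms(0.707)   # [0.5, 0.25], i.e. (x >> 1) + (x >> 2)
```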

  • Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms

    Publication Year: 1999 , Page(s): 202 - 219
    Cited by:  Papers (51)

    We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory- and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects and of “weakly exciting” images are examined. Finally, the algorithms are demonstrated on synthetic and real data.

  • Image reconstruction by convolution with symmetrical piecewise nth-order polynomial kernels

    Publication Year: 1999 , Page(s): 192 - 201
    Cited by:  Papers (32)  |  Patents (1)

    The reconstruction of images is an important operation in many applications. From sampling theory, it is well known that the sinc function is the ideal interpolation kernel, which, however, cannot be used in practice. To obtain an acceptable reconstruction, both in terms of computational speed and mathematical precision, it is necessary to design a kernel that is of finite extent and resembles the sinc function as closely as possible. In this paper, the applicability of the sinc-approximating symmetrical piecewise nth-order polynomial kernels is investigated in satisfying these requirements. After the presentation of the general concept, kernels of first, third, fifth, and seventh order are derived. An objective, quantitative evaluation of the reconstruction capabilities of these kernels is obtained by analyzing their spatial and spectral behavior using different measures, and by using them to translate, rotate, and magnify a number of real-life test images. From the experiments, it is concluded that while the improvement of cubic convolution over linear interpolation is significant, the use of higher-order polynomials yields only marginal improvement.
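The third-order member of this kernel family is Keys' cubic convolution kernel; a 1-D sketch with the usual parameter a = -1/2 (the reproduction-of-quadratics check below is our own illustration):

```python
# Keys' cubic convolution kernel (a = -1/2) and 1-D resampling with it.
# With this choice of a, the interpolator reproduces polynomials of
# degree <= 2 exactly, which we check on samples of f(t) = t**2.

def cubic_kernel(s, a=-0.5):
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def interpolate(samples, x):
    """Resample at fractional position x (valid for 1 <= x <= len-3)."""
    i = int(x)
    return sum(samples[i + k] * cubic_kernel(x - (i + k)) for k in (-1, 0, 1, 2))

sig = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]   # samples of f(t) = t**2
val = interpolate(sig, 2.5)              # exactly 2.5**2 = 6.25
```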

  • Requantization for transcoding of MPEG-2 intraframes

    Publication Year: 1999 , Page(s): 179 - 191
    Cited by:  Papers (37)  |  Patents (18)

    An investigation of requantization for transcoding of video signals is carried out. Specifically, MPEG-2 compatible discrete cosine transform (DCT) intraframe coding is addressed. The aim of this work is twofold: first, to provide a theoretical analysis of the transcoding problem, and second, to derive quantization methods for efficient transcoding based on the results of the analysis. The mean squared error (MSE) cost function is proposed for designing a quantizer with minimum distortion, resulting in up to 1.3 dB gain compared with the quantizer used in the MPEG-2 reference coder TM5. However, the MSE quantizer generally leads to a larger bit rate and may therefore only be applied locally to blocks with sensitive image content. A better rate-distortion performance can be provided by the maximum a posteriori (MAP) cost function. In critical cases, the MAP quantizer gives a 0.4 dB larger signal-to-noise ratio (SNR) at the same bit rate compared with the TM5 quantizer. The results are not limited to MPEG-2 and can be adapted to other coding schemes such as H.263 or JPEG.
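The cascading effect that motivates such transcoder-specific quantizer design can be seen with plain uniform quantizers (the steps and coefficient value are made up, and MPEG-2's quantization matrices and dead zones are omitted):

```python
# Requantization toy example: a DCT coefficient quantized with step q1 is
# dequantized and re-quantized with a coarser step q2. The cascaded error
# can exceed that of direct coarse quantization, which is what better
# reconstruction levels (e.g. MSE- or MAP-optimized) aim to reduce.

def quantize(c, q):
    return int(round(c / q))

def dequantize(level, q):
    return level * q

coeff = 109.0
r1 = dequantize(quantize(coeff, 8), 8)        # first generation: 112
cascaded = dequantize(quantize(r1, 20), 20)   # transcoded: 120 (error 11)
direct = dequantize(quantize(coeff, 20), 20)  # one-step coarse: 100 (error 9)
```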

  • Rate-distortion-constrained subband video coding

    Publication Year: 1999 , Page(s): 145 - 154
    Cited by:  Papers (6)  |  Patents (1)

    This paper introduces a subband video coding algorithm for operation over a continuum of rates from very low to very high. The key elements of the system are statistical rate-distortion-constrained motion estimation and compensation, multistage residual quantization, high order statistical modeling, and arithmetic coding. The method is unique in that it provides an improved mechanism for dynamic spatial and temporal coding. Motion vectors are determined in a nontraditional way, using a rate-distortion cost criterion. This results in a smoother and more consistent motion field, relative to that produced by conventional block matching algorithms. Control over the system computational complexity and performance may be exercised easily.
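The rate-distortion cost criterion for motion vectors can be sketched with a toy block search scoring J = SAD + λ·R(mv) (the frames, rate model, and λ below are our own, not the paper's):

```python
# Rate-distortion-constrained block matching: a slightly worse match with
# a cheaper (shorter) motion vector can beat the best pure-SAD match,
# which is what yields smoother, more consistent motion fields.

def sad(block, ref, dy, dx):
    """Sum of absolute differences against a shifted reference window."""
    return sum(abs(block[y][x] - ref[y + dy][x + dx])
               for y in range(len(block)) for x in range(len(block[0])))

def mv_bits(dy, dx):
    # Crude rate model: longer vectors cost more bits to code.
    return 1 + 2 * (abs(dy) + abs(dx))

def rd_motion_search(block, ref, lam):
    n = len(block)
    best_mv, best_j = None, float("inf")
    for dy in range(len(ref) - n + 1):
        for dx in range(len(ref[0]) - n + 1):
            j = sad(block, ref, dy, dx) + lam * mv_bits(dy, dx)
            if j < best_j:
                best_mv, best_j = (dy, dx), j
    return best_mv

block = [[10, 10], [10, 10]]
ref = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 10, 10], [0, 0, 10, 10]]
mv_sad = rd_motion_search(block, ref, lam=0.0)  # (2, 2): best pure match
mv_rd = rd_motion_search(block, ref, lam=4.0)   # (0, 0): cheaper vector wins
```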

  • Predictive vector quantization with intrablock prediction support region

    Publication Year: 1999 , Page(s): 293 - 295
    Cited by:  Papers (2)

    A predictive vector quantization (PVQ) structure is proposed, where the encoder uses a predictor based on an intrablock support region, followed by a modified vector quantizer stage. Simulation results show that a modification on a previously published PVQ system led to an improvement of 1 dB in PSNR for Lenna.


Aims & Scope

IEEE Transactions on Image Processing focuses on signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing.


Meet Our Editors

Editor-in-Chief
Scott Acton
University of Virginia
Charlottesville, VA, USA
E-mail: acton@virginia.edu 
Phone: +1 434-982-2003