Image Processing, IET

Issue 4 • August 2008

Displaying Results 1 - 6 of 6
  • Near-computation-free image encoding scheme based on adaptive decimation

    Publication Year: 2008 , Page(s): 175 - 184
    Cited by:  Papers (2)  |  Patents (1)
    PDF (583 KB)

    Adaptive decimation (AD) is a technique that aims to compress images with a very small amount of computation and memory. For images containing moderate amounts of textural content, the method exhibits satisfactory performance and in general provides good visual quality and acceptable coding fidelity at a low bit-rate of around 0.2 bpp. Although its complexity is relatively light compared with existing compression methods, it still involves a considerable amount of computation and would require a medium-speed processor to achieve real-time operation. In this paper, a novel image encoder based on the principles of AD is reported. The scheme is near-computation-free, as it involves on average a single fixed-point multiplication plus a few other summing and logical operations for every four pixels. Experimental results reveal that, despite the substantial reduction in complexity, the performance of the proposed method is similar to, if not better than, that of existing AD encoding algorithms.

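    The paper's exact encoder is not reproduced here; the following is a minimal, hypothetical sketch of block-adaptive decimation in Python. Each 2x2 block is either replaced by its mean (decimated) or kept intact, depending on a simple activity test; the function name, the activity measure and the threshold value are illustrative assumptions rather than the authors' method.

    import numpy as np

    def adaptive_decimation(img, threshold=8):
        """Classify each 2x2 block as 'decimate' (replace by its mean) or 'keep'.

        img       : 2D uint8 array with even height and width (greyscale image)
        threshold : illustrative activity threshold (not taken from the paper)
        """
        h, w = img.shape
        # Group pixels into 2x2 blocks: shape (h//2, w//2, 2, 2).
        blocks = img.reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2).astype(np.int32)
        means = blocks.mean(axis=(2, 3))
        # A block whose pixels all lie close to the block mean carries little
        # texture and is decimated; the rest are kept for finer coding.
        activity = np.abs(blocks - means[..., None, None]).max(axis=(2, 3))
        decimate_map = activity <= threshold
        return decimate_map, np.round(means).astype(np.uint8)
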
  • Fast intra-prediction with near pixel correlation approach for H.264/AVC system

    Publication Year: 2008 , Page(s): 185 - 193
    Cited by:  Papers (1)
    PDF (810 KB)

    A fast intra-prediction algorithm to determine the coding mode for 4 x 4 blocks is presented. A near pixel-correlation integration approach is proposed to compute a parameter for each mode along its predictive direction. These parameters, which can be estimated with simple computations, are useful for characterising the block mode. On the basis of the fast algorithm, only three to four candidate modes are selected from the nine prediction modes to find the best coding mode with minimum rate-distortion (RD) cost in H.264 coding. The complexity of intra-prediction can be reduced by about 60% compared with the exhaustive search, and the processing time of full H.264 coding can be reduced by about 45% while maintaining the same video quality, at the cost of a bit-rate increase of about 2%. Experimental results show that, compared with recent competing algorithms, the proposed algorithm achieves better coding efficiency with faster speed, higher quality, lower bit-rate and lower matching error.

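    Below is a hedged sketch of the general idea of candidate-mode pruning for 4 x 4 intra prediction: cheap directional activity measures rank a subset of the H.264 modes, and only the best few are passed to the rate-distortion check. The activity measures and the mode subset are illustrative assumptions; the paper's near pixel-correlation integration is not reproduced.

    import numpy as np

    def candidate_modes(block, n_candidates=4):
        """block: 4x4 luma array. Returns a short list of intra-mode indices to RD-test."""
        # Low variation along a direction suggests the corresponding prediction
        # mode will fit well, so modes are ranked by these cheap measures.
        vert = np.abs(np.diff(block, axis=0)).sum()           # mode 0: vertical
        horz = np.abs(np.diff(block, axis=1)).sum()           # mode 1: horizontal
        dc   = np.abs(block - block.mean()).sum()             # mode 2: DC (flat block)
        ddl  = np.abs(block[1:, :-1] - block[:-1, 1:]).sum()  # mode 3: diagonal down-left
        ddr  = np.abs(block[1:, 1:] - block[:-1, :-1]).sum()  # mode 4: diagonal down-right
        scores = {0: vert, 1: horz, 2: dc, 3: ddl, 4: ddr}    # subset of the nine modes
        return sorted(scores, key=scores.get)[:n_candidates]

    Only the surviving candidates are then evaluated with the full RD cost; pruning the mode search in this way is what yields the complexity reduction reported above.
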
  • Compression of multi-polarimetric SAR intensity images based on 3D-matrix transform

    Publication Year: 2008 , Page(s): 194 - 202
    Cited by:  Papers (2)
    PDF (416 KB)

    Algorithms for multi-polarimetric synthetic aperture radar (SAR) intensity image compression are investigated. First, the multi-polarimetric SAR intensity images (HH, HV and VV) are treated as a single 3D-matrix unit, and a 3D-matrix transform is applied to remove redundancies; it consists of a 1D discrete cosine transform (DCT) across the polarimetric channels and a 2D discrete wavelet transform (DWT) within each polarimetric SAR image plane. After the 3D-matrix transform, two methods are proposed to encode the 3D mixed coefficients: a bit-allocation encoding based on differential entropy, and a 3D set partitioning in hierarchical trees (SPIHT) encoding that improves on the conventional SPIHT. Because they do not process each channel image separately, both methods remove not only the redundancies within each image but also the redundancies among the polarimetric channels. Both theory and experimental results show that the proposed methods are efficient.

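    A minimal sketch of the decorrelating transform stage, assuming SciPy and PyWavelets are available: a 1D DCT is applied across the three polarimetric channels and a 2D DWT to each resulting plane. The coefficient-encoding stages (differential-entropy bit allocation and 3D SPIHT) are not shown, and the function name and wavelet choice are assumptions.

    import numpy as np
    from scipy.fft import dct
    import pywt

    def matrix_3d_transform(hh, hv, vv, wavelet="haar"):
        """Decorrelate three co-registered SAR intensity images before encoding."""
        cube = np.stack([hh, hv, vv], axis=0).astype(np.float64)  # 3 x H x W "3D-matrix" unit
        cube = dct(cube, axis=0, norm="ortho")                    # 1D DCT across HH/HV/VV
        # 2D DWT on each decorrelated plane; pywt.dwt2 returns (cA, (cH, cV, cD)).
        return [pywt.dwt2(plane, wavelet) for plane in cube]
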
  • Wavelet-based image denoising with the normal inverse Gaussian prior and linear MMSE estimator

    Publication Year: 2008 , Page(s): 203 - 217
    Cited by:  Papers (3)
    PDF (894 KB)

    A new spatially adaptive wavelet-based method is introduced for reducing noise in images corrupted by additive white Gaussian noise. It is shown that a symmetric normal inverse Gaussian (NIG) distribution is highly suitable for modelling the wavelet coefficients. To estimate the parameters of the distribution, a maximum-likelihood-based technique is proposed in which the Gauss-Hermite quadrature approximation is exploited to perform the maximisation in a computationally efficient way. A Bayesian minimum mean-squared error (MMSE) estimator is developed using the proposed distribution, and the variances corresponding to the noise-free coefficients are obtained from the Bayesian estimates over a local neighbourhood. A modified linear MMSE estimator that incorporates both intra-scale and inter-scale dependencies is proposed. The performance of the method is studied on typical noise-free images corrupted with simulated noise and compared with that of other state-of-the-art methods. The proposed method gives higher peak signal-to-noise ratio values than most other denoising techniques and provides images of good visual quality; its performance is also quite close to that of the state-of-the-art Gaussian scale mixture (GSM) method, but with much lower complexity.

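    The sketch below shows only a generic local linear MMSE shrinkage in the wavelet domain, assuming PyWavelets and SciPy; the NIG prior, its Gauss-Hermite maximum-likelihood parameter estimation and the inter-scale dependency of the paper are omitted. Each detail coefficient is scaled by sig_x^2 / (sig_x^2 + sig_n^2), with the signal variance estimated from a local window and the noise variance from a robust MAD estimate.

    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def lmmse_denoise(img, wavelet="db4", level=3, win=7):
        """Local linear MMSE shrinkage of wavelet detail coefficients."""
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
        # Robust noise-variance estimate from the finest diagonal subband (MAD / 0.6745).
        sigma_n2 = (np.median(np.abs(coeffs[-1][2])) / 0.6745) ** 2
        out = [coeffs[0]]                                        # approximation band kept as is
        for cH, cV, cD in coeffs[1:]:
            shrunk = []
            for band in (cH, cV, cD):
                local_e2 = uniform_filter(band ** 2, size=win)   # local E[y^2]
                sig_x2 = np.maximum(local_e2 - sigma_n2, 0.0)    # local signal variance
                shrunk.append(band * sig_x2 / (sig_x2 + sigma_n2 + 1e-12))
            out.append(tuple(shrunk))
        return pywt.waverec2(out, wavelet)
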
  • Fractal indexing with the joint statistical properties and its application in texture image retrieval

    Publication Year: 2008 , Page(s): 218 - 230
    Cited by:  Papers (3)  |  Patents (1)
    PDF (1274 KB)

    Fractal image coding is a block-based scheme that exploits the self-similarity hidden within an image. The fractal parameters generated by the block-based scheme are quantitative measurements of self-similarity and can therefore be used to construct image signatures. By combining fractal parameters and collage error, a set of new statistical fractal signatures is proposed: the histogram of collage error (HE), the joint histogram of contrast scaling and collage error (JHSE), and the joint histogram of range-block mean, contrast scaling and collage error (JHMSE). These signatures effectively extract and reflect the statistical properties intrinsic to texture images, and hence provide new statistical features for texture image retrieval and identification. Furthermore, to reduce the computational complexity of the JHMSE signature, it is simplified to HM (histogram of range-block mean) + JHSE and to HM + HS (histogram of contrast scaling) + HE, based on independence and distance equivalence; a mathematical analysis of this simplification is also carried out. The proposed fractal signatures are compared with existing fractal signatures, and experimental results show that HM+JHSE and HM+HS+HE achieve a higher retrieval rate with lower computational complexity.

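    A minimal sketch of how such histogram signatures might be assembled and compared, assuming a fractal coder has already produced the per-range-block parameters (block mean, contrast scaling, collage error). The bin count and value ranges are illustrative assumptions; in practice they would be fixed across the whole image database so that signatures stay comparable.

    import numpy as np

    def fractal_signature(means, scalings, errors, bins=16, err_max=50.0):
        """Concatenate HM, HS, HE and the joint JHSE histograms into one feature vector."""
        hm, _ = np.histogram(means, bins=bins, range=(0.0, 255.0), density=True)
        hs, _ = np.histogram(scalings, bins=bins, range=(-1.0, 1.0), density=True)
        he, _ = np.histogram(errors, bins=bins, range=(0.0, err_max), density=True)
        # Joint histogram of contrast scaling vs. collage error (JHSE).
        jhse, _, _ = np.histogram2d(scalings, errors, bins=bins,
                                    range=[[-1.0, 1.0], [0.0, err_max]], density=True)
        return np.concatenate([hm, hs, he, jhse.ravel()])

    def signature_distance(sig_a, sig_b):
        """L1 distance between two signatures: smaller means more similar textures."""
        return float(np.abs(sig_a - sig_b).sum())
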
  • Complexity analysis of morphological area openings and closings with set union

    Publication Year: 2008 , Page(s): 231 - 238
    PDF (211 KB)

    Area openings and closings are basic connected morphological operators that remove connected components failing an area criterion. They are widely used in image filtering (e.g. for noise reduction) and segmentation, and can be implemented efficiently using union-find-based algorithms. The authors show that the computational complexity of morphological area openings/closings based on disjoint set union is of order O(N) when N/λ, where λ is the area threshold and N the image size, is sufficiently large, as in most practical applications.

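    A hedged sketch of the disjoint-set idea for the binary special case: 4-connected foreground components whose area falls below the threshold are removed. The grayscale area opening analysed in the paper processes pixels in grey-level order and is more involved; only the union-find machinery is illustrated here, with illustrative names.

    import numpy as np

    def binary_area_opening(mask, lam):
        """Remove 4-connected foreground components of area < lam from a boolean mask."""
        h, w = mask.shape
        parent = np.arange(h * w)                  # disjoint-set forest, one node per pixel
        area = mask.astype(np.int64).ravel()       # component areas, maintained at the roots

        def find(x):                               # find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra
                area[ra] += area[rb]

        for i in range(h):                         # merge 4-connected foreground neighbours
            for j in range(w):
                if mask[i, j]:
                    if i > 0 and mask[i - 1, j]:
                        union((i - 1) * w + j, i * w + j)
                    if j > 0 and mask[i, j - 1]:
                        union(i * w + j - 1, i * w + j)

        # A foreground pixel survives only if its component's area reaches lam.
        keep = [bool(mask.flat[p]) and area[find(p)] >= lam for p in range(h * w)]
        return np.array(keep).reshape(h, w)

    For typical images the disjoint-set operations are close to constant time per pixel, which is consistent with the O(N) behaviour analysed in the paper when N/λ is large.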

Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.

Publisher
IET Research Journals
iet_ipr@theiet.org