
Image Processing, IET

Issue 4 • August 2010

  • Improving histogram-based reversible data hiding by interleaving predictions

    Page(s): 223 - 234

    Data hiding is an important way of realising copyright protection for multimedia. In this study, a new predictive method is proposed to enhance histogram-based reversible data hiding on grey-level images. A drawback of previously developed histogram-based reversible data hiding approaches is that the number of predictive values is smaller than the number of pixels in the image. In the proposed interleaving prediction method, the predictive values are as numerous as the pixels. All prediction errors are collected into a histogram, which yields higher peak values and improves the embedding capacity. Moreover, for each pixel, the difference between the original image and the stego-image remains within ±1, which guarantees that the peak signal-to-noise ratio (PSNR) of the stego-image stays above 48 dB. Experimental results show that the proposed approach achieves a larger capacity while maintaining good image quality, compared with other histogram-based approaches.
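
The histogram-shifting mechanism this line of work builds on can be sketched as follows. This is a minimal illustration on raw pixel values that assumes a genuinely empty zero bin exists; the paper instead builds the histogram from interleaved prediction errors, and all names here are illustrative:

```python
def embed(pixels, bits):
    """Histogram-shifting embed on raw grey values -- a sketch of the classic
    peak/zero-bin scheme; the paper applies the same idea to prediction errors.
    Capacity equals the number of pixels at the peak value."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    peak = max(range(256), key=lambda v: hist[v])            # most frequent value
    zero = min(range(peak + 1, 256), key=lambda v: hist[v])  # emptiest bin above it
    assert hist[zero] == 0, "a truly empty bin is assumed (else extra bookkeeping)"
    out, it = [], iter(bits)
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)                    # shift to free the bin at peak+1
        elif v == peak:
            b = next(it, None)
            out.append(v + 1 if b == 1 else v)   # bit 1 -> peak+1, bit 0 -> peak
        else:
            out.append(v)
    return out, peak, zero

def extract(stego, peak, zero):
    """Recover the bits and restore the cover exactly; every change is at most 1."""
    bits, orig = [], []
    for v in stego:
        if v == peak:
            bits.append(0); orig.append(peak)
        elif v == peak + 1:
            bits.append(1); orig.append(peak)
        elif peak + 1 < v <= zero:
            orig.append(v - 1)                   # undo the shift
        else:
            orig.append(v)
    return bits, orig
```

Because every pixel moves by at most one grey level, the ±1 distortion bound (and hence the 48 dB PSNR floor the abstract cites) follows directly.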

  • Urban features recognition and extraction from very-high resolution multi-spectral satellite imagery: a micro-macro texture determination and integration framework

    Page(s): 235 - 254

    This study presents the first experimental results on the integration of discrete wavelet transform (DWT) derived contexture (macro-texture) and grey-level co-occurrence matrices (GLCM) (micro-texture) in the recognition and extraction of the following selected urban land cover information from very-high spatial resolution Quickbird imagery: residential buildings, commercial buildings, roads/parking and green vegetation. The DWT filters capture the lower and mid-frequency texture information, whereas the GLCM captures the high-frequency textural components, for the same scene features. Besides the commonly used micro-texture (GLCM), the macro-texture (DWT) is modelled here to capture the contextual information defined by feature edges (size and shape). This edge information is arguably derived from the multi-scale and multi-directional components of the DWT. From the statistical significance testing of the per-pixel classification accuracy results with the z-score, it was found that the integrated feature sets comprising the Quickbird spectral bands, 3 × 3 mean-GLCM and the first level of the vertical-DWT sub-band outperformed all the other tested input primitives, with a z-score value of 2.25. The accuracy results showed that all three feature primitives were essential in improving the recognition and extraction of tested urban land cover in very-high spatial resolution Quickbird imagery.
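
As a rough illustration of the micro-texture primitive, a grey-level co-occurrence matrix for a single offset can be computed as below; the offset, the number of grey levels and the contrast feature are generic choices, not necessarily the paper's settings:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalised grey-level co-occurrence matrix for one (dx, dy) offset.
    img is a 2-D list of integers already quantised to `levels` grey levels."""
    h, w = len(img), len(img[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1   # count the co-occurring pair
    total = sum(sum(row) for row in m)
    return [[c / total for c in row] for row in m]

def contrast(p):
    """One common GLCM feature: expected squared grey-level difference."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
```

Features like this, computed per window, give the high-frequency texture channel that the study combines with DWT sub-bands and the raw spectral bands.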

  • Fusion of panchromatic and multispectral images using temporal Fourier transform

    Page(s): 255 - 260

    Image fusion techniques can be used to enhance the resolution of a multispectral (MS) image, which is helpful for categorisation, recognition and other decision-making processes. In this paper, a new class of image fusion algorithms is proposed that decomposes images into similar and non-similar information and fuses the corresponding non-similar parts. It is based on temporal Fourier analysis. The details of the panchromatic (PAN) image are fused with the details of the MS image after eliminating their similar information, which is done by high-pass filtering of their temporal Fourier transforms. The cut-off frequency of this filtering is obtained adaptively from the input MS and PAN images, resulting in minimal spectral and spatial distortion. The experimental results demonstrate the superior performance of the proposed method.
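
The detail-injection idea can be sketched on 1-D signals as follows. Note that the paper obtains its high-pass component from a temporal Fourier transform with an adaptively chosen cut-off; this sketch substitutes a simple moving-average low-pass, so it only illustrates the decomposition into similar and non-similar information:

```python
def lowpass(sig, k=3):
    """Moving average -- a stand-in for the paper's Fourier-domain low-pass."""
    half = k // 2
    out = []
    for i in range(len(sig)):
        window = sig[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def fuse(ms, pan):
    """Add the PAN 'non-similar' (high-frequency) residue to the MS signal,
    so spatial detail is injected without altering the MS low-frequency content."""
    detail = [p - lp for p, lp in zip(pan, lowpass(pan))]
    return [m + d for m, d in zip(ms, detail)]
```

In the paper the split between "similar" and "non-similar" is decided adaptively per image pair, which is what keeps both spectral and spatial distortion low.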

  • Multihypothesis recursive video denoising based on separation of motion state

    Page(s): 261 - 268

    A multihypothesis recursive video denoising filter (MRF) based on separation of motion state is proposed. For a video sequence degraded by additive white Gaussian noise, the local motion state is first detected by combining multiple hypotheses (temporal predictions). Different denoising schemes are then selected to suppress the noise according to the local motion state: areas detected as having stationary motion are filtered by the multihypothesis motion-compensated filter (MHMCF), whereas areas detected as having non-stationary motion are filtered by the self-cross-bilateral filter (SCBF). The definitions of the stationary and non-stationary motion states are given, and the threshold used to classify the motion state is set equal to the noise standard deviation. Simulation results show that MRF outperforms conventional denoising methods such as the joint filtering scheme, the spatio-temporal varying filter and MHMCF in both peak signal-to-noise ratio and visual quality.
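
The stationary/non-stationary decision can be sketched as below, using the abstract's rule that the classification threshold equals the noise standard deviation; the mean-absolute-difference score and the flat block representation are illustrative assumptions:

```python
def motion_state(block, hypotheses, sigma):
    """Classify a block as stationary if its best temporal prediction matches
    it to within the noise level (threshold = noise standard deviation)."""
    def mad(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    best = min(mad(block, h) for h in hypotheses)  # best of the multiple hypotheses
    return "stationary" if best <= sigma else "non-stationary"
```

Blocks labelled stationary would then be routed to the temporal MHMCF branch, and the rest to the spatial SCBF branch.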

  • Continuous wavelet transform for time-varying motion extraction

    Page(s): 271 - 282

    The widespread use of digital multimedia data has made the development of advanced processing techniques necessary, to enable its more efficient analysis. For video content, the estimation of motion is a fundamental step in the extraction of activity, for tracking, motion segmentation, video classification and other applications. The numerous methods that have been proposed over the years for the problem of motion estimation can be divided into two categories. The first group processes data in the spatial domain, and the other in the frequency domain. In this work, an original approach for the estimation of motion in the frequency domain is presented. The proposed method avoids limitations of illumination-based methods, such as sensitivity to local illumination variations and noise by employing the continuous wavelet transform (CWT). All video frames are processed simultaneously, so as to create a frequency-modulated (FM) signal, which contains the motion information in its frequency. The resulting FM signal is then processed using the CWT, which extracts its time-varying frequency and consequently its motion. This system is shown to be robust to local measurement noise and occlusions, as it processes the available data in a global, integrated manner. Experiments take place with both synthetic and real video sequences to demonstrate the capabilities of the proposed approach.

  • Multifocus image fusion based on redundant wavelet transform

    Page(s): 283 - 293

    Image fusion is a process of integrating complementary information from multiple images of the same scene such that the resultant image contains a more accurate description of the scene than any of the individual source images. A method for fusion of multifocus images is presented. It combines the traditional pixel-level fusion with some aspects of feature-level fusion. First, multifocus images are decomposed using a redundant wavelet transform (RWT). Then the edge features are extracted to guide coefficient combination. Finally, the fused image is reconstructed by performing the inverse RWT. The experimental results on several pairs of multifocus images show that the proposed method can achieve good results and exhibit clear advantages over the gradient pyramid transform and discrete wavelet transform techniques.
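
A common combination rule for such wavelet coefficients is sketched below: average the approximation band and keep the larger-magnitude detail coefficient. The paper refines the detail selection with extracted edge features, which this simplified sketch omits:

```python
def fuse_bands(approx_a, approx_b, detail_a, detail_b):
    """Fuse one decomposition level of two source images.
    Approximation band: average; detail band: choose the larger-magnitude
    coefficient, since in-focus regions give stronger high-frequency responses."""
    approx = [(x + y) / 2 for x, y in zip(approx_a, approx_b)]
    detail = [x if abs(x) >= abs(y) else y for x, y in zip(detail_a, detail_b)]
    return approx, detail
```

With a redundant (undecimated) transform there is one coefficient per pixel per band, which is what makes this per-position selection shift-invariant.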

  • Fusion of intensity and inter-component chromatic difference for effective and robust colour edge detection

    Page(s): 294 - 301

    Edge detection, especially from colour images, plays a very important role in many applications of image analysis, segmentation and recognition. Most existing methods extract colour edges either by fusing edges detected in each colour component separately or by detecting them in the intensity image, where inter-component information is ignored. In this study, an improved colour edge detection method is proposed whose significant advantage is the use of inter-component difference information. For any given colour image C, a grey D-image is defined as the accumulated differences between each pair of its colour components, and another grey R-image is then obtained by weighting the D-image and the grey intensity image G. The final edges are determined by fusing the edges extracted from the R-image and the G-image. Quantitative evaluations under various levels of Gaussian noise are carried out for further comparison. Comprehensive results on different test images show that this approach outperforms edge detection in traditional colour spaces such as RGB, YCbCr and HSV in terms of effectiveness and robustness.
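
The D-image and R-image construction can be sketched as follows on flattened channel lists; the intensity definition and the blending weight `w` are illustrative assumptions, since the abstract does not specify them:

```python
def d_image(r, g, b):
    """Accumulated absolute differences between each pair of colour components --
    this is the inter-component (chromatic) information most methods discard."""
    return [abs(pr - pg) + abs(pg - pb) + abs(pb - pr)
            for pr, pg, pb in zip(r, g, b)]

def r_image(r, g, b, w=0.5):
    """Weighted combination of the D-image and a grey intensity image G
    (w is an assumed weight). Edges are then extracted from both R and G
    and fused to give the final colour edge map."""
    grey = [(pr + pg + pb) / 3 for pr, pg, pb in zip(r, g, b)]
    return [w * d + (1 - w) * i for d, i in zip(d_image(r, g, b), grey)]
```

Note that achromatic pixels (equal components) contribute nothing to the D-image, so D responds only to chromatic transitions, complementing the intensity edges from G.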

  • Linear Gaussian blur evolution for detection of blurry images

    Page(s): 302 - 312

    Even though state-of-the-art digital cameras are equipped with auto-focusing and motion compensation functions, several other factors including limited contrast, inappropriate exposure time and improper device handling can still lead to unsatisfactory image quality such as blurriness. Indeed, blurry images make up a significant percentage of anyone's picture collections. Consequently, an efficient tool is needed to detect blurry images and label or separate them for automatic deletion, in order to preserve storage capacity and the quality of image collections. A new technique for automatic detection and removal of blurry pictures is presented. Initially, a set of interest points and local image areas is extracted. These areas are then evolved in time according to the conventional linear scale space. The gradient of the evolution curve through scale is then used to produce a 'blur graph' representing the probability of a picture being blurred or not. Complexity is kept low by applying a Monte Carlo-like technique for the selection of representative image areas and interest points and by implicitly estimating the gradient of the scale-space curve evolution. An exhaustive evaluation of the proposed technique is conducted to validate its performance in terms of detection accuracy and efficiency.
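
The core idea of watching how image structure decays through scale can be sketched in 1-D: smooth the signal repeatedly and record its gradient energy. A box blur stands in for the linear Gaussian scale space here, and the shape of the decay curve, not its absolute values, is the illustrative point:

```python
def grad_energy(sig):
    """Sum of squared first differences -- a simple sharpness measure."""
    return sum((b - a) ** 2 for a, b in zip(sig, sig[1:]))

def blur_curve(sig, steps=3):
    """Evolve the signal through coarser scales (box blur as a stand-in for
    the Gaussian scale space) and track the sharpness decay: a steep curve
    indicates a sharp input, a flat one an already-blurry input."""
    curve = [grad_energy(sig)]
    for _ in range(steps):
        sig = [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, len(sig) - 1)]) / 3
               for i in range(len(sig))]
        curve.append(grad_energy(sig))
    return curve
```

The paper builds its 'blur graph' from such scale-gradient behaviour measured at selected interest points rather than over the whole image, which keeps the cost low.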

  • Different non-linear diffusion filters combined with triangle method used for noise removal from polygonal shapes

    Page(s): 313 - 333

    A two-step process for removing noise from polygonal shapes is presented in this study. The authors represent a polygonal shape by its turning function and then apply a non-linear diffusion filter followed by a triangle method. In the first step, several different non-linear diffusion filters are applied to the turning function and their performance is later compared. Non-linear diffusion filters identify dominant vertices in a polygon and remove those vertices that are identified as noise or irrelevant features: vertices in the turning function that diffuse until the sides immediately surrounding them approach the same turning-function value are identified as noise and removed, whereas vertices that are enhanced are preserved without changing their coordinates and are identified as dominant. After carrying this process as far as it will go without introducing noticeable shape distortion, the authors switch to the triangle method for further removal of vertices that are to be treated as noise: in this second step, the vertices that form the smallest-area triangles are removed. Experimental results demonstrate that this two-step process successfully removes vertices that should be dismissed as noise while preserving dominant vertices that represent relevant features, giving a faithful description of the polygon's shape thanks to appropriate emphasis of dominant vertices.
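
The second, triangle-method step can be sketched directly: repeatedly remove the vertex whose triangle with its two neighbours has the smallest area. The stopping rule `keep` is an illustrative parameter, and the diffusion-based first step is not reproduced here:

```python
def tri_area(p, q, r):
    """Area of the triangle formed by three 2-D points."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def remove_smallest_triangles(poly, keep):
    """Drop the vertex forming the smallest-area triangle with its neighbours
    until only `keep` vertices remain (closed polygon, indices wrap around)."""
    pts = list(poly)
    while len(pts) > keep:
        n = len(pts)
        i = min(range(n),
                key=lambda k: tri_area(pts[k - 1], pts[k], pts[(k + 1) % n]))
        pts.pop(i)   # the removed vertex is treated as noise
    return pts
```

A vertex spanning a tiny triangle changes the outline very little, so removing it discards noise while the large-triangle (dominant) vertices survive untouched.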


Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.

Publisher
IET Research Journals
iet_ipr@theiet.org