
Image Processing, IET

Issue 2 • March 2011


  • Image fusion technique based on non-subsampled contourlet transform and adaptive unit-fast-linking pulse-coupled neural network

    Page(s): 113 - 121

    A new image fusion technique based on the non-subsampled contourlet transform (NSCT) and an adaptive unit-fast-linking pulse-coupled neural network (PCNN) is presented. NSCT provides multi-scale, multi-direction sparse decompositions of the source images. The basic PCNN model is then extended to an adaptive unit-fast-linking PCNN, which combines the advantages of the unit-linking and fast-linking PCNN variants. The new model uses the clarity of each pixel as the linking strength β, and the time matrix T of the sub-images is obtained via the synchronous pulse burst property. Finally, the sub-images are fused by analysing the time matrix T and the linking strength β. Experimental results show that the proposed approach outperforms several current methods.
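    The firing-time mechanism behind this fusion rule can be sketched with a minimal, generic PCNN (not the authors' adaptive unit-fast-linking model): each neuron's threshold decays until its stimulus exceeds it, neighbours of a fired neuron fire slightly earlier through the linking term, and coefficients are then taken from whichever source fired first. All parameter values below are illustrative.

```python
import numpy as np

def pcnn_time_matrix(S, beta=0.2, theta0=1.0, decay=0.8, n_iter=20):
    # S: stimulus (e.g. normalised sub-band clarity), values in [0, 1].
    # A neuron's linking input is 1 if any 4-neighbour fired last step;
    # the threshold decays geometrically, so clearer pixels fire earlier.
    # T records each neuron's first firing iteration.
    Y = np.zeros_like(S)
    theta = np.full_like(S, theta0)
    T = np.full(S.shape, np.inf)
    for t in range(1, n_iter + 1):
        P = np.pad(Y, 1)
        L = (P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] > 0).astype(float)
        U = S * (1 + beta * L)           # internal activity
        Y = (U > theta).astype(float)    # pulse output
        T = np.where((Y > 0) & np.isinf(T), t, T)
        theta = theta * decay
        theta[Y > 0] = 1e6               # fired neurons do not fire again
    return T

def fuse_coefficients(c_a, c_b, t_a, t_b):
    # Keep, per position, the sub-band coefficient whose neuron fired earlier.
    return np.where(t_a <= t_b, c_a, c_b)
```

    A clearer (brighter) stimulus yields a smaller entry in T, so the fusion rule favours the sharper source at each position.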

  • Efficient saliency detection based on Gaussian models

    Page(s): 122 - 131

    This study presents an efficient saliency model aimed mainly at content-based applications such as salient object segmentation. The input colour image is first pre-segmented into a set of regions using the mean shift algorithm. A set of Gaussian models is estimated from the segmented regions, and for each pixel a set of normalised colour likelihoods with respect to the Gaussian models is computed. The colour saliency measure and spatial saliency measure of each Gaussian model are evaluated from its colour distinctiveness and spatial distribution, respectively. Finally, pixel-wise colour and spatial saliency maps are generated by summing the colour and spatial saliency measures of the Gaussian models, weighted by the normalised colour likelihoods, and the two maps are combined into the final saliency map. Experimental results on a dataset of 1000 images with ground truth demonstrate that the proposed model achieves better saliency detection performance.
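    The weighted-combination step can be sketched as follows, assuming diagonal-covariance Gaussian colour models and precomputed per-model saliency measures (the mean shift pre-segmentation and the colour/spatial saliency evaluation are omitted; all names are illustrative):

```python
import numpy as np

def gaussian_likelihood(pixels, mean, var):
    # Likelihood of (N, 3) colour pixels under a diagonal-covariance Gaussian.
    d = pixels - mean
    expo = -0.5 * np.sum(d * d / var, axis=1)
    norm = np.prod(2.0 * np.pi * var) ** -0.5
    return norm * np.exp(expo)

def saliency_map(pixels, means, vars_, model_saliency):
    # Normalised likelihood of each pixel under each Gaussian colour model.
    L = np.stack([gaussian_likelihood(pixels, m, v)
                  for m, v in zip(means, vars_)], axis=1)
    L = L / (L.sum(axis=1, keepdims=True) + 1e-12)
    # Pixel saliency: likelihood-weighted sum of per-model saliency measures.
    return L @ np.asarray(model_saliency)
```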

  • Algorithm to decompose three-dimensional complex structures at the necks: tested on snow structures

    Page(s): 132 - 140

    The separation of overlapping particles in three-dimensional images is an image processing task with many fields of application. However, commonly used algorithms have difficulty identifying the individual particles and their connections in certain structures. The authors present an alternative part-decomposition algorithm that performs better on some of these structures. It is a special case of part decomposition in that it splits a structure into single particles at the necks, which are detected by their characteristic negative Gaussian curvature. The algorithm consists of three steps: cutting at negative Gaussian curvature, region growing and intersecting-plane minimisation. Its performance was tested against two state-of-the-art part-decomposition algorithms, the watershed and a skeleton-based algorithm, on strongly differing geometries taken from natural snow samples; both reference algorithms are known to have difficulty decomposing certain structures. Because the new algorithm uses a different geometric characteristic to decompose the structure, it is a good alternative to the existing ones: it decomposes 72% of the reference structure correctly, a better result than either of the other two algorithms.
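    The region-growing step of such a pipeline can be sketched generically as a flood fill that labels the connected components remaining after the neck voxels have been cut away (a standard labelling routine, not the authors' implementation):

```python
from collections import deque
import numpy as np

def region_grow_labels(mask):
    # Flood-fill labelling of connected foreground voxels (face
    # connectivity); each connected component gets one particle label.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in map(tuple, np.argwhere(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            p = queue.popleft()
            for axis in range(mask.ndim):
                for step in (-1, 1):
                    q = list(p)
                    q[axis] += step
                    q = tuple(q)
                    if (0 <= q[axis] < mask.shape[axis]
                            and mask[q] and not labels[q]):
                        labels[q] = current
                        queue.append(q)
    return labels
```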

  • Efficient fusion for infrared and visible images based on compressive sensing principle

    Page(s): 141 - 147

    In this study, the potential of the compressive sensing (CS) principle for fusing infrared (IR) and visible images is investigated. The theory of CS is first introduced briefly. Different reconstruction techniques are compared with respect to their performance in multisensor image recovery, and the minimum number of sampling measurements needed for perfect reconstruction is investigated. A novel self-adaptive weighted-average fusion scheme, based on the standard deviation of the measurements, is then developed to merge IR and visible images directly in the CS measurement domain, with total variation optimisation used as the recovery tool. Both subjective visual inspection and objective evaluation indicate that the presented method greatly enhances the definition of the fused results and achieves a high level of fusion quality in human perception of global information. Moreover, no prior structural information about the original images is required, and only simple fusion computations on the compressive measurements are needed, so the algorithm saves computational resources and improves fusion efficiency.
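    One plausible reading of the standard-deviation-based weighting is sketched below: each source's compressive measurement vector is weighted in proportion to the standard deviation of its own measurements (the CS sampling and total-variation recovery steps are omitted; the exact weighting rule is an assumption, not the authors' formula):

```python
import numpy as np

def fuse_measurements(y_ir, y_vis):
    # Self-adaptive weighted average of compressive measurements: the
    # source with more variable measurements gets the larger weight.
    s_ir, s_vis = np.std(y_ir), np.std(y_vis)
    w_ir = s_ir / (s_ir + s_vis + 1e-12)
    return w_ir * y_ir + (1.0 - w_ir) * y_vis
```

    A constant measurement vector thus contributes nothing, and two equally variable sources reduce to a plain average.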

  • Segmentation of textured cell images based on frequency analysis

    Page(s): 148 - 158

    A novel frequency-analysis algorithm for the segmentation of textured cells is presented. The algorithm is developed on an idealised simulation model and is applicable to real cell images. A simulated cell image is assumed to contain an ellipse-like region of textured interior embedded in a relatively flat background. Before a larger-sized discrete Fourier transform (DFT) is applied, the original image is expanded several times by extrapolating it with estimated background intensities. The idealised model shows a direct relationship between the boundaries of the cell regions and the inner zero-crossing lines in the large-sized DFT of the expanded images. The shape, size and orientation of the cell region are determined from parameters derived from the estimated inner zero-crossing line in the DFT, whereas its position is found by searching for the minimum of a moving average whose window has the same shape as the previously acquired cell region. Experimental results on both simulated and real microscopic cell images show the performance of the proposed algorithm.
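    The expansion step can be sketched as embedding the image in a larger canvas filled with an estimated background intensity before taking the larger-sized DFT (the zero-crossing analysis itself is omitted; using the median as the background estimate is an assumption):

```python
import numpy as np

def expanded_dft(img, factor=4, background=None):
    # Embed the cell image in a canvas `factor` times larger along each
    # axis.  Subtracting the estimated background first makes the padded
    # region exactly zero, so only the cell structure contributes.
    if background is None:
        background = np.median(img)
    h, w = img.shape
    canvas = np.zeros((factor * h, factor * w))
    canvas[:h, :w] = img - background
    return np.fft.fft2(canvas)
```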

  • Segmentation of noisy colour images using Cauchy distribution in the complex wavelet domain

    Page(s): 159 - 170

    This study proposes a novel segmentation technique for noisy colour images in which the heavy-tailed characteristics of the image are modelled by Cauchy distributions. First, the RGB colour bands of the noisy image are decomposed into multiresolution representations using the dual-tree complex wavelet transform. For each wavelet subband, a model is built assuming that the input coefficients are contaminated with signal-independent additive white Gaussian noise, and an estimation rule based on the bivariate Cauchy distribution is derived in the wavelet domain to obtain the noise-free coefficients. The bivariate model makes it possible to exploit the inter-scale dependencies of wavelet coefficients. The image is then roughly segmented into textured and non-textured regions using the bivariate model parameters of the denoised coefficients, and a multiscale segmentation is applied to the resulting regions. Finally, a novel statistical region-merging algorithm is introduced that measures the Kullback-Leibler distance between the estimated Cauchy models of neighbouring segments. Experiments demonstrate that the authors' algorithm yields robust segmentation results for noisy images containing artificial or natural noise.
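    For univariate Cauchy models the Kullback-Leibler distance used in the merging step has a closed form, which makes the merge test simple to sketch (the paper works with bivariate models; the threshold below is illustrative):

```python
import math

def kl_cauchy(loc1, scale1, loc2, scale2):
    # Closed-form KL divergence between two univariate Cauchy
    # distributions; it is symmetric in its arguments, unlike the
    # general KL divergence.
    num = (scale1 + scale2) ** 2 + (loc1 - loc2) ** 2
    return math.log(num / (4.0 * scale1 * scale2))

def should_merge(model_a, model_b, threshold=0.1):
    # Merge neighbouring segments whose estimated Cauchy models are close.
    return kl_cauchy(*model_a, *model_b) < threshold
```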

  • Digital image processing approach using combined wavelet hidden Markov model for well-being analysis of insulators

    Page(s): 171 - 183

    This study employs a digital image processing (DIP) technique for video surveillance (VS) of overhead power distribution line insulators. Such a VS technique is a promising means of augmenting distribution system automation (DSA). In semi-urban and urban areas of developing countries such as India, overhead distribution lines mostly run alongside roads, so VS from a surface vehicle equipped with moving cameras is more suitable than VS using either remote terminal units (RTUs) or a helicopter. The proposed approach has the advantage that a non-technical person can patrol by vehicle and capture images of the power lines and insulators for subsequent analysis. It also avoids difficulties of aerial VS, such as image blurring and camera sight control, while RTU-based VS suffers from the challenging maintenance of outdoor RTU installations. The study therefore applies DIP to video tracking of overhead power lines using template design and wavelet-based feature extraction, after which a hidden Markov model is used for well-being analysis to separate damaged insulators from good ones. Case studies validate the efficacy of the proposed methodology for insulator monitoring in support of ongoing DSA.

  • Refining structural texture synthesis approach

    Page(s): 184 - 189

    Structural textures are characterised by a repeating pattern, called a texton, and a placement rule that determines the nature of the periodicity. Based on this periodicity, textures are classified as homogeneous (perfectly periodic) or weakly homogeneous (quasi-periodic). Both kinds are assumed to be a combination of structural information, illumination (that is, the average brightness at different sites of the texture) and stochasticity that allows local variations. A top-down approach extracts the structural information, that is, the grid, a representative texton and the illumination component, from the original texture patch, and this information is then used to synthesise similar textures rather than exact copies. Experiments are carried out to improve the quality of the synthesised textures by incorporating multiple representative textons. A parameter, the homogeneity coefficient (HC), is also proposed for comparing the original texture patch with the synthesised texture; it captures variations in both the content and the size of the textons and can therefore be used to compare synthesis results. The suitability of the proposed synthesis approach and of the HC is verified by rigorous experimentation on weakly homogeneous artificial and standard structural textures. Efforts are also made to exploit the multi-core capability of the processor to speed up the analysis and synthesis phases.
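    The top-down synthesis model can be sketched as tiling a representative texton over the grid and adding a per-site illumination offset plus optional stochastic variation (a simplified reading of the abstract; all names and parameters are illustrative):

```python
import numpy as np

def synthesize(texton, grid, illumination, noise_std=0.0, seed=0):
    # Tile the representative texton according to a regular placement
    # grid, add each site's illumination offset, and optionally add
    # noise for local stochastic variation.
    rng = np.random.default_rng(seed)
    rows, cols = grid
    out = np.tile(texton, (rows, cols)).astype(float)
    th, tw = texton.shape
    for r in range(rows):
        for c in range(cols):
            out[r * th:(r + 1) * th, c * tw:(c + 1) * tw] += illumination[r, c]
    if noise_std:
        out += rng.normal(scale=noise_std, size=out.shape)
    return out
```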

  • Reversible and high-capacity data hiding in medical images

    Page(s): 190 - 197

    In this study, the authors introduce a highly efficient reversible data hiding system. It divides the image into tiles and shifts the histogram of each tile between its minimum and maximum frequency; data are then inserted at the grey level with the largest frequency to maximise hiding capacity. The scheme exploits a special property of medical images: the histograms of their non-overlapping tiles mostly peak around a few grey values while the rest of the spectrum is largely empty. The zeros (or minima) and peaks (maxima) of the tile histograms are relocated to embed the data, so the grey values of some pixels are modified. High capacity, high fidelity, reversibility and multiple data insertions are the key requirements of data hiding in medical images, and the authors show how the tile histograms can be exploited to meet them. Compared with applying the same data hiding method to the whole image, the scheme yields a 30-200% capacity improvement with better image quality, depending on the medical image content. Additional advantages of the proposed method include hiding data in regions of non-interest and better exploitation of spatial masking.
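    The core histogram-shifting idea can be sketched on a single tile: find a peak bin and an empty bin, shift the grey levels between them by one to free the bin next to the peak, then encode one bit per peak-valued pixel. This is a simplified Ni-style sketch (assuming an empty bin to the right of the peak), not the authors' exact multi-tile scheme:

```python
import numpy as np

def embed(tile, bits):
    # Histogram shifting on one 8-bit tile: free the bin next to the
    # peak, then encode bit 1 as peak+1 and bit 0 as peak, raster order.
    hist = np.bincount(tile.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))  # empty bin right of peak
    assert hist[zero] == 0 and hist[peak] >= len(bits), "tile cannot hold payload"
    out = tile.copy()
    out[(out > peak) & (out < zero)] += 1              # shift toward the empty bin
    flat = out.ravel()
    carriers = np.flatnonzero(flat == peak)[:len(bits)]
    flat[carriers] += np.asarray(bits, dtype=out.dtype)
    return out, peak, zero

def extract(stego, peak, zero, n_bits):
    # Read the bits back, then undo both the embedding and the shift.
    flat = stego.ravel()
    carriers = np.flatnonzero((flat == peak) | (flat == peak + 1))[:n_bits]
    bits = (flat[carriers] == peak + 1).astype(int).tolist()
    restored = stego.copy()
    rflat = restored.ravel()
    rflat[(rflat > peak) & (rflat <= zero)] -= 1
    return bits, restored
```

    Reversibility follows because the shift moves no pixel past the empty bin, so subtracting one from everything in (peak, zero] restores the original tile exactly.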

  • Metropolis Monte Carlo for tomographic reconstruction with prior smoothness information

    Page(s): 198 - 204

    The Metropolis Monte Carlo algorithm is applied to produce tomographic reconstructions from scarce projection data supplemented by prior information about the smoothness of the object. The prior information is represented by local energy functions that are added to the projection error. The proposed prior function extends previous border-filter proposals; the novelty introduced here is an adaptive control of the filter during the reconstruction process. The method was tested on synthetic phantoms and on reconstructions of a real object from a small number of projections. The technique gives good results for images with piecewise homogeneous regions and can be useful in applications where the scanning views span an angular range that is either limited or sparsely sampled, such as the detection of material defects in non-destructive testing or of particular anatomical components in medical images. Finally, the method is applied to an industrial case: the reconstruction of a stainless-steel BNC elbow from very few projections.
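    The reconstruction loop can be sketched as a standard Metropolis sampler whose energy is the projection error plus a weighted smoothness term (a fixed squared-difference prior stands in for the paper's adaptive border filter; all parameters are illustrative):

```python
import numpy as np

def metropolis_reconstruct(A, p, shape, lam=0.1, temp=0.01,
                           n_iter=20000, step=0.1, seed=0):
    # Energy = projection error + lam * local smoothness; single-pixel
    # Gaussian proposals are accepted by the Metropolis rule.
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])

    def energy(v):
        data = np.sum((A @ v - p) ** 2)             # projection error
        img = v.reshape(shape)
        smooth = np.sum(np.diff(img, axis=1) ** 2)  # smoothness prior energy
        return data + lam * smooth

    e = energy(x)
    for _ in range(n_iter):
        i = rng.integers(len(x))
        cand = x.copy()
        cand[i] += rng.normal(scale=step)
        e_cand = energy(cand)
        if e_cand < e or rng.random() < np.exp((e - e_cand) / temp):
            x, e = cand, e_cand
    return x.reshape(shape)
```

    Moves that lower the energy are always accepted; uphill moves survive with probability exp(-ΔE/temp), so at low temperature the chain settles near a smooth image consistent with the projections.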


Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.



Publisher
IET Research Journals
iet_ipr@theiet.org