
Image Processing, IET

Issue 5 • July 2013


Displaying Results 1 - 13 of 13
  • Image fusion scheme based on modified dual pulse coupled neural network

    Page(s): 407 - 414

    Image fusion combines information from multiple images of the same scene to obtain a composite image that is more suitable for further image processing tasks. This study presents an image fusion scheme based on a modified dual pulse coupled neural network (PCNN) in the non-subsampled contourlet transform (NSCT) domain. NSCT overcomes the lack of shift invariance in the contourlet transform. The original images are decomposed to obtain the coefficients of the low-frequency and high-frequency subbands. In this fusion scheme, a new sum-modified Laplacian of the low-frequency subband image, which represents the edge features of that subband in the NSCT domain, is presented and used to motivate the modified dual PCNN. For fusion of the high-frequency subband coefficients, spatial frequency is used as the gradient feature of the images to motivate the dual-channel PCNN and to suppress Gibbs phenomena. Experimental results show that the proposed scheme significantly improves fusion performance and outperforms conventional methods such as the traditional discrete wavelet transform, the dual-tree complex wavelet transform and PCNN in terms of both objective criteria and visual appearance.
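    The focus measure named in the abstract can be illustrated with the classical sum-modified Laplacian it builds on; the paper's NSCT-domain modification is not detailed in the abstract, so this plain-Python sketch shows only the standard form over a 3x3 window:

    ```python
    def sum_modified_laplacian(img, i, j):
        """Classical sum-modified Laplacian at pixel (i, j) over a 3x3 window.

        `img` is a 2-D list of grey values. Illustrative only: the paper's
        NSCT-domain variant is not reproduced here.
        """
        total = 0.0
        for y in range(i - 1, i + 2):
            for x in range(j - 1, j + 2):
                # modified Laplacian: absolute second differences in both axes
                ml = abs(2 * img[y][x] - img[y - 1][x] - img[y + 1][x]) \
                   + abs(2 * img[y][x] - img[y][x - 1] - img[y][x + 1])
                total += ml
        return total

    # A flat patch gives zero response; an edge-like patch a positive one.
    flat = [[5] * 5 for _ in range(5)]
    edge = [[0, 0, 10, 10, 10] for _ in range(5)]
    print(sum_modified_laplacian(flat, 2, 2))  # 0.0
    print(sum_modified_laplacian(edge, 2, 2))  # 60.0
    ```

    A larger response marks a more salient (edge-rich) region, which is what makes the measure a useful input for driving the fusion network.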

  • Myocardium segmentation in strain-encoded (SENC) magnetic resonance images using graph-cuts

    Page(s): 415 - 422

    Evaluation of cardiac function using strain-encoded (SENC) magnetic resonance (MR) imaging is a powerful tool for imaging the deformation of the left and right ventricles. However, automated analysis of SENC images is hindered by their low signal-to-noise ratio. In this work, the authors propose a method to segment the left and right ventricular myocardium simultaneously in SENC-MR short-axis images. Myocardium seed points are automatically selected using a skeletonisation algorithm and used as hard constraints for the graph-cut optimisation. The method is based on a modified formulation of the graph-cuts energy term: a signal probabilistic model, rather than the image histogram, is used to capture the characteristics of the blood and tissue signals and is included in the cost function of the graph-cuts algorithm. The method is applied to SENC datasets from 11 human subjects (five normal and six patients with known myocardial wall-motion abnormality). The segmentation results of the proposed method are compared with those from both manual segmentation and the conventional histogram-based graph-cuts algorithm. The results show that the proposed method outperforms the histogram-based graph-cuts algorithm, especially in segmenting the thin structure of the right ventricle.

  • New approach for identifying hereditary relation using primary fingerprint patterns

    Page(s): 423 - 431

    In this work, an effort has been made to explore the existence of hereditary relations by analysing the central region of fingerprints using global ridge patterns. So far, fingerprint identification systems have been developed mainly for biometric applications and criminal investigations. The present work attempts to identify hereditary relations among inter- and intra-class family members using fingerprints. Fingerprints are obtained from 324 subjects of 54 families, each comprising a group of three generations. Ridge pattern types and ridge orientation maps are estimated for the analysis. The results show that 85.18% of intra-class family members have similar ridge patterns. The study of the ridge orientation map indicates that the angles of the ridge orientation field vary in the same way for intra-class family members. Furthermore, a specific range of orientation-field angles occurs the maximum number of times for intra-class members, and the location of the median value of the repetitions of orientation angles is the same. Hence, it is evident that intra-class members have certain similarities in their fingerprints, which in turn reflect the presence of a hereditary relation.

  • Robust quantisation index modulation-based approach for image watermarking

    Page(s): 432 - 441

    In this study, a robust image watermarking method based on quantisation index modulation (QIM) is proposed. Conventional QIM methods employ a fixed quantisation step-size, which results in poor robustness. Here, the quantisation step-size is adaptively selected using a power-law function and, with the aid of side information, the proposed method is invariant to gain and rotation attacks. To keep the watermark imperceptible and increase its robustness, the low-frequency components of high-entropy image blocks are used for data hiding. The error probability and embedding distortion are derived analytically and assessed by simulations on artificial signals. The optimum parameter of the power-law function is obtained by minimising the error probability. Experimental results confirm the superiority of the proposed technique against common attacks in comparison with recently proposed methods.
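    The baseline that the paper improves on can be sketched in a few lines. This is textbook QIM with a fixed step-size — exactly the fixed-step scheme whose poor robustness the abstract mentions; the paper's adaptive, power-law step selection is not reproduced here:

    ```python
    def qim_embed(x, bit, delta):
        """Embed one bit into sample x with a dithered uniform quantiser.

        Baseline QIM with a fixed step-size delta; the paper instead selects
        delta adaptively through a power-law function (not reproduced here).
        """
        d = bit * delta / 2.0            # dither: 0 for bit 0, delta/2 for bit 1
        return delta * round((x - d) / delta) + d

    def qim_extract(y, delta):
        """Decode by picking the quantiser lattice nearest to the received sample."""
        return min((0, 1), key=lambda bit: abs(y - qim_embed(y, bit, delta)))

    delta = 4.0
    for x, bit in [(10.3, 1), (-7.9, 0), (0.6, 1)]:
        y = qim_embed(x, bit, delta)
        assert qim_extract(y, delta) == bit   # exact recovery, noiseless case
    print("all bits recovered")
    ```

    Robustness comes from the half-step separation between the two lattices: any perturbation smaller than delta/4 cannot flip a bit, which is why the choice of delta (fixed vs adaptive) governs the robustness/imperceptibility trade-off.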

  • Image denoising algorithm based on contourlet transform for optical coherence tomography heart tube image

    Page(s): 442 - 450

    Optical coherence tomography (OCT) is becoming an increasingly important imaging technology in the biomedical field. However, its application is limited by ubiquitous noise. In this study, the noise of OCT heart tube images is first verified as multiplicative based on local statistics (i.e. the linear relationship between the mean and the standard deviation of certain flat areas). The variance of the noise is evaluated in the log-domain. On this basis, a joint probability density function is constructed to take into account the inter-direction dependency in the contourlet domain of the logarithmically transformed image. A bivariate shrinkage function is then derived to denoise the image by maximum a posteriori estimation. Systematic comparative experiments are conducted on synthetic images, OCT heart tube images and other OCT tissue images, using subjective assessment and objective metrics. The experimental results are analysed in terms of the denoising quality and the degree of improvement of the proposed algorithm over the wavelet-based algorithm. The results show that the proposed algorithm improves the signal-to-noise ratio while preserving edges, and has particular advantages on images containing multi-directional information, such as OCT heart tube images.
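    Bivariate shrinkage of this kind follows the well-known Sendur-Selesnick MAP estimator in form. The sketch below shows that standard rule for a pair of dependent coefficients; the paper's inter-direction coupling in the contourlet domain and its parameter estimation are not reproduced, so treat the pairing as an assumption:

    ```python
    from math import sqrt

    def bivariate_shrink(y1, y2, sigma_n, sigma):
        """Sendur-Selesnick bivariate MAP shrinkage of coefficient y1, given a
        statistically dependent coefficient y2 (a parent in the wavelet case;
        the paper couples directional neighbours in the contourlet domain).

        sigma_n: noise standard deviation; sigma: marginal signal std.
        """
        r = sqrt(y1 * y1 + y2 * y2)
        if r == 0.0:
            return 0.0
        # soft-threshold the joint magnitude, keep y1's direction
        gain = max(r - sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / r
        return gain * y1

    # Small coefficients are shrunk to zero, large ones nearly preserved.
    print(bivariate_shrink(0.5, 0.2, sigma_n=1.0, sigma=2.0))  # 0.0
    print(bivariate_shrink(10.0, 4.0, sigma_n=1.0, sigma=2.0))
    ```

    The dependency helps because a coefficient with a strong dependent neighbour is more likely to be signal than noise, so it is shrunk less than a univariate rule would shrink it.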

  • Robust image watermarking using dihedral angle based on maximum-likelihood detector

    Page(s): 451 - 463

    This study presents a robust image watermarking method based on geometric modelling. In this method, nine samples of the approximation coefficients of an image block are used to construct a plane in three-dimensional (3D) space. The authors change the dihedral angle formed between the created plane and the x-y plane for data embedding. To preserve the imperceptibility of the watermark, geometrical computations are used to minimise the embedding distortion. A maximum-likelihood detector is implemented to extract the watermark from the noisy channel at the receiver side. The authors experimentally determine the probability density function of the embedding dihedral angle for Gaussian samples. Owing to embedding in the dihedral angle between two planes, the proposed scheme has high robustness to gain attacks. In addition, by using the low-frequency components of the image blocks for data embedding, high robustness against noise and compression attacks is achieved. Experimental results confirm the validity of the theoretical analysis given in this study and show the superiority of the method over similar techniques. The proposed method is also robust to a wide range of attacks, namely Gaussian filtering, median filtering, JPEG compression, Gaussian noise and scaling.
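    The dihedral angle between a block's fitted plane and the x-y plane can be computed as sketched below. The grid coordinates x, y in {-1, 0, 1} and the least-squares plane fit are illustrative assumptions — the abstract does not specify the paper's plane construction or its embedding rule:

    ```python
    from math import acos, degrees, sqrt

    def dihedral_angle(block):
        """Dihedral angle (degrees) between the least-squares plane through a
        3x3 block of coefficient values and the x-y plane.

        Assumes grid coordinates x, y in {-1, 0, 1}, so the slopes of the
        fitted plane z = a*x + b*y + c decouple into simple correlation sums.
        """
        coords = (-1, 0, 1)
        sxz = syz = 0.0
        for i, y in enumerate(coords):
            for j, x in enumerate(coords):
                z = block[i][j]
                sxz += x * z
                syz += y * z
        a, b = sxz / 6.0, syz / 6.0   # sum of x*x (and y*y) over the grid is 6
        # Plane normal is (a, b, -1); the x-y plane's normal is (0, 0, 1).
        return degrees(acos(1.0 / sqrt(a * a + b * b + 1.0)))

    flat = [[7, 7, 7]] * 3            # constant block: angle 0
    tilted = [[0, 1, 2]] * 3          # unit slope along x: angle 45
    print(round(dihedral_angle(flat), 6))    # 0.0
    print(round(dihedral_angle(tilted), 6))  # 45.0
    ```

    Gain attacks scale all nine samples by the same factor, which scales the fitted plane's slopes and z-offset together but leaves the ratio structure that the angle-based detector exploits largely intact — the intuition behind the robustness claim.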

  • Segmentation and localisation of whole slide images using unsupervised learning

    Page(s): 464 - 471

    Digital pathology has been clinically approved for over a decade as a replacement for traditional methods of diagnosis. Digitising a whole slide into high-resolution images raises many challenges, including memory and time management: whole slide images require a huge amount of memory if the tissue is not pre-localised for the scanner. The authors propose a set of clinically motivated features representing colour, intensity, texture and location to segment and localise the tissue in the whole slide image. This step saves both scanning time and memory; on average, it reduces scanning time by up to 40%, depending on the tissue type. The tissue is segmented and localised by unsupervised clustering. Unlike supervised methods, this approach does not require ground truth, which is time-consuming for domain experts to produce. The proposed method achieves an average localisation accuracy of 96% on a large dataset and outperforms previously published supervised learning results on the same data.
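    Tissue/background separation by unsupervised clustering can be sketched with plain k-means on a single intensity feature. This is an assumption-laden toy: the paper clusters richer colour/intensity/texture/location features, and its exact clustering algorithm is not stated in the abstract:

    ```python
    def kmeans_1d(values, k=2, iters=20):
        """Plain k-means on one feature per pixel (intensity here)."""
        lo, hi = min(values), max(values)
        # initialise centres spread across the value range
        centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for v in values:
                nearest = min(range(k), key=lambda c: abs(v - centres[c]))
                clusters[nearest].append(v)
            centres = [sum(c) / len(c) if c else centres[i]
                       for i, c in enumerate(clusters)]
        return centres

    # Toy example: two intensity populations (e.g. glass background vs tissue)
    pixels = [8, 10, 12, 9, 11, 200, 210, 190, 205, 195]
    print(sorted(kmeans_1d(pixels)))   # [10.0, 200.0]
    ```

    Because no labels are involved, the same procedure runs on any new slide without expert annotation, which is the practical advantage the abstract emphasises.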

  • Low-contrast satellite images enhancement using discrete cosine transform pyramid and singular value decomposition

    Page(s): 472 - 483

    This study presents a satellite image contrast enhancement technique based on the discrete cosine transform (DCT) pyramid and singular value decomposition (SVD), in contrast to methods based on wavelet decomposition and SVD, which can fail to produce satisfactory results for some low-contrast images. With the proposed method, an input image is decomposed into a low sub-band image and reversed-L-shaped blocks containing the high-frequency coefficients of the DCT pyramid. The singular value matrix of the equalised low sub-band image is then estimated from the combination of the singular value matrix of the low sub-band image and that of its global histogram equalisation. The qualitative and quantitative performance of the proposed technique is compared with that of conventional equalisation methods, such as general and local histogram equalisation, as well as state-of-the-art techniques such as singular value equalisation. Moreover, the proposed technique is contrasted with the technique based on the discrete wavelet transform (DWT) and SVD (DWT-SVD), and with the DCT-SVD technique. The experimental results show that the proposed method outperforms both the conventional and the state-of-the-art techniques.
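    One building block of the pipeline — the global histogram equalisation whose singular value matrix is combined with that of the low sub-band — can be sketched directly in plain Python; the DCT pyramid and SVD combination steps are omitted:

    ```python
    def equalise(img, levels=256):
        """Global histogram equalisation of a grey-level image (2-D list).

        Sketch of one step of the paper's method only; the singular-value
        combination that follows is not reproduced.
        """
        hist = [0] * levels
        for row in img:
            for v in row:
                hist[v] += 1
        n = len(img) * len(img[0])
        # cumulative distribution -> new grey-level mapping
        cdf, acc = [0] * levels, 0
        for g in range(levels):
            acc += hist[g]
            cdf[g] = acc
        lut = [round(cdf[g] / n * (levels - 1)) for g in range(levels)]
        return [[lut[v] for v in row] for row in img]

    low_contrast = [[100, 100], [101, 102]]   # grey values crammed together
    print(equalise(low_contrast))             # [[128, 128], [191, 255]]
    ```

    The equalised result is often over-stretched on its own; combining its singular values with those of the original, as the paper does, is one way to temper that.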

  • Compound image compression based on unified LZ and hybrid coding

    Page(s): 484 - 499

    This study proposes a unified LZ and hybrid coding (ULHC) method for compound image and video compression with visually lossless quality and a high compression ratio. The method is macroblock-based, for ultra-low coding latency and compatibility with conventional hybrid video coding standards such as H.264 and MPEG-2. First, each macroblock is coded by two tools: (i) gzip, a popular lossless LZ coding tool, modified to be macroblock-oriented and seamlessly unifiable with a lossy hybrid coding tool; and (ii) H.264, an advanced lossy hybrid coding tool. Rate-distortion optimisation then selects either the modified gzip or H.264 as the final coding. To seamlessly unify the two coding tools for maximum quality and high compression ratio, the modified gzip uses the most recently reconstructed and specially serialised macroblock data as its dictionary. Experimental results show that, for images and videos composed of natural or synthesised pictures, text and graphics, the proposed method provides a higher peak signal-to-noise ratio and better subjective quality than H.264 at the same bitrate, and also achieves a much higher compression ratio than gzip without any visual quality loss. In effect, ULHC achieves partly lossless and partly near-lossless coding with a high compression ratio.
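    The dictionary-seeded LZ idea can be imitated with Python's zlib, which supports preset dictionaries. Here the previously reconstructed macroblock seeds the coder, loosely mirroring ULHC's modified-gzip path; the H.264 path and the rate-distortion selection between the two tools are omitted, and the byte strings are stand-ins:

    ```python
    import zlib

    def lz_code_block(block, dictionary):
        """Losslessly code one macroblock with DEFLATE, seeded with a preset
        dictionary -- here the previously reconstructed macroblock."""
        comp = zlib.compressobj(9, zdict=dictionary)
        return comp.compress(block) + comp.flush()

    previous = b"The quick brown fox jumps over the lazy dog near the river."
    current = previous                  # a macroblock very similar to the last

    with_dict = lz_code_block(current, previous)
    without = zlib.compress(current, 9)
    print(len(with_dict) < len(without))   # True: the dictionary pays off

    # The decoder must be seeded with the same dictionary.
    decomp = zlib.decompressobj(zdict=previous)
    assert decomp.decompress(with_dict) + decomp.flush() == current
    ```

    Text and graphics regions in compound content repeat earlier macroblocks almost verbatim, which is exactly when a dictionary seeded from the last reconstruction beats both plain gzip and lossy coding.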

  • Extraction of interest points by Harris interest operator for synthetic aperture radar image coregistration

    Page(s): 500 - 513

    In image coregistration for synthetic aperture radar (SAR) interferometry, a set of points is selected for tie-point matching. Generally, distinctive points are selected as tie points to improve the reliability of coregistration; however, such points cannot always be found in an image. Points on a regular grid are therefore commonly selected instead, which makes the results less reliable than those obtained with distinctive points. In this study, points detected by the Harris interest operator (HIO) are used as tie points for SAR image coregistration. After wavelet decomposition of a SAR image, most of the energy is retained in the low-pass subimage, which benefits the extraction of interest points by the HIO at the highest decomposition level. Three pairs of SAR images of the Hong Kong area are used to demonstrate the effectiveness of the proposed method. For comparison, image coregistration based on grid points and on interest points is implemented with different numbers of points. Analysis of the experimental results shows that the quality of the interferogram is greatly improved by interest points and that coregistration with interest points is more reliable.
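    The Harris interest operator's corner response can be sketched in plain Python. This is the standard single-scale operator; the wavelet decomposition that precedes it in the paper is omitted:

    ```python
    def harris_response(img, y, x, k=0.04):
        """Harris corner response R = det(M) - k * trace(M)^2 at pixel (y, x),
        with the structure tensor M summed over a 3x3 window.

        `img` is a 2-D list; gradients are plain central differences.
        """
        sxx = syy = sxy = 0.0
        for j in range(y - 1, y + 2):
            for i in range(x - 1, x + 2):
                ix = (img[j][i + 1] - img[j][i - 1]) / 2.0
                iy = (img[j + 1][i] - img[j - 1][i]) / 2.0
                sxx, syy, sxy = sxx + ix * ix, syy + iy * iy, sxy + ix * iy
        det = sxx * syy - sxy * sxy
        return det - k * (sxx + syy) ** 2

    # Bright square whose top-left corner sits at (4, 4)
    img = [[10 if y >= 4 and x >= 4 else 0 for x in range(9)] for y in range(9)]
    print(harris_response(img, 4, 4))   # corner: positive (7775.0)
    print(harris_response(img, 6, 4))   # edge: negative (-900.0)
    print(harris_response(img, 2, 2))   # flat: zero (0.0)
    ```

    Corners are where both eigenvalues of M are large (positive R), which is why Harris points make more distinctive, and hence more reliably matchable, tie points than grid points.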

  • Visual illumination compensation for face images using light mapping matrix

    Page(s): 514 - 522

    Illumination variation is a challenging issue in face recognition. In many conventional approaches, the low-frequency coefficients are simply discarded to compensate for illumination variations, which degrades the visual quality. To address this problem, an adaptive normalisation-based method is proposed in this study. Each image is normalised according to its lighting attributes by mapping the low-frequency components to the normal condition, instead of discarding them, using a novel statistical construct called the light mapping matrix. The method preserves the low-frequency facial features, maximises the intra-individual correlation and improves the visual quality of face images under different lighting conditions.

  • Automatic classification of medical X-ray images: hybrid generative-discriminative approach

    Page(s): 523 - 532

    A new approach is presented to improve the classification performance on medical X-ray images, based on the combination of generative and discriminative classification. A set of labelled X-ray images from 116 categories covering different parts of the body is given, and the aim is to construct a classification model that assigns any new X-ray image to one of the predefined categories. The classification task starts by extracting local invariant features from all images. A generative model, probabilistic latent semantic analysis (PLSA), is applied to the extracted features to provide a more stable representation of the images. This representation is then used as input to a discriminative support vector machine classifier to construct the classification model. The experiments are based on the ImageCLEF 2007 medical database. The classification performance is evaluated on the entire dataset as well as at the class-specific level, and compared with other classification techniques using various image representations on the same database. The comparison shows that superior performance is achieved, especially for classes with fewer training images: only 7 out of 116 classes have an accuracy rate below 60%, in contrast to the results obtained with other classification approaches. This is attained by exploiting the ability of PLSA to generate an image representation that is discriminative enough for accurate classification and more robust when less training data are available. The total classification rate obtained on the entire dataset is 92.5%.

  • High throughput and energy efficient two-dimensional inverse discrete cosine transform architecture

    Page(s): 533 - 541

    This study presents an energy-efficient, high-throughput two-dimensional inverse discrete cosine transform (IDCT) architecture, suitable for high-speed, high-quality image and video processing applications. The proposed architecture is based on the Arai-Agui-Nakajima IDCT algorithm. The high throughput rate is accomplished through a high degree of pipelining in the multipliers of the architecture. The distribution properties of the input signal, that is, the high percentage of zero coefficients, are exploited to lower power consumption: whenever an all-zero column enters the architecture, the corresponding pipeline stages are deactivated to reduce switching activity. Furthermore, a novel approach to the transposition structure is introduced. All-zero columns are not loaded into the transposition memory; instead, they are encoded by a single bit in a parallel data path, reducing the power consumption of the transposition process by 19%. The proposed architecture, implemented in a 65 nm field-programmable gate array, provides a throughput rate of 2.722 Gpixel/s at a power consumption of 0.831 W. The experimental results and the comparison with previous work validate the efficiency and suitability of the proposed implementation for high-speed, high-quality video decoding applications.
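    The zero-column handling in the transposition stage can be modelled in software: an all-zero column is represented by a 1-bit flag in a parallel path rather than being written to the transposition memory. The function below is an illustrative behavioural model only, not the hardware design:

    ```python
    def transpose_with_zero_flags(block):
        """Behavioural model of the proposed transposition stage: all-zero
        columns are flagged with a single bit instead of being stored.

        `block` is an 8x8 list of IDCT input coefficients, rows first.
        """
        cols = list(zip(*block))                        # transpose to columns
        flags = [all(c == 0 for c in col) for col in cols]
        stored = [col for col, zero in zip(cols, flags) if not zero]
        return flags, stored

    # Sparse IDCT input: only the first two columns carry coefficients,
    # which is typical after quantisation of natural image blocks.
    block = [[64, 3, 0, 0, 0, 0, 0, 0] for _ in range(8)]
    flags, stored = transpose_with_zero_flags(block)
    print(flags.count(True))   # 6 columns skipped
    print(len(stored))         # 2 columns actually stored
    ```

    Since quantised DCT blocks are usually dominated by zero columns, skipping their storage and the corresponding pipeline activity is where the reported power savings come from.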


Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.


Publisher
IET Research Journals
iet_ipr@theiet.org