
IET Image Processing

Issue 1 • January 2015


Displaying Results 1 - 9 of 9
  • Single-frame image super-resolution inspired by perceptual criteria

    Page(s): 1 - 11
    PDF (684 KB)

    In this study, the authors consider the problem of image super-resolution (SR) in terms of perceptual criteria. Existing SR methods treat the traditional mean-squared error (MSE) as an irreplaceable objective function. However, MSE has been widely criticised for being inconsistent with human visual perception. Perceptual criteria, including the structural similarity (SSIM) index and the feature similarity (FSIM) index, have been reported to be more effective in assessing image quality. Therefore, SSIM and FSIM are adopted for the SR task in this study. Specifically, the authors first propose a reformed principal component analysis (PCA), named visual perceptual PCA (VP-PCA), which adopts SSIM as the objective function. Subsequently, to accomplish the SR task, the authors cluster the training data and perform VP-PCA on each cluster to calculate the coefficients. Finally, based on the principle of FSIM, the traditional SR results and the SR results using VP-PCA are combined to form the fused results. Experimental results show the superiority of the proposed method over several state-of-the-art methods in both quantitative and visual comparisons.

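The abstract above builds VP-PCA around the SSIM index as an objective function. A minimal numpy sketch of the SSIM definition (single-window, global statistics; practical implementations use a sliding Gaussian window, and the constants `k1`, `k2` follow the usual defaults from Wang et al.'s definition):

```python
import numpy as np

def ssim_global(x, y, L=255, k1=0.01, k2=0.03):
    """Simplified single-window SSIM index computed from global
    statistics (not the sliding-window version used in practice)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1, and any distortion lowers the score, which is what makes SSIM usable as an objective in place of MSE.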
  • Local gradient-based illumination invariant face recognition using local phase quantisation and multi-resolution local binary pattern fusion

    Page(s): 12 - 21
    PDF (593 KB)

    A local illumination-insensitive face recognition algorithm is proposed that combines image normalisation with illumination-invariant descriptors. An illumination-insensitive representation of the image is obtained from the ratio of gradient amplitude to the original image intensity and is partitioned into smaller sub-blocks. Local phase quantisation and multi-scale local binary patterns extract the characteristics of the sub-regions. Distance measurements of local nearest-neighbour classifiers are fused at the score level to find the best match, and decision-level fusion combines the results of the two matching techniques. Entropy, class posterior probability and mutual information are utilised as the weights of the fusion components. Simulation results on the YaleB, Extended YaleB, AR, Multi-PIE and FRGC databases show the improved performance of the proposed algorithm under severe illumination, with low computational complexity and no reconstruction or training requirement.

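The score-level fusion step described above can be sketched as a weighted sum of per-descriptor distance vectors; the entropy- or probability-derived weights are assumed given here, and `fuse_scores` is a hypothetical name for illustration:

```python
import numpy as np

def fuse_scores(distances, weights):
    """Score-level fusion: weight each descriptor's nearest-neighbour
    distance vector, sum them, and return the gallery index with the
    smallest fused distance."""
    fused = sum(w * d for w, d in zip(weights, distances))
    return int(np.argmin(fused))
```

For example, with two descriptors (say LPQ and multi-scale LBP distances) and equal weights, the identity with the smallest combined distance wins.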
  • New reversible full-embeddable information hiding method for vector quantisation indices based on locally adaptive complete coding list

    Page(s): 22 - 30
    PDF (902 KB)

    Steganography based on vector quantisation (VQ)-compressed indices is widely used in information hiding. In this study, the authors propose a new reversible information hiding method for VQ indices using an online-generated, locally adaptive complete coding list. The complete coding list guarantees that every VQ index can embed one or two secret bits, which efficiently increases the embedding capacity. Additionally, the authors propose a mixed coding method with an index position threshold that exploits the biased distribution of locally complete coding indices to reduce the bit rate. Experimental results demonstrate that the proposed method outperforms four existing information hiding methods in embedding capacity, bit rate and embedding efficiency.

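For context, the VQ index stream that such hiding methods embed into is produced by mapping each image block to its nearest codeword. A small numpy sketch (the codebook is assumed trained elsewhere, e.g. by the LBG algorithm; the coding-list embedding itself is not reproduced):

```python
import numpy as np

def vq_indices(blocks, codebook):
    """Map each flattened image block to the index of its nearest
    codeword under squared Euclidean distance."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(1)
```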
  • Towards blind detection of low-rate spatial embedding in image steganalysis

    Page(s): 31 - 42
    PDF (523 KB)

    Steganalysis of least significant bit (LSB) embedded images in the spatial domain has been investigated extensively over the past decade, and most well-known LSB steganography methods have been shown to be detectable. However, according to the latest findings in the area, two major issues remain hard to resolve: very low-rate (VLR) embedding and content-adaptive steganography. The problem of VLR embedding is generic to any steganalyser, while the issue of adaptive embedding depends on the specific hiding algorithm employed. The latter challenge has recently been brought back to LSB steganalysis by highly undetectable stego image steganography, which offers a content-adaptive embedding scheme for grey-scale images. The authors' new image steganalysis method analyses the relative norm of the image Clouds manipulated in an LSB embedding system. The method is a self-dependent image analysis and is capable of operating on low-resolution images. The proposed algorithm is applied to the image in the spatial domain through image Clouding, relative auto-decorrelation feature extraction and quadratic rate estimation, the main steps of the proposed analysis procedure. The authors then introduce new statistical features, Clouds-Min-Sum and Local-Entropies-Sum, which improve both the detection accuracy and the embedding-rate estimation. They analytically verify the functionality of the scheme. Their simulation results show that the proposed approach outperforms several well-known, powerful LSB steganalysis schemes in terms of true and false detection rates and mean squared error.

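The LSB embedding that such steganalysers target can be sketched in a few lines of numpy: one secret bit replaces the least significant bit of each cover pixel, so the stego image differs from the cover by at most ±1 per pixel.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Replace the least significant bit of the first len(bits) pixels."""
    stego = cover.copy()
    b = np.asarray(bits, dtype=stego.dtype)
    stego[:b.size] = (stego[:b.size] & 0xFE) | b
    return stego

def lsb_extract(stego, n):
    """Read back the first n embedded bits."""
    return (stego[:n] & 1).tolist()
```

At very low rates only a small fraction of pixels carry a bit, which is why VLR embedding leaves so weak a statistical trace.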
  • Local brightness adaptive image colour enhancement with Wasserstein distance

    Page(s): 43 - 53
    PDF (1415 KB)

    Colour image enhancement is an important preprocessing phase of many image analysis tasks such as image segmentation and pattern recognition. This study presents a new local brightness adaptive variational model using the Wasserstein distance for colour image enhancement. Under the perceptually inspired variational framework, the proposed energy functional consists of an improved contrast energy term and a Wasserstein dispersion energy term. To better adjust the image dynamic range, the authors propose a local brightness adaptive contrast energy term that uses the average brightness of a local image patch as the local brightness indicator. To restore the image's true colours, a Wasserstein distance-based dispersion energy term measures the statistical similarity between the original and enhanced images. The proposed energy functional is minimised using a gradient descent algorithm. Two objective measures quantitatively assess the enhancement quality. Experimental results demonstrate the efficiency of the proposed model for removing colour cast and haze, enhancing contrast, recovering details and equalising low-key images.

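The Wasserstein dispersion term above measures statistical similarity between image distributions. For 1-D histograms on a common bin grid, the W1 distance reduces to the L1 difference of the cumulative distributions (a standard identity; unit bin width assumed in this sketch):

```python
import numpy as np

def wasserstein_1d(p, q):
    """W1 (earth mover's) distance between two 1-D histograms defined
    on the same bins: L1 distance of their normalised CDFs."""
    p = p / p.sum()
    q = q / q.sum()
    return np.abs(p.cumsum() - q.cumsum()).sum()
```

Moving all mass two bins to the right costs exactly 2, matching the intuition of "work = mass × distance".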
  • Semantic image compression based on data hiding

    Page(s): 54 - 61
    PDF (765 KB)

    This study proposes a novel scheme for semantic image compression. A compressor first creates a compact image by gathering a subset of the pixels in the original image, and calculates estimation errors for the remaining pixels. A compressed image is then produced by embedding the estimation errors into the compact image using data hiding techniques. This way, the compressed image is made up of a small number of pixel values, and the original content remains roughly visible in the compressed image without any decompression tool. If a decompression tool is available, a user may reconstruct a high-quality image at the original size by exploiting the embedded data. Because the proposed scheme is compatible with both reversible and non-reversible data hiding techniques, either lossy or lossless semantic compression can be performed. With different parameters, the qualities of the compressed and decompressed images vary. Furthermore, the smoother the original image content, the better the compression–decompression performance.

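The compressor side described above — keep a subset of pixels, predict the rest, record estimation errors — can be sketched for a single image row. The neighbour-mean predictor here is an assumption for illustration; the paper's actual predictor and the data-hiding step are not specified in the abstract:

```python
import numpy as np

def compact_and_errors(row):
    """Keep even-indexed pixels as the compact signal; predict each
    odd-indexed pixel as the mean of its two kept neighbours and
    record the estimation error (later embedded via data hiding)."""
    kept = row[0::2].astype(int)
    pred = (kept[:-1] + kept[1:]) // 2
    odd = row[1::2][:pred.size].astype(int)
    return kept, odd - pred

def reconstruct_odd(kept, errors):
    """Decompressor side: re-run the predictor and add back the errors."""
    pred = (kept[:-1] + kept[1:]) // 2
    return pred + errors
```

Because the errors are stored exactly, re-running the predictor plus the errors recovers the dropped pixels losslessly; smoother content yields smaller errors, which is why smooth images compress better here.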
  • Segmentation with saliency map using colour and depth images

    Page(s): 62 - 70
    PDF (667 KB)

    This study proposes a segmentation method using colour and depth images, from which a saliency map is generated. With the saliency map obtained from both images, the salient foreground is extracted using adaptive thresholding of the saliency map, which reduces the performance degradation existing methods suffer on images with complex backgrounds. To enhance the edges of the foreground, an adaptive guided filter is also used. Normalised cut segmentation is then performed on the extracted and enhanced foreground to separate different objects. Experimental results on three different types of datasets show that the proposed method gives better segmentation results than existing methods, with the complex background well separated from the foreground.

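An adaptive threshold on a saliency map is typically a function of the map's own statistics. One common choice (an assumption here, in the spirit of Achanta et al.'s frequency-tuned saliency work, not necessarily this paper's rule) is a multiple of the mean saliency:

```python
import numpy as np

def salient_foreground(saliency, k=2.0):
    """Binary foreground mask: keep pixels whose saliency is at least
    k times the mean of the saliency map."""
    return saliency >= k * saliency.mean()
```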
  • Efficient image sharpening and denoising using adaptive guided image filtering

    Page(s): 71 - 79
    PDF (755 KB)

    Enhancing the sharpness and reducing the noise of blurred, noisy images are crucial functions of image processing. Widely used unsharp masking filter-based approaches suffer from halo artefacts and/or noise amplification, while noise- and halo-free adaptive bilateral filtering (ABF) is computationally intractable. In this study, the authors present an efficient sharpening algorithm inspired by guided image filtering (GF). The authors' proposed adaptive GF (AGF) integrates the shift-variant technique, a part of ABF, into a guided filter to render crisp, sharpened outputs. Experiments showed the superiority of the proposed algorithm over existing algorithms. The proposed AGF sharply enhances edges and textures without causing halo artefacts or noise amplification, and it is efficiently implemented using a fast linear-time algorithm.

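For reference, the plain guided filter (He et al.) that AGF builds on fits a local linear model q = aI + b between guidance I and input p using box-filtered statistics. A 1-D sketch (the paper's shift-variant adaptive extension is not reproduced here):

```python
import numpy as np

def box1d(x, r):
    """Moving average with window 2r + 1 (zero-padded at the edges)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    return np.convolve(x, k, mode='same')

def guided_filter_1d(I, p, r=2, eps=1e-2):
    """Plain 1-D guided filter: local linear coefficients a, b from
    windowed means/variances, then averaged and applied to I."""
    mI, mp = box1d(I, r), box1d(p, r)
    a = (box1d(I * p, r) - mI * mp) / (box1d(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box1d(a, r) * I + box1d(b, r)
```

The regulariser eps controls edge preservation versus smoothing; the linear-time cost comes from the fact that every step is a box filter.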
  • High-speed quantile-based histogram equalisation for brightness preservation and contrast enhancement

    Page(s): 80 - 89
    PDF (992 KB)

    In this study, the authors introduce a new histogram equalisation-based contrast enhancement method called high-speed quantile-based histogram equalisation (HSQHE), suitable for high-contrast digital images. The proposed method is an effective tool for dealing with the 'mean-shift' problem, a common problem with histogram equalisation-based contrast enhancement methods. The main idea of HSQHE is to divide the input image histogram into two or more sub-histograms, where the segmentation is based on quantile values. Since the histogram segmentation is based on quantile values, the entire spectrum of grey levels always plays a role in the enhancement process. In addition, the proposed method does not require recursive segmentation of the histogram, as many other methods do, and hence requires less time for segmentation. The experimental results show that the proposed HSQHE method performs better than other existing methods in the literature. In addition, it preserves image brightness more accurately than the prevailing state of the art and takes less time than the other methods.

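The core idea — split the grey-level histogram at a quantile and equalise each part within its own range, so the overall mean cannot drift far — can be sketched as a two-segment, median-split simplification (the paper allows more segments and specific quantile choices not reproduced here):

```python
import numpy as np

def quantile_histeq(img, q=0.5):
    """Bi-histogram equalisation with a quantile split: pixels at or
    below the q-quantile are equalised into [min, t], the rest into
    [t + 1, max], preserving overall brightness."""
    t = int(np.quantile(img, q))
    out = np.empty_like(img)
    for lo, hi, mask in [(int(img.min()), t, img <= t),
                         (t + 1, int(img.max()), img > t)]:
        vals = img[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=256, range=(0, 256))
        cdf = hist.cumsum() / vals.size           # CDF of this segment
        out[mask] = (lo + cdf[vals] * (hi - lo)).astype(img.dtype)
    return out
```

Because each segment is stretched only within its own grey-level range, dark pixels stay dark and bright pixels stay bright, which is how the mean-shift problem is avoided.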

Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.



Publisher
IET Research Journals
iet_ipr@theiet.org