IET Image Processing

Issue 7 • October 2012

  • Speckle reduction with edges preservation for ultrasound images: using function spaces approach

    Page(s): 813 - 821

    In this study, a novel speckle reduction method is proposed for ultrasound images. This denoising method is designed to preserve both the edges and the structural details of the image. Speckle noise is suppressed, without smearing the edges, by extending the smoothness of the image in the wavelet-based Hölder spaces. A comparison with other well-known speckle-smoothing methods is provided via the size of the Besov norm. The authors validate the proposed method using synthetic data and both simulated and real ultrasound images. Experiments demonstrate the performance improvement of the proposed method over other state-of-the-art methods in terms of image quality and edge-preservation indices.
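
The Hölder-space smoothness criterion is specific to this paper, but the underlying wavelet-shrinkage idea it builds on can be sketched with a single-level Haar transform whose detail coefficients are soft-thresholded. This is a generic illustration, not the authors' method; the threshold value is an arbitrary assumption.

```python
def haar_step(x):
    """One level of the orthonormal Haar transform of an even-length list."""
    s = 2 ** -0.5
    approx = [(x[2*i] + x[2*i+1]) * s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) * s for i in range(len(x) // 2)]
    return approx, detail

def soft(v, t):
    """Soft-threshold a single coefficient: shrink towards zero by t."""
    return 0.0 if abs(v) <= t else (v - t if v > 0 else v + t)

def denoise_haar(x, t):
    """Single-level Haar shrinkage: threshold the detail coefficients
    (where noise concentrates), then invert the transform."""
    a, d = haar_step(x)
    d = [soft(v, t) for v in d]
    s = 2 ** -0.5
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) * s, (ai - di) * s]
    return out
```

Small oscillations below the threshold are flattened, while large steps (edges) pass through mostly intact, which is the behaviour the paper's Hölder-space criterion refines.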

  • Image retrieval and classification using adaptive local binary patterns based on texture features

    Page(s): 822 - 830

    In this study, adaptive local binary patterns (ALBP) are proposed for image retrieval and classification. ALBP builds on local binary patterns using texture features, from which two descriptors are derived: an adaptive local binary patterns histogram (ALBPH) and a gradient for adaptive local binary patterns (GALBP). These two texture features are the most useful for describing relationships within a local neighbourhood. ALBPH captures the texture distribution of an image by identifying and employing the difference between the centre pixel and the neighbouring pixel values. In GALBP, the gradient of each pixel is computed, and the sum of the gradients for each ALBP number is adopted as an image feature. A set of colour and greyscale images was used to generate a variety of image subsets, and image retrieval and classification experiments were then carried out for analysis and comparison with other methods. The experimental results show that the proposed feature extraction method can effectively describe the characteristics of images with regard to texture and image type, and the image retrieval and classification experiments also produced better results than other methods.
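
For reference, the standard (non-adaptive) LBP code and histogram that ALBP extends can be sketched as follows; the 8-neighbour layout and 256-bin histogram are the textbook formulation, not the paper's adaptive variant.

```python
def lbp_code(img, r, c):
    """Standard 8-neighbour LBP code at pixel (r, c): set a bit where the
    neighbour is >= the centre, packing the bits clockwise from top-left."""
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

ALBPH replaces this fixed thresholding with differences adapted to local statistics, and GALBP additionally accumulates per-pixel gradients per LBP code.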

  • Triplet markov fields with edge location for fast unsupervised multi-class segmentation of synthetic aperture radar images

    Page(s): 831 - 838

    The triplet Markov fields (TMF) model is well suited to multi-class segmentation of non-stationary synthetic aperture radar (SAR) images. In this study, an algorithm using TMF with edge location for fast unsupervised multi-class segmentation of SAR images is proposed. The new segmentation algorithm can locate edges accurately at reasonable computational cost. First, to handle the statistical characteristics of multiplicative speckle noise in SAR images, an edge-strength measure based on the ratio of exponentially weighted averages operator is introduced into the Turbopixels algorithm to obtain a superpixel graph with accurately located edges. To enhance computational efficiency and suppress speckle, the pixel-level TMF model is generalised to the superpixel graph, and the corresponding potential-energy function and maximisation-of-posterior-marginal segmentation formula are derived. Experimental results on synthetic and real SAR images show that the proposed algorithm locates edges accurately in multi-class segmentation while enhancing computational efficiency; in particular, on large SAR images it produces robust and efficient segmentation results.

  • General method for edge detection based on the shear transform

    Page(s): 839 - 853

    A method for detecting edges is proposed in this study. First, a traditional edge-detection operator is combined with the shear transform; because shear filters treat directions more favourably, the shear transform makes directional edges easier for traditional operators to detect. The edge information from the different directions is then fused so that the directions complement each other, and a corresponding threshold, obtained from receiver operating characteristic curves, is used to refine the edges and yield the optimal result. Moreover, the proposed method can be considered a general one, since it is applicable to other edge-detection operators. The experimental results indicate that the method outperforms traditional ones in both the effectiveness of edge detection and the ability to reject noise.

  • Characterisation of tool marks on cartridge cases by combining multiple images

    Page(s): 854 - 862

    The characteristic marks left by firearms on cartridge cases (CCs) during firing are used by forensic experts to identify CCs fired from the same firearm; however, the nature of the tool marks on the CCs is not well understood. The objective of this study is to separate the tool marks, i.e. the signal, from the background and the noise, and thereby understand its peculiarities. To extract the signal, which is much weaker than the noise, second-order derivatives of three-dimensional images of the surfaces of a series of CCs fired from the same firearm were used. Instead of using a rigid-body transformation, the images are first registered based on estimated planar homographies; then unwanted areas, including the headstamp, are masked out. The aligned images are merged by weighted averaging using estimated signal and noise variances, which are also used to calculate the signal and noise spectra. Wiener filtering is applied to further increase the signal-to-noise ratio. The effectiveness of the method is demonstrated on real data. It is also shown that breech-face marks exist on the outer ring, a feature typically ignored in automatic matching. In addition, visual results obtained by reconstructing the surface from second-order derivatives are presented.

  • Sharp feature extraction in point clouds

    Page(s): 863 - 869

    Sharp feature extraction plays an important role in point cloud processing. In this study, a novel method for extracting sharp features from point clouds is presented. For each point in a given point cloud, the displacement between the point and the weighted average position of its neighbourhood is calculated, and the point is labelled a candidate sharp feature point if the displacement is salient. The normal directions of the candidate sharp feature points are estimated by local principal component analysis, and tensor voting is performed to refine the normal estimates. The displacement between a point and its locally weighted average position is then projected along the estimated normal direction, and the points with extreme projection values are taken as the final sharp feature points. Implementation of the proposed method on both synthesised and real scanned point clouds shows that it is effective and robust for sharp feature extraction.
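
The first stage (displacement from the local average, thresholded for salience) can be sketched as below. This is a simplified reading of the abstract: uniform neighbour weights, a fixed radius, and a fixed salience threshold are all assumptions, and the normal-estimation and tensor-voting refinements are omitted.

```python
import math

def displacement_from_centroid(points, idx, radius):
    """Displacement vector between points[idx] and the (uniformly weighted)
    average position of its neighbours within `radius` (itself included)."""
    px, py, pz = points[idx]
    nbrs = [q for q in points if math.dist(q, (px, py, pz)) <= radius]
    cx = sum(q[0] for q in nbrs) / len(nbrs)
    cy = sum(q[1] for q in nbrs) / len(nbrs)
    cz = sum(q[2] for q in nbrs) / len(nbrs)
    return (px - cx, py - cy, pz - cz)

def candidate_sharp_points(points, radius, tau):
    """Indices whose displacement magnitude exceeds the threshold tau."""
    out = []
    for i in range(len(points)):
        d = displacement_from_centroid(points, i, radius)
        if math.hypot(*d) > tau:
            out.append(i)
    return out
```

On a flat or smoothly sampled region the point sits near its neighbourhood centroid, so the displacement is small; at creases and corners the centroid pulls away from the point, which is what the salience test detects.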

  • Concentric-circle-based camera calibration

    Page(s): 870 - 876

    In recent years, camera calibration and three-dimensional (3D) reconstruction have attracted increasing attention in the vision community and found wide application in vision-based robotics. This article discusses a new technique for camera calibration based on concentric circles, whose positions and sizes can be arbitrary. The key problem is how to efficiently estimate the projections of the circle centres from the images. The solution is formulated as a first-order polynomial eigenvalue problem (PEP) by considering the pole-polar relationship in the image. The camera can then be calibrated with the images of the circular points or the vanishing points; accordingly, two algorithms are suggested. Finally, both numerical simulations and real-data experiments validate the algorithms.

  • Fast image interpolation using the bilateral filter

    Page(s): 877 - 890

    In this study, the authors propose a new image interpolation technique that uses the bilateral filter to estimate the unknown high-resolution pixels. Compared with least-squares estimation, a small-kernel bilateral filter has the advantages of fast computation and stability. The range distance of the bilateral filter is estimated using a novel maximum a posteriori estimation, in order to consider both the diagonal and the vertical-horizontal correlations. For global consistency, the pixel-based soft-decision estimation (SAI) is proposed to constrain the consistency of edge statistics within a local window. Experimental results show that images interpolated using the proposed algorithm give average peak signal-to-noise ratio (PSNR) improvements of 0.462, 0.413, 0.532 and 0.036 dB over bicubic interpolation, linear minimum mean-squares error estimation, new edge-directed interpolation (NEDI) and SAI, respectively. The subjective quality agrees with the PSNR as well. More importantly, the proposed algorithm is fast, requiring around 1/60 of the computational cost of SAI.
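
The bilateral filter at the heart of the method weights each neighbour by both spatial distance and intensity difference, so edges are not averaged across. A minimal per-pixel sketch follows; the kernel size and the two sigmas are illustrative assumptions (the paper estimates the range parameter by MAP estimation rather than fixing it).

```python
import math

def bilateral_filter_pixel(img, r, c, k=2, sigma_s=2.0, sigma_r=25.0):
    """Bilateral-filter estimate at (r, c): a Gaussian spatial kernel
    modulated by a Gaussian range (intensity-difference) kernel."""
    centre = img[r][c]
    num = den = 0.0
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(img) and 0 <= cc < len(img[0]):
                w = (math.exp(-(dr * dr + dc * dc) / (2 * sigma_s ** 2))
                     * math.exp(-(img[rr][cc] - centre) ** 2 / (2 * sigma_r ** 2)))
                num += w * img[rr][cc]
                den += w
    return num / den
```

Pixels across a strong edge get a near-zero range weight, which is why a small-kernel bilateral filter can interpolate without the edge blurring that plain Gaussian averaging would cause.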

  • Colourisation in Yxy colour space for purple fringing correction

    Page(s): 891 - 900

    Purple fringing is an undesirable colour artefact in images acquired by digital cameras. As digital cameras become smaller while supporting higher-quality images, purple fringing has become a serious problem. This study proposes an effective purple fringing correction method using colourisation in the Yxy colour space. In the proposed method, a chromaticity-diagram-based method is used to detect the purple fringed region (PFR), and this region is then corrected by colourisation, with the colours of intact pixels near the PFR used as the seed-pixel colours. Experimental results on a number of test images with purple fringing show that regions corrected by the proposed method look more natural to human observers than those corrected by other existing methods.
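
Working in Yxy separates luminance (Y) from chromaticity (x, y), which is what allows fringe detection on the chromaticity diagram. A standard linear-RGB (sRGB primaries, D65) conversion can be sketched as below; the specific matrix is the common sRGB-to-XYZ one and may differ from whatever camera characterisation the paper assumes.

```python
def rgb_to_Yxy(r, g, b):
    """Linear RGB (sRGB primaries, D65, values in [0, 1]) to Yxy:
    Y carries luminance; (x, y) is the chromaticity used to flag purple hues."""
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = X + Y + Z
    if s == 0:
        return 0.0, 0.3127, 0.3290  # black: return the D65 white-point chromaticity
    return Y, X / s, Y / s
```

Purple-fringed pixels cluster in a purple region of the (x, y) plane regardless of their brightness, so thresholding chromaticity is more robust than thresholding RGB directly.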

  • Colour demosaicking for complementary colour filter array using spectral and spatial correlations

    Page(s): 901 - 909

    The complementary colour filter array (CCFA) widely used in consumer-level digital video cameras measures only one colour component per pixel, namely cyan, magenta, yellow or green. To infer complete colour information at each pixel, a demosaicking process is required. However, most demosaicking methods proposed in the literature were developed for the Bayer CFA, which has a quite different pattern from the CCFA. This study presents a high-quality edge-adaptive colour interpolation approach for the CCFA. Two estimates of the luminance signal are made at each pixel under different hypotheses on the edge direction, simply using bilinear interpolation; not only the sampled signal but also these two estimates are employed to assist the final interpolation. A post-processing step that suppresses demosaicking artefacts by adaptive filtering is also presented. Experimental results confirm the performance of the algorithm both objectively and subjectively.

  • Visual sensitivity-based low-bit-rate image compression algorithm

    Page(s): 910 - 918

    In this study, the authors present a visual-sensitivity-based low-bit-rate image compression algorithm. The algorithm combines visual sensitivity with compression techniques so that a higher compression rate can be achieved with satisfactory visual quality. In the coding process, the input image is divided into blocks, and each block is classified as an edge block (EB), a textural block (TB) or a flat block (FB). For EBs, which matter most to the subjective quality of decoded images, the standard Joint Photographic Experts Group (JPEG) coding scheme with a tolerant quantisation step is employed, so that the blocking artefacts caused by quantisation error are restricted to an acceptable level. For FBs, a skipping scheme is employed to save bits; the skip blocks it identifies are coded with reference to the already-reconstructed regions of the image during encoding. Owing to the masking effect of the human visual system on high-frequency textures, standard JPEG coding with a larger quantisation step is employed on down-scaled versions of the non-skip blocks and TBs. Experimental results show the superior performance of the method in terms of both compression efficiency and visual quality.
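
One plausible way to implement the EB/TB/FB classification step is sketched below; the abstract does not give the actual measures or thresholds, so the variance test, the gradient-concentration test, and both threshold values are purely illustrative assumptions.

```python
def classify_block(block, t_flat=25.0, t_edge=0.5):
    """Classify a block as 'FB' (flat), 'EB' (edge) or 'TB' (texture).
    Flat: low intensity variance.  Edge: a large share of the block's
    gradient energy is concentrated in a few pixels.  Otherwise: texture."""
    pixels = [p for row in block for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    if var < t_flat:
        return 'FB'
    # Central-difference gradient magnitudes at interior pixels
    grads = []
    for r in range(1, len(block) - 1):
        for c in range(1, len(block[0]) - 1):
            gx = block[r][c + 1] - block[r][c - 1]
            gy = block[r + 1][c] - block[r - 1][c]
            grads.append(abs(gx) + abs(gy))
    grads.sort(reverse=True)
    top = sum(grads[:max(1, len(grads) // 4)])
    total = sum(grads) or 1
    return 'EB' if top / total > t_edge else 'TB'
```

The intent matches the abstract's pipeline: FBs can be skipped, EBs get the gentler quantisation, and TBs tolerate a coarser, down-scaled JPEG pass.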

  • Classification of surveillance video objects using chaotic series

    Page(s): 919 - 931

    The authors propose a framework for binary classification of challenging objects (e.g. incomplete, partially occluded, background-overlapped, scaled or outdoor) in surveillance video. The framework uses feature binding of MPEG-7 visual descriptors via chaotic series simulation. Diverse video objects are tested in multiple binary classifiers for generic classes (e.g. has_person, has_group_of_persons, has_vehicle and has_unknown). Object classification accuracy is verified with both low- and high-dimensional chaotic-series-based feature binding. With high-dimensional chaotic series simulation: (i) the classification accuracy improves significantly, averaging 83% compared with 62% for the original MPEG-7 visual descriptors; (ii) vehicle objects are clustered well, leading to above 99% accuracy for vehicles against other objects; and (iii) drifts in the high-dimensional chaotic series, caused by transients, allow the training feature vector to capture subtle variations in the MPEG-7 descriptor coefficients of video objects within a class. A higher variance in the training feature vector, obtained using high-dimensional chaotic series simulation, manifests these subtle variations.

  • Bayesian image denoising using two complementary discontinuity measures

    Page(s): 932 - 942

    This study introduces a novel Bayesian image denoising method using two complementary discontinuity measures. The first is the spatial gradient, which has been widely used as a discontinuity measure. Although the spatial-gradient measure effectively preserves edge components, it is inadequate for detecting significant discontinuities in noisy images because of its over-locality; an additional discontinuity measure that detects contextual discontinuities is therefore required for feature preservation. The local-inhomogeneity measure provides the degree of uniformity in small regions and can effectively detect the locations of significant discontinuities. The authors therefore propose a Bayesian denoising framework in which the two complementary discontinuity measures are carefully combined to create the prior probabilities. The experimental results show that the proposed method not only achieves a high peak signal-to-noise ratio (PSNR) gain on noisy images but also reduces noise effectively while preserving edge components.
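
The two measures can be sketched as follows. The central-difference gradient is standard; reading "local inhomogeneity" as the standard deviation over a small window is an interpretation of the abstract, and the window size is an assumption.

```python
def spatial_gradient(img, r, c):
    """First measure: central-difference gradient magnitude (very local)."""
    gx = (img[r][c + 1] - img[r][c - 1]) / 2.0
    gy = (img[r + 1][c] - img[r - 1][c]) / 2.0
    return (gx * gx + gy * gy) ** 0.5

def local_inhomogeneity(img, r, c, k=1):
    """Second measure: standard deviation over the (2k+1)x(2k+1) window,
    a less local cue that flags contextual discontinuities."""
    win = [img[i][j] for i in range(r - k, r + k + 1)
                     for j in range(c - k, c + k + 1)]
    m = sum(win) / len(win)
    return (sum((v - m) ** 2 for v in win) / len(win)) ** 0.5
```

Isolated noise spikes inflate the gradient but barely move the window statistic, whereas a genuine region boundary raises both, which is why combining them makes a better edge prior than either alone.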

  • Level-base compounded logarithmic curve function for colour image enhancement

    Page(s): 943 - 958

    In this study, the authors present a new strategy for illumination-compensation-based contrast enhancement. Unlike the traditional pixel-to-pixel transformation, the proposed method offers a level-to-level framework that generates a reference intensity level and a given target intensity level. Fundamentally, traditional illumination compensation algorithms such as histogram equalisation and the log and gamma transformations are trade-off strategies facing the same dilemma; for example, the log function compresses the dynamic range of images with large variations in pixel values. The proposed method is an integration algorithm: illumination compensation is treated as the transformation from the reference intensity to the chosen target intensity, and contrast is enhanced by the given algebraic definition of the characteristic values of each pixel. The advantages are that the intensity adjustment is accomplished in the native phase according to the varying characteristic intensity values, clarifying details in darker areas while preserving highlight areas, reducing halo artefacts, maintaining colour constancy, removing colour casts and restoring colour. As the experimental results show, the proposed method outperforms other methods in subjective evaluation with contour plots and in objective evaluations by the universal quality index, the structural similarity index and the average of standard deviations.
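
The dynamic-range compression the abstract criticises in the log transformation is easy to see in the classic pixel-wise mapping s = c·log(1 + r), sketched below for 8-bit levels (this is the baseline being improved upon, not the authors' level-based method).

```python
import math

def log_transform(img, L=256):
    """Classic pixel-wise log mapping s = c*log(1 + r), with c chosen so
    the maximum grey level L-1 maps to itself.  Dark values are stretched
    and bright values compressed, regardless of image content."""
    c = (L - 1) / math.log(L)
    return [[c * math.log(1 + p) for p in row] for row in img]
```

Because the same curve is applied to every pixel, bright regions lose contrast whenever dark regions are lifted, which is exactly the trade-off a level-to-level framework tries to avoid.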

  • Super-resolution using an enhanced Papoulis-Gerchberg algorithm

    Page(s): 959 - 965

    Super-resolution (SR) is the process of generating a raster image with a higher resolution than its source. The Papoulis-Gerchberg (P-G) algorithm, known as one of the most important approaches to signal reconstruction, has been extensively used to reconstruct high-resolution information from low-resolution images by imposing spatial- and frequency-domain constraints in an iterative process. Although this procedure is easy to implement, it converges slowly and requires prior knowledge of the image bandwidth. To overcome these problems, a new procedure is proposed to enhance the original P-G algorithm in terms of speed and performance by using dynamic properties in the frequency domain. The proposed technique also adjusts the bandwidth automatically during each iteration. Computer simulations illustrate the effectiveness of the proposed method.

  • Single image fog removal using anisotropic diffusion

    Page(s): 966 - 975

    In this study, a novel and efficient fog removal algorithm is proposed. Fog formation is due to attenuation and airlight: attenuation reduces the contrast, and airlight increases the whiteness in the scene. The proposed algorithm uses anisotropic diffusion to recover scene contrast. Simulation results demonstrate that it outperforms prior state-of-the-art algorithms in terms of contrast gain, percentage of saturated pixels and computation time. The algorithm is independent of the fog density, does not require user intervention, and can handle colour as well as grey images. Along with the RGB (red, green and blue) colour model, it can work in the HSI (hue, saturation and intensity) model, which further reduces the computation. The algorithm has wide application in tracking and navigation, consumer electronics and the entertainment industry.
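
Anisotropic diffusion smooths within regions while halting at strong edges, which makes it a natural tool for estimating the smoothly varying airlight. The classic Perona-Malik scheme is sketched below; the paper's exact formulation, iteration count and conduction parameter are not given in the abstract, so those values here are assumptions.

```python
import math

def anisotropic_diffusion(img, iters=10, kappa=30.0, lam=0.2):
    """Classic Perona-Malik diffusion: the conduction coefficient
    g(d) = exp(-(d/kappa)^2) lets diffusion act inside smooth regions
    but suppresses it across strong edges.  Borders are left unchanged."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(iters):
        v = [row[:] for row in u]
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                flux = 0.0
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    d = u[r + dr][c + dc] - u[r][c]
                    g = math.exp(-(d / kappa) ** 2)
                    flux += g * d
                v[r][c] = u[r][c] + lam * flux
        u = v
    return u
```

A small bump in a flat region is diffused away, while a large intensity step (an object boundary) receives a near-zero conduction coefficient and survives.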

  • Improved decision-based detail-preserving variational method for removal of random-valued impulse noise

    Page(s): 976 - 985

    The authors propose an improved decision-based detail-preserving variational method (DPVM) for removal of random-valued impulse noise. In the denoising scheme, the adaptive centre-weighted median filter (ACWMF) is first improved by employing a variable-window technique to strengthen its detection ability in highly corrupted images. Based on the improved ACWMF, a fast iteration strategy is used to classify the noise candidates and label them with different noise marks. All the noise candidates are then restored in one pass by a weight-adjustable detail-preserving variational method, where the weights between the data-fidelity term and the smooth regularisation term of the convex cost function in the DPVM are decided by the noise marks; after minimisation, the restored image is obtained. Extensive simulation results show that the proposed method outperforms some existing algorithms in both visual and quantitative measurements. Moreover, the method is faster than some decision-based DPVMs, so it can easily be put to practical use.

  • Dynamic threshold-based keyframe detection and its application in rate control

    Page(s): 986 - 995

    This study proposes a new dynamic threshold model to detect keyframes for the coding of a video sequence. The proposed detection threshold is content adaptive: it dynamically updates its value at each frame by taking into consideration the statistical properties of the video sequence, the resolution and the quantisation parameters. The content-variation metric used in the proposed detection method utilises information already generated by the encoder during the encoding process, which makes the detection fast. When the proposed scheme is applied to rate control, the sizes of the groups of pictures adapt to the video content, which removes the temporal redundancy among frames more effectively and translates into improved coding efficiency. The experimental results show that the proposed method achieves a peak signal-to-noise ratio (PSNR) improvement.
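
A dynamic threshold of this kind can be illustrated with a running mean-plus-deviation rule on the per-frame content-variation metric. This is only a generic sketch: the paper's actual threshold model also folds in resolution and quantisation parameters, and the alpha and warm-up values below are arbitrary assumptions.

```python
def detect_keyframes(metric, alpha=1.5, warmup=3):
    """Flag frame i as a keyframe when its content-variation metric exceeds
    mean + alpha*std of the metrics seen so far, so the threshold adapts
    to the sequence instead of being fixed."""
    keys, hist = [], []
    for i, m in enumerate(metric):
        if len(hist) >= warmup:
            mean = sum(hist) / len(hist)
            std = (sum((v - mean) ** 2 for v in hist) / len(hist)) ** 0.5
            if m > mean + alpha * std:
                keys.append(i)
        hist.append(m)
    return keys
```

A sequence with naturally high motion raises its own threshold, so only genuine content changes, not routine motion, start a new group of pictures.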

  • Fast computation of Zernike moments in polar coordinates

    Page(s): 996 - 1004

    Zernike moments (ZMs) are widely used in many image analysis and pattern recognition problems because of their superiority over other moments. However, they suffer from high computation cost and inherent errors. Previous research has shown that computing ZMs in a polar coordinate system dramatically improves their reconstruction accuracy and invariance properties. In this study, the authors first modify a direct method for computing ZMs in polar coordinates and present a recursive relation. The study then presents an algorithm for fast computation of ZMs based on an improved polar pixel tiling scheme. Owing to the symmetry property, the ZMs can be obtained by computing the radial polynomials over only one-sixteenth of the circle, which means that the number of pixels involved in the computation is only 6.25% of that in the previous method; this leads to a significant reduction in computational complexity. A detailed comparison with the conventional method is performed, and the results show the superiority of the proposed method.
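
The radial polynomials R_nm(ρ) at the core of every ZM computation are given by the direct factorial formula below. This is the standard definition the paper's recursive relation accelerates, not the paper's own recursion or tiling scheme.

```python
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho) by the direct factorial formula,
    valid when n >= |m| and n - |m| is even."""
    m = abs(m)
    assert n >= m and (n - m) % 2 == 0
    total = 0.0
    for s in range((n - m) // 2 + 1):
        coeff = ((-1) ** s * factorial(n - s)
                 / (factorial(s)
                    * factorial((n + m) // 2 - s)
                    * factorial((n - m) // 2 - s)))
        total += coeff * rho ** (n - 2 * s)
    return total
```

The factorials make the direct formula expensive and numerically delicate at high orders, which motivates both the recursive relation and the exploitation of the polynomials' circular symmetry (only one-sixteenth of the circle needs evaluating).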

  • Automatic detection of exudates and optic disk in retinal images using curvelet transform

    Page(s): 1005 - 1013

    This work presents a curvelet-based algorithm for detection of the optic disk (OD) and exudates in low-contrast images. The algorithm, which is composed of three main stages, does not require user initialisation and is robust to changes in the appearance of retinal fundus images. First, bright candidate lesions in the image are extracted by employing the discrete curvelet transform (DCUT) and modifying the curvelet coefficients of the enhanced retinal image. For this purpose, the authors apply a new bright-lesion enhancement to the green plane of the retinal image to obtain adequate illumination normalisation in the regions near the OD and to increase the brightness of lesions in dark areas such as the fovea. Next, the authors introduce a new OD detection and boundary extraction method based on the DCUT and the level-set method. Finally, a bright lesions map (BLM) image is generated, and to distinguish exudates from the OD (a false detection for the final exudates detection), the candidate pixels in the BLM that are not in the OD region detected in the previous step are taken as actual bright lesions. The sensitivity and specificity of the authors' exudate detection method are 98.4 and 90.1%, respectively, and the average accuracy of their OD boundary extraction method is 94.51%.

  • Biogeography-based optimisation search algorithm for block matching motion estimation

    Page(s): 1014 - 1023

    Global optimisation methods such as the genetic algorithm and particle swarm optimisation have been applied to motion estimation to prevent the search from being trapped in local minima; however, their computational complexity is very high. To overcome this problem, a novel search algorithm for block motion estimation based on biogeography-based optimisation (BMEBBO) is proposed in this study. Since biogeography-based optimisation (BBO) has few initial parameters, fast convergence and high search precision, BMEBBO can search for the global minimum effectively through the migration and mutation operations of BBO. In addition, BMEBBO with chaotic search (BBOCHAO) is proposed to improve the local search ability of BMEBBO, and a multi-mode algorithm combining BBOCHAO with diamond search (BBOCDS) is proposed to improve the speed of BBOCHAO. Experimental results show that BBOCHAO yields high prediction quality and low fluctuation of video quality, especially for violent motion. BBOCDS remarkably decreases the computational complexity of BBOCHAO with little sacrifice of peak signal-to-noise ratio; moreover, BBOCDS is faster than the test zone search algorithm in a scalable video coding implementation with little sacrifice in the rate-distortion sense.

  • Acceleration of fractal image compression using fuzzy clustering and discrete-cosine-transform-based metric

    Page(s): 1024 - 1030

    The encoding step in fractal image compression is very time consuming, because a large number of sequential searches through a list of domains is needed to find the best match for a given range block. Adaptive domain clustering is one way to overcome this computational burden, and a new metric requiring fewer operations for domain-range block comparison is also fruitful. In this study, range and domain blocks are categorised by a fuzzy c-means clustering approach and compared using a new metric based on discrete cosine transform coefficients. Experimental results show that, with image pixels clustered into five clusters, the encoding step was 8.88 times faster than the full-search method (no clustering) at the expense of some reduction in decoded image quality, whereas the proposed method with the same number of clusters speeds up the encoding by a factor of 45 with lower PSNR decay.
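
A DCT-based block metric can be cheap because only a few low-frequency coefficients need comparing, and skipping the DC term makes the comparison invariant to the brightness offset that fractal coding absorbs anyway. The sketch below is illustrative: the paper's actual metric and coefficient selection are not specified in the abstract, and the naive O(N^4) DCT here would be replaced by a fast transform in practice.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (orthonormal scaling)."""
    N = len(block)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

def dct_distance(a, b, k=2):
    """Cheap domain-range metric: squared difference over the k x k
    low-frequency DCT coefficients, skipping the DC term so the metric
    is invariant to a constant brightness offset."""
    A, B = dct2(a), dct2(b)
    return sum((A[u][v] - B[u][v]) ** 2
               for u in range(k) for v in range(k) if (u, v) != (0, 0))
```

Two blocks differing only by a constant offset score (near) zero, while structurally different blocks score high, which is the property a fractal encoder needs when shortlisting domain blocks per cluster.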


Aims & Scope

The range of topics covered by IET Image Processing includes areas related to the generation, processing and communication of visual information.

Publisher
IET Research Journals
iet_ipr@theiet.org