
IEE Proceedings - Vision, Image and Signal Processing

Issue 1 • Date 28 Feb. 2005


Displaying Results 1 - 15 of 15
  • Automated estimation of rock fragment distributions using computer vision and its application in mining

    Publication Year: 2005, Page(s): 1 - 8
    Cited by: Papers (3)
    PDF (1893 KB)

    The size distribution of rock fragments obtained from blasting and crushing in the mining industry has to be monitored for optimal control of a variety of processes before the final grinding, milling and froth flotation stages. Whenever feasible, mechanical sieving is the routine procedure for determining the cumulative rock weight distribution on conveyor belts or free-falling off the end of transfer chutes. This process is tedious and very time consuming, even more so if a complete set of sieving meshes is used. A computer vision technique is proposed, based on a series of segmentation, filtering and morphological operations specially designed to determine rock fragment sizes from digital images. The final step uses an area-based approach to estimate rock volumes. The segmentation technique was implemented, and the cumulative rock volume distributions obtained from this approach were compared to the mechanical fragment distributions. The technique yielded rock distribution curves that represent an alternative to the mechanical sieving distributions.
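The final, area-based step of the pipeline above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the segmentation stage has already produced per-fragment pixel areas, treats each fragment as roughly equiaxed so that volume scales as area^1.5, and builds a cumulative volume-passing curve (the function and variable names are hypothetical).

```python
def cumulative_volume_distribution(areas_mm2):
    """Return (equivalent diameter, cumulative volume fraction) pairs,
    smallest fragment first, from segmented fragment areas."""
    # Equivalent diameter of a circle with the same area; volume ~ A**1.5.
    sized = sorted((2 * (a / 3.14159265) ** 0.5, a ** 1.5) for a in areas_mm2)
    total = sum(v for _, v in sized)
    passing, cum = [], 0.0
    for d, v in sized:
        cum += v
        passing.append((d, cum / total))
    return passing

# Four hypothetical fragment areas in mm^2.
curve = cumulative_volume_distribution([10.0, 40.0, 90.0, 160.0])
```

The curve is what gets compared against the mechanical sieving distribution: each sieve mesh size corresponds to a point on the diameter axis.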

  • Novel embedded image coding algorithms based on wavelet difference reduction

    Publication Year: 2005, Page(s): 9 - 19
    Cited by: Papers (3)
    PDF (1601 KB)

    Wavelet difference reduction (WDR) has recently been proposed as a method for efficient embedded image coding. In this paper, the WDR algorithm is analysed and four new techniques are proposed to either reduce its complexity or improve its rate distortion (RD) performance. The first technique, dubbed modified WDR-A (MWDR-A), focuses on improving the efficiency of the arithmetic coding (AC) stage of the WDR. Based on experiments with the statistics of the output symbol sequence, it is shown that the symbols can either be arithmetic coded under different contexts or output without AC. In the second technique, MWDR-B, the AC stage is dropped from the coder. By employing MWDR-B, up to 20% of coding time can be saved without sacrificing the RD performance, when compared to WDR. The third technique focuses on the improvement of RD performance using context modelling. A low-complexity context model is proposed to exploit the statistical dependency among the wavelet coefficients. This technique is termed context-modelled WDR (CM-WDR), and acts without the AC stage to improve the RD performance by up to 1.5 dB over WDR on a set of test images, at various bit rates. The fourth technique combines CM-WDR with AC and achieves a 0.2 dB improvement over CM-WDR in terms of PSNR. The proposed techniques retain all the features of WDR, including low complexity, region-of-interest capability, and embeddedness.
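The difference-reduction step that gives WDR its name can be sketched compactly: the gaps between successive significant-coefficient indices are written in binary with the leading 1 dropped (it is always 1, so it carries no information). This is only the index-coding core — a full WDR coder adds sign bits, refinement passes and the AC stage discussed above — and the function names are hypothetical.

```python
def wdr_encode(indices):
    """Difference reduction: code gaps between sorted significant-coefficient
    indices in binary, dropping each gap's (always-1) leading bit."""
    bits, prev = [], -1
    for i in sorted(indices):
        gap = i - prev              # gap is always >= 1
        bits.append(bin(gap)[3:])   # strip '0b' and the leading 1
        prev = i
    return bits

def wdr_decode(bits):
    """Invert difference reduction by restoring the leading 1 of each gap."""
    out, prev = [], -1
    for b in bits:
        prev += int('1' + b, 2)
        out.append(prev)
    return out

coded = wdr_encode([3, 7, 20])      # gaps 4, 4, 13 -> '00', '00', '101'
```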

  • Subspace-based approach for DOA estimation using pilot symbol channel identification

    Publication Year: 2005, Page(s): 20 - 28
    Cited by: Papers (2)
    PDF (620 KB)

    In this paper, antenna processing is applied in the radio localisation framework. For a source of interest, the multipath directions of arrival (DOA) are estimated and the shortest/direct propagation path is identified. First, an unstructured estimate of the sampled channel impulse response is derived by use of pilot symbols. The channel response samples are then separately processed to recover the DOA of the relative paths. For stationary channels, it is suggested that smoothing be used in the case of a uniform linear antenna array (ULA) to recover the source subspace. For fast fading channels, which is typically the case for high-speed mobiles, it is shown that using a MUSIC-like algorithm allows source subspace recovery by exploiting the gain diversity over a reduced number of slots with unchanged DOA and time delays. Separate processing of channel response samples reduces the constraint on antenna array size and allows comparison of the path lengths.
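For readers unfamiliar with subspace DOA estimation, a minimal textbook MUSIC search for a ULA is sketched below — not the paper's pilot-aided channel-identification scheme, just the subspace-recovery building block it relies on. Array geometry, SNR and the grid resolution are arbitrary assumptions.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """MUSIC peak search for a uniform linear array.
    X: (n_antennas, n_snapshots) complex data, d: spacing in wavelengths."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]      # sample covariance
    _, V = np.linalg.eigh(R)             # eigenvalues ascending
    En = V[:, :M - n_sources]            # noise subspace
    grid = np.linspace(-90.0, 90.0, 361)
    k = np.arange(M)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.radians(grid)))  # steering vectors
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)      # pseudospectrum
    return grid[np.argmax(P)]

# One source at +20 degrees, 8-element half-wavelength ULA, light noise.
rng = np.random.default_rng(0)
M, N, theta = 8, 200, 20.0
a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.radians(theta)))
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
est = music_doa(np.outer(a, s) + noise, 1)
```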

  • Multitapering and a wavelet variant of MFCC in speech recognition

    Publication Year: 2005, Page(s): 29 - 35
    Cited by: Papers (3)
    PDF (604 KB)

    In automatic speech recognition (ASR) based on hidden Markov models (HMM), it is necessary to obtain a spectral approximation with a reduced set of representation coefficients. The author introduces into the speech parameterisation scheme multitapering and a modification of the usual mel frequency cepstrum coefficient (MFCC) processing scheme based on wavelets on intervals (wavelet frequency coefficients, WFC). Phoneme recognition performance improvements over the MFCC, using multitapering and the WFC, have been experimentally verified on data from a speech database.
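The multitaper idea is to replace the single windowed periodogram inside the front end with an average over several orthogonal tapers, reducing estimator variance. A minimal sketch with sine tapers on one frame is given below (taper count, frame size and the test tone are assumptions; the paper applies this inside MFCC extraction).

```python
import numpy as np

def multitaper_spectrum(x, n_tapers=6):
    """Multitaper power spectrum: average the periodograms obtained
    with K orthogonal sine tapers instead of one Hamming window."""
    N = len(x)
    n = np.arange(N)
    S = np.zeros(N // 2 + 1)
    for k in range(1, n_tapers + 1):
        # k-th sine taper, orthonormal family on [0, N-1]
        w = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * k * (n + 1) / (N + 1))
        S += np.abs(np.fft.rfft(w * x)) ** 2
    return S / n_tapers

fs, N = 8000, 256
frame = np.sin(2 * np.pi * 1000 * np.arange(N) / fs)   # 1 kHz test tone
S = multitaper_spectrum(frame)
peak_hz = np.argmax(S) * fs / N
```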

  • Fast-converging minimum frequency error RLS lattice filter for narrowband interference with discrete frequency steps

    Publication Year: 2005, Page(s): 36 - 44
    Cited by: Papers (1)
    PDF (588 KB)

    This paper discusses the fundamental convergence and frequency tracking properties of the recursive-least-squares (RLS) lattice filter in the presence of narrowband interference (NBI) whose frequency varies in discrete steps. It is shown that, for filters of this type, the residual forward energy (RFE) after a frequency transition is a function of the input signal-to-noise ratio (SNR), the separation of the sequential frequencies and the filter time constant, and is exponentially decaying in nature. Reducing the RFE is important in removing unwanted transient artefacts from the desired signal. The convergence behaviour of the RLS algorithm based on a posteriori estimation errors is analysed under a number of conditions by varying the SNR and frequency step size. In order to limit the impact of the RFE while maintaining a minimum frequency tracking error in steady-state conditions, a fast-converging minimum frequency error (FCMFE) RLS lattice filter is suggested. For comparison, a least-mean-square (LMS) based gradient-adaptive lattice (GAL) filter is also analysed for this class of narrowband interference.
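To make the convergence behaviour concrete, here is a standard transversal RLS one-step predictor locking onto a narrowband (sinusoidal) input — a simplified stand-in for the paper's lattice form, showing the same exponential decay of the forward prediction error (order, forgetting factor and the test tone are assumptions).

```python
import numpy as np

def rls_predict(x, order=4, lam=0.99):
    """Standard RLS one-step-ahead linear predictor; returns the forward
    prediction errors, which decay as the filter converges on the tone."""
    w = np.zeros(order)
    P = np.eye(order) * 100.0            # inverse-correlation estimate
    errs = []
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]         # past samples, most recent first
        k = P @ u / (lam + u @ P @ u)    # gain vector
        e = x[n] - w @ u                 # a priori forward error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
        errs.append(e)
    return np.array(errs)

tone = np.sin(2 * np.pi * 0.05 * np.arange(400))  # narrowband input
e = rls_predict(tone)
```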

  • Airborne threat detection in navy IRST systems

    Publication Year: 2005, Page(s): 45 - 51
    Cited by: Papers (2)
    PDF (656 KB)

    A new algorithm is presented for the detection of airborne targets by means of infrared sensors operating in naval surveillance scenarios. The proposed algorithm consists of two steps: background clutter removal and detection over the residual clutter. The algorithm is fully automatic and its implementation does not require the tuning of any parameter other than the threshold for setting the probability of false alarm. The algorithm performance is investigated by means of a sequence of experimental IR images taken in a typical maritime environment. The results show that the proposed algorithm outperforms a standard detection algorithm specifically tailored to the analysed scenario. A statistical analysis of the experimental data is also performed to validate the hypotheses used to derive the detection algorithm. View full abstract»
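The two-step structure — clutter removal, then a threshold fixed solely by the desired false-alarm probability — can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: background removal is a crude global median, the residual is assumed Gaussian, and its scale is estimated robustly with the MAD.

```python
import numpy as np
from statistics import NormalDist

def detect_targets(image, pfa=1e-4):
    """Detect bright point targets: remove background, then threshold the
    residual at k*sigma where k is the Gaussian quantile for the given
    false-alarm probability -- the only parameter to set."""
    residual = image - np.median(image)                 # crude clutter removal
    mad = np.median(np.abs(residual - np.median(residual)))
    sigma = 1.4826 * mad                                # robust noise scale
    k = NormalDist().inv_cdf(1.0 - pfa)                 # tail quantile
    return np.argwhere(residual > k * sigma)

rng = np.random.default_rng(1)
frame = rng.normal(10.0, 1.0, (64, 64))                 # synthetic IR clutter
frame[20, 30] += 12.0                                   # bright point target
hits = detect_targets(frame)
```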

  • Turbo equalisation in non-Gaussian impulsive noise

    Publication Year: 2005, Page(s): 52 - 60
    Cited by: Papers (3)
    PDF (581 KB)

    Turbo equalisation is a state-of-the-art receiving scheme for coded data transmission over channels introducing intersymbol interference (ISI). The author investigates turbo equalisation performance in the presence of ISI and impulsive noise. The design imperfections contributing to the non-robustness of the standard turbo equaliser to outliers are identified, and a novel turbo equaliser, at almost no increase in complexity, is proposed for joint mitigation of ISI and impulsive noise. The proposed turbo equaliser incorporates a Talwar penalty function into the maximum a posteriori (MAP) component equaliser to serve two purposes. First, it improves the estimation of the transition probabilities for all transitions through the trellis and for subsequent determination of the a posteriori log-likelihood ratio. Secondly, it absorbs the outliers and prevents them from spreading into the MAP constituent decoder. Simulation results based on Proakis's channel models show that the proposed turbo equaliser achieves a dramatic improvement over the standard turbo equaliser in impulsive noise. At a bit error rate (BER) of 10^-2, the performance gain is as large as 3.5 to 5 dB, and as large as 7 to 8 dB at a BER of 10^-3.
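The Talwar penalty mentioned above is simply a clipped quadratic: quadratic for small errors, constant beyond a clipping point, so an outlier contributes no more than a moderate error. A one-line sketch (the clipping constant c is a hypothetical value, not taken from the paper):

```python
def talwar_rho(e, c=2.0):
    """Talwar penalty: e**2 inside the clipping point c, constant c**2
    beyond it, so outliers stop influencing the metric."""
    return min(e * e, c * c)

# A huge impulsive error costs exactly as much as a moderate one,
# unlike the squared-error penalty it replaces.
costs = [talwar_rho(0.5), talwar_rho(3.0), talwar_rho(50.0)]
```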

  • Regularised nonlinear blind signal separation using sparsely connected network

    Publication Year: 2005, Page(s): 61 - 73
    Cited by: Papers (3)
    PDF (1446 KB)

    A nonlinear approach based on the Tikhonov regularised cost function is presented for blind signal separation of nonlinear mixtures. The proposed approach uses a multilayer perceptron as the nonlinear demixer and combines both information theoretic learning and structural complexity learning into a single framework. It is shown that this approach can be jointly used to extract independent components while constraining the overall perceptron network to be as sparse as possible. The update algorithm for the nonlinear demixer is subsequently derived using the new cost function. Sparseness in the network connection is utilised to determine the total number of layers required in the multilayer perceptron and to prevent the nonlinear demixer from outputting arbitrary independent components. Experiments are meticulously conducted to study the performance of the new approach and the outcomes of these studies are critically assessed for performance comparison with existing methods. View full abstract»
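The sparsity mechanism — a regularised cost whose minimiser drives superfluous connection weights exactly to zero — can be illustrated on a linear toy problem with ISTA (a gradient step followed by soft-thresholding). This is not the paper's information-theoretic update rule, only a minimal demonstration of how a sparsity penalty prunes a weight vector; all names and sizes are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.5, iters=200):
    """ISTA for min 0.5*||A w - y||^2 + lam*||w||_1: gradient descent plus
    soft-thresholding, which zeroes small weights exactly."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = w - step * (A.T @ (A @ w - y))      # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # shrink
    return w

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))
w_true = np.zeros(10)
w_true[[1, 6]] = [3.0, -2.0]                    # only two active connections
w = ista(A, A @ w_true)                          # recovers a sparse weight vector
```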

  • Strategies to improve the performance of very low bit rate speech coders and application to a variable rate 1.2 kb/s codec

    Publication Year: 2005, Page(s): 74 - 86
    Cited by: Papers (5) | Patents (3)
    PDF (608 KB)

    This paper presents several strategies to improve the performance of very low bit rate speech coders and describes a speech codec that incorporates these strategies and operates at an average bit rate of 1.2 kb/s. The encoding algorithm is based on several improvements in a mixed multiband excitation (MMBE) linear predictive coding (LPC) structure. A switched-predictive vector quantiser technique that outperforms previously reported schemes is adopted to encode the LSF parameters. Spectral and sound-specific low rate models are used in order to achieve high quality speech at low rates. An MMBE approach with three sub-bands is employed to encode voiced frames, while fricative and stop modelling and synthesis techniques are used for unvoiced frames. This strategy is shown to provide good quality synthesised speech, at a bit rate of only 0.4 kb/s for unvoiced frames. To reduce coding noise and improve decoded speech, a spectral envelope restoration combined with noise reduction (SERNR) postfilter is used. The contributions of the techniques described in this paper are separately assessed and then combined in the design of a low bit rate codec that is evaluated against the North American Mixed Excitation Linear Prediction (MELP) coder. The performance assessment is carried out in terms of the spectral distortion of LSF quantisation, mean opinion score (MOS), A/B comparison tests and the ITU-T P.862 perceptual evaluation of speech quality (PESQ) standard. Assessment results show that the improved methods for LSF quantisation, sound-specific modelling and synthesis and the new postfiltering approach can significantly outperform previously reported techniques. Further results also indicate that a system combining the proposed improvements and operating at 1.2 kb/s is comparable to (slightly outperforming) a MELP coder operating at 2.4 kb/s. For tandem connection situations, the proposed system is clearly superior to the MELP coder.
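The average-rate arithmetic behind a variable-rate codec is a simple weighted mean of the per-class frame rates. Only the 0.4 kb/s unvoiced rate below is taken from the abstract; the voiced rate and the voiced/unvoiced proportions are hypothetical numbers chosen to show how a 1.2 kb/s average can arise.

```python
def average_rate(rate_voiced, rate_unvoiced, frac_voiced):
    """Average bit rate of a two-class variable-rate codec, in kb/s."""
    return frac_voiced * rate_voiced + (1 - frac_voiced) * rate_unvoiced

# e.g. hypothetical 1.6 kb/s voiced frames, the paper's 0.4 kb/s unvoiced
# frames, and two-thirds of frames voiced:
avg = average_rate(1.6, 0.4, 2 / 3)   # -> 1.2 kb/s on average
```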

  • Parametrically controlled noise shaping in variable state-step-back pseudo-Trellis SDM

    Publication Year: 2005, Page(s): 87 - 96
    PDF (831 KB)

    Progress is reported in parametrically controlled noise-shaping sigma-delta modulator (SDM) design. As this SDM structure can provide a higher SNR than normal SDM structures, Philips Research Laboratories questioned whether further improvement could be obtained using techniques inspired by the Trellis SDM. Simulations are used here to illustrate the performance of a parametrically controlled pseudo-Trellis SDM. The technique uniquely uses a variable state-step-back approach to mediate loop behaviour, and is shown to achieve robust stability in the presence of aggressive noise shaping and high-level signals. Comparisons are made with traditional SDM structures and LPCM systems for high-resolution audio applications.
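For orientation, the basic loop that all of these designs elaborate on is the first-order sigma-delta modulator: an integrator accumulates the input-minus-feedback error and a 1-bit quantiser closes the loop, so the bitstream's local average tracks the input. This sketch is the textbook first-order loop, not the paper's higher-order pseudo-Trellis structure.

```python
def sdm_first_order(x):
    """First-order sigma-delta modulator: returns a +/-1 bitstream whose
    running average approximates the (bounded) input signal."""
    acc, out = 0.0, []
    for sample in x:
        y = 1.0 if acc >= 0.0 else -1.0   # 1-bit quantiser
        out.append(y)
        acc += sample - y                  # integrate the quantisation error
    return out

bits = sdm_first_order([0.25] * 1000)      # DC input of 0.25
```

The quantisation error stays bounded in the accumulator, which is exactly the noise-shaping property: the error spectrum is pushed to high frequencies where it can be filtered out.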

  • Complex-variable distribution theory for Laplace and z transforms

    Publication Year: 2005, Page(s): 97 - 106
    Cited by: Papers (1)
    PDF (888 KB)

    The author proposes a generalisation of the theory of generalised functions, also known as the theory of distributions, by extending the theory to include generalised functions of a complex variable, both in the complex plane associated with continuous-time functions and that with discrete-time functions. The generalisation provides, among other results, mathematical justifications of the properties of recently introduced generalised Dirac-delta impulses, using the principles of distribution theory. Properties of generalised functions of a complex variable are explored both in the Laplace domain associated with continuous-time functions and the z domain associated with discrete-time functions. Shifting of distributions, scaling, derivation, convolution with distributions and convolution with ordinary functions are evaluated in Laplace and z domains. Three-dimensional generalisations of sequences leading to generalised impulses, and of test functions in Laplace and z domains are presented. New expanded Laplace and z transforms are obtained using the proposed generalisation.
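As a minimal illustration of what "sifting" looks like for an impulse located in the complex plane (not the paper's full construction), the action of an impulse concentrated at $z_0$ on a test function $\varphi$ analytic inside a closed contour $C$ enclosing $z_0$ takes the form of a Cauchy contour integral:

```latex
\left\langle \delta(z - z_0),\, \varphi \right\rangle
  \;=\; \frac{1}{2\pi j}\oint_{C} \frac{\varphi(z)}{z - z_0}\, dz
  \;=\; \varphi(z_0)
```

This is the complex-plane analogue of the real-axis sifting property $\int \delta(t - t_0)\,\varphi(t)\,dt = \varphi(t_0)$, with the Cauchy kernel playing the role of the delta.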

  • Performance study of gradient correlation for sub-pixel motion estimation in the frequency domain

    Publication Year: 2005, Page(s): 107 - 114
    Cited by: Papers (4)
    PDF (553 KB)

    The authors present a performance study of gradient correlation in the context of the estimation of interframe motion in video sequences. The method is based on the maximisation of the spatial gradient cross-correlation function, which is computed in the frequency domain and therefore can be implemented by fast transformation algorithms. Enhancements to the baseline gradient-correlation algorithm are presented which further improve performance, especially in the presence of noise. A comparative performance study is also presented, which demonstrates that the proposed method outperforms state-of-the-art methods in frequency-domain motion estimation, in the form of phase correlation, in terms of sub-pixel accuracy for a range of test material and motion scenarios.
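The phase-correlation baseline that the paper compares against is easy to sketch: normalise the cross-power spectrum of the two images to unit magnitude, inverse-transform, and read the translation off the correlation peak. This recovers integer shifts; the sub-pixel refinements studied in the paper (and the gradient pre-filtering itself) are omitted here.

```python
import numpy as np

def phase_correlation(f, g):
    """Integer translation of f relative to g via the peak of the inverse
    FFT of the unit-magnitude cross-power spectrum."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    R = F * G.conj()
    R /= np.abs(R) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(R))
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(3)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (5, 9), axis=(0, 1))   # known (dy, dx) = (5, 9)
dy, dx = phase_correlation(shifted, img)
```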

  • Motion-aided sampling and reconstruction

    Publication Year: 2005, Page(s): 115 - 121
    PDF (719 KB)

    Motivated by motion compensated filtering in image processing, this paper considers the problem of sampling and reconstruction of signals with sampling rates below the Nyquist rate. It is assumed that temporal dependence can be induced via motion. This way, the data consists of both spatial and temporal sampling, and here the conditions for reconstruction are analysed for a number of typical motions. Extensive simulation experiments are also provided which further support the analysis. View full abstract»
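The premise can be shown with a one-dimensional toy case (an illustrative assumption, not one of the paper's analysed motions): each frame alone is sampled at half the Nyquist rate, but a known one-sample motion between frames places the second frame's samples on the first frame's missing positions, so merging the two frames reconstructs the full-rate signal.

```python
# Hypothetical full-rate signal (here, the first 16 digits of pi).
signal = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]

frame1 = signal[0::2]        # frame 1: every second sample (sub-Nyquist)
shifted = signal[1:]         # frame 2: scene moved by exactly one sample
frame2 = shifted[0::2]       # same sampler now lands on the odd positions

# Motion-aided reconstruction: interleave the two half-rate frames.
recon = [v for pair in zip(frame1, frame2) for v in pair]
```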

  • Acoustic echo cancellation for stereophonic systems derived from pairwise panning of monophonic speech

    Publication Year: 2005, Page(s): 122 - 128
    PDF (530 KB)

    An algorithm is introduced that performs stereophonic acoustic echo cancellation (SAEC) for systems using pairwise panning of a single monophonic source to provide the effect of spatialisation. The technique exploits the inherent high correlation between the loudspeaker signals, unlike other general SAEC techniques, which try to utilise any small uncorrelated features in the signals. The algorithm maintains a single aggregate echo path estimate that is updated using normalised least mean square (NLMS) and the knowledge of any change in the spatialisation. Consequently, it achieves a computational complexity that is of the same order as a single channel NLMS algorithm. View full abstract»
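The aggregate-path idea follows from linearity: with pan gains g and 1-g applied to one mono source s, the echo is (g*h_L + (1-g)*h_R) convolved with s, so a single NLMS filter on s suffices while the pan is fixed. The sketch below demonstrates that identity with hypothetical short echo paths and a fixed pan gain (the paper additionally re-uses knowledge of pan changes, which is omitted here).

```python
import numpy as np

def nlms(x, d, order, mu=0.5, eps=1e-6):
    """Single-channel NLMS: adapt w so that w * x approximates d."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]     # current and past inputs
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (u @ u + eps)   # normalised update
    return w, e

rng = np.random.default_rng(4)
s = rng.standard_normal(4000)                       # mono source
hL = np.array([0.8, 0.3, 0.1])                      # hypothetical left path
hR = np.array([0.5, 0.2, 0.4])                      # hypothetical right path
g = 0.7                                             # fixed pan gain
h_agg = g * hL + (1 - g) * hR                       # aggregate echo path
echo = np.convolve(s, h_agg)[:len(s)]               # microphone signal
w, e = nlms(s, echo, order=3)                       # one filter cancels both
```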

  • Locally adaptive wavelet domain Bayesian processor for denoising medical ultrasound images using speckle modelling based on the Rayleigh distribution

    Publication Year: 2005, Page(s): 129 - 135
    Cited by: Papers (13)
    PDF (974 KB)

    The authors present a statistical approach to speckle reduction in medical ultrasound B-scan images based on maximum a posteriori (MAP) estimation in the wavelet domain. In this framework, a new class of statistical model for speckle noise is proposed to obtain a simple and tractable solution in a closed analytical form. The proposed method uses the Rayleigh distribution for speckle noise and a Gaussian distribution for modelling the statistics of wavelet coefficients in a logarithmically transformed ultrasound image. The method combines the MAP estimation with the assumption that speckle is spatially correlated within a small window and designs a locally adaptive Bayesian processor whose parameters are computed from the neighbouring coefficients. Further, the locally adaptive estimator is extended to the redundant wavelet representation, which yields better results than the decimated wavelet transform. The experimental results show that the proposed method clearly outperforms the state-of-the-art medical image denoising algorithm of Pizurica et al., spatially adaptive single-resolution methods and band-adaptive multi-scale soft-thresholding techniques in terms of quantitative performance as well as in terms of visual quality of the images. The main advantage of the new method over the existing techniques is that it suppresses speckle noise well, while retaining the structure of the image, particularly the thin bright streaks, which tend to occur along boundaries between tissue layers.
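A locally adaptive Bayesian processor of this kind can be sketched in its simplest Gaussian-Gaussian form: each value is shrunk towards its local mean by a Wiener-style factor computed from the local variance, so flat (speckle-only) regions are smoothed hard while high-variance structure is preserved. This is a pixel-domain simplification under a Gaussian noise assumption — the paper works in the wavelet domain with a Rayleigh speckle model — and all parameter values are hypothetical.

```python
import numpy as np

def locally_adaptive_shrink(y, noise_var, win=3):
    """Locally adaptive MAP-style estimator: shrink each value towards its
    local mean by s2/(s2+noise_var), with s2 the local signal variance
    estimated from the surrounding window."""
    pad = win // 2
    yp = np.pad(y, pad, mode='reflect')
    out = np.empty_like(y, dtype=float)
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            block = yp[i:i + win, j:j + win]
            m = block.mean()
            s2 = max(block.var() - noise_var, 0.0)   # local signal variance
            out[i, j] = m + s2 / (s2 + noise_var) * (y[i, j] - m)
    return out

rng = np.random.default_rng(5)
clean = np.zeros((16, 16))
clean[:, 8:] = 4.0                                   # a tissue-boundary-like edge
noisy = clean + rng.normal(0.0, 0.5, clean.shape)
den = locally_adaptive_shrink(noisy, 0.25)           # known noise variance
```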
