
IEE Proceedings - Vision, Image and Signal Processing

Issue 4, August 1998

  • Processing time-correlated single photon counting data to acquire range images

    Page(s): 237 - 243

    The processing and analysis of range data are described for a time-of-flight imaging system based on time-correlated single photon counting. The system is capable of acquiring range data accurate to 10 μm at a standoff distance of the order of 1 m, although this distance can be varied substantially. It is shown how fitting the pulsed histogram data with a combination of a symmetric key and polynomial functions can improve the accuracy and robustness of the depth data compared with methods based on upsampling and centroid estimation. The imaging capability of the system is also demonstrated.

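    The abstract mentions centroid estimation as the baseline against which the histogram-fitting approach is compared. The Python sketch below illustrates only that baseline, converting a TCSPC histogram into a range value by centroiding the return-pulse peak; the bin width, background level and synthetic data are assumed for illustration, and the fitting approach described in the abstract is not reproduced.

```python
# Illustrative baseline only (not the paper's fitting procedure): estimate the
# return-pulse position in a TCSPC histogram by centroiding, then convert the
# time of flight to range. Bin width and background level are assumed values.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
BIN_WIDTH_S = 2e-12        # assumed histogram bin width (2 ps)

def centroid_range(histogram, background=0.0):
    """Estimate range (metres) from a TCSPC histogram via peak centroiding."""
    counts = np.clip(np.asarray(histogram, dtype=float) - background, 0.0, None)
    bins = np.arange(counts.size)
    t_bins = np.sum(bins * counts) / np.sum(counts)   # centroid position in bins
    tof = t_bins * BIN_WIDTH_S                        # time of flight, seconds
    return 0.5 * C * tof                              # halve: light travels out and back

# Synthetic example: a roughly Gaussian return pulse centred on bin 1700
rng = np.random.default_rng(0)
bins = np.arange(4096)
hist = rng.poisson(200.0 * np.exp(-0.5 * ((bins - 1700) / 8.0) ** 2) + 1.0)
print(f"estimated range: {centroid_range(hist, background=1.0):.4f} m")
```
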
  • Multiresolution and hybrid Bayesian algorithms for automatic detection of change points

    Page(s): 280 - 286

    Two methods for the detection of step changes in noise-corrupted, piecewise-constant, univariate datasets are presented. The aim is to determine automatically the number and position of any discontinuities in the mean, a problem commonly known as the change-point problem. The multiresolution method involves performing a discrete wavelet transform, shrinking the coefficients via soft thresholding, and then correlating across scales. Bayesian algorithms have long been available; they yield good results but are impossible to apply in many cases because of their computational complexity. The technique is compared with previously published hybrid Bayesian algorithms. In any technique it is essential that the probability of false detection is low while the probability of detecting correct change points remains sufficiently high. To this end, the Student's t-test is introduced as a final stage after both methods; it eliminates most, if not all, false detections while retaining most correct ones. Simulation results are presented for each algorithm, demonstrating good performance on datasets with different characteristics.

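    As a rough, single-scale illustration of the ingredients named in the abstract (wavelet detail coefficients, soft thresholding, and a final Student's t-test), the sketch below flags candidate steps from thresholded Haar-like detail coefficients and confirms each with a two-sample t-test. It is not the paper's multiresolution or hybrid Bayesian algorithm; the window size, threshold rule and significance level are assumptions.

```python
# Single-scale sketch only: candidate change points come from soft-thresholded
# Haar-like (first-difference) detail coefficients; each candidate is then
# confirmed with a two-sample Student's t-test, echoing the paper's final stage.
import numpy as np
from scipy.stats import ttest_ind

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def detect_change_points(y, window=10, alpha=0.01):
    y = np.asarray(y, dtype=float)
    detail = np.diff(y) / np.sqrt(2.0)                 # undecimated level-1 detail
    sigma = np.median(np.abs(detail)) / 0.6745         # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(detail.size))   # universal threshold
    candidates = np.nonzero(soft_threshold(detail, thr))[0]

    confirmed = []
    for k in candidates:
        left, right = y[max(0, k - window):k + 1], y[k + 1:k + 1 + window]
        if len(left) > 2 and len(right) > 2:
            _, p = ttest_ind(left, right, equal_var=False)
            if p < alpha:
                confirmed.append(int(k))               # step between y[k] and y[k+1]
    return confirmed

# Piecewise-constant data with steps at indices 100 and 180, plus white noise
rng = np.random.default_rng(1)
signal = np.concatenate([np.zeros(100), 3 * np.ones(80), np.ones(120)])
print(detect_change_points(signal + 0.3 * rng.standard_normal(signal.size)))
```
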
  • Linear prediction analysis of speech signals in the presence of white Gaussian noise with unknown variance

    Page(s): 303 - 308

    A simple method is presented to compensate for noise effects before performing linear prediction analysis of speech signals in the presence of white noise with unknown variance. The method determines a suitable bias to be subtracted from the zero-lag autocorrelation, rather than deriving the exact noise variance. The resulting linear prediction filter is guaranteed to be stable, since the bias used is always smaller than the minimum eigenvalue of the autocorrelation matrix. In addition to a comparison with other methods, the proposed method is examined from various viewpoints, including the degree of formant intensity, signal-to-noise ratio (SNR), deviation of the compensated spectra and objective distortion measures. The improvements observed across a data set of four sentences uttered by six speakers indicate that spectra compensated at low SNRs are comparable to uncompensated spectra measured at approximately 5 dB higher SNRs.

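    The sketch below shows the general mechanism the abstract describes: subtracting a bias from the zero-lag autocorrelation before solving the linear prediction normal equations. The particular bias used here (a fraction of the smallest eigenvalue of the autocorrelation matrix) is an illustrative stand-in for the paper's rule; keeping the bias below that eigenvalue is what keeps the compensated matrix positive definite, which underpins the stability guarantee mentioned in the abstract.

```python
# Hedged sketch, not the paper's bias rule: noise-compensated LPC obtained by
# subtracting an illustrative bias from the zero-lag autocorrelation, with the
# bias kept below the smallest eigenvalue of the autocorrelation matrix.
import numpy as np
from scipy.linalg import toeplitz

def lpc_with_bias(x, order=10, bias_fraction=0.9):
    x = np.asarray(x, dtype=float)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
    lam_min = np.linalg.eigvalsh(toeplitz(r))[0]    # smallest eigenvalue of R
    r_comp = r.copy()
    r_comp[0] -= bias_fraction * max(lam_min, 0.0)  # illustrative bias choice
    a = np.linalg.solve(toeplitz(r_comp[:order]), r_comp[1:order + 1])
    return a                                        # forward predictor coefficients

# Noisy second-order AR process as a stand-in for a voiced speech frame
rng = np.random.default_rng(2)
clean = np.zeros(2000)
for n in range(2, 2000):
    clean[n] = 1.3 * clean[n - 1] - 0.8 * clean[n - 2] + rng.standard_normal()
noisy = clean + 2.0 * rng.standard_normal(2000)     # added white noise, unknown variance
print(lpc_with_bias(noisy, order=2))                # compare with the true (1.3, -0.8)
```
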
  • Design of causal stable IIR perfect reconstruction filter banks using transformation of variables

    Page(s): 287 - 292

    A generalisation of the design technique of Tay and Kingsbury (see IEEE Trans. Circuits Syst. II: Analog Digit. Signal Process., vol.43, no.2, p.274-79, 1996) for two-channel, causal, stable IIR perfect reconstruction filter banks is presented, based on transformation of variables. Previously the transformation functions used were allpass, but this yielded subband filters with a fairly large overshoot in their frequency responses. By relaxing the requirement that the transformation functions be allpass, filters with improved responses (lower or even no overshoot) are achievable. Several design examples are presented to show the flexibility of the technique.

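    The numerical sketch below illustrates only the transformation-of-variables mechanism: a prototype filter H(x) becomes a subband filter by substituting a rational transformation function M(z) for the variable x, here a first-order allpass as in the earlier allpass-based designs the paper generalises. It makes no attempt at a perfect-reconstruction design; the prototype and allpass coefficient are arbitrary illustrative choices.

```python
# Mechanism only, not a PR design: evaluate H(M(z)) on the unit circle for a
# short prototype H(x) and a first-order allpass transformation function M(z).
import numpy as np

h = np.array([0.25, 0.5, 0.25])          # prototype H(x) = 0.25 + 0.5x + 0.25x^2

def allpass(w, a=0.5):
    z1 = np.exp(-1j * w)                  # z^{-1} on the unit circle
    return (a + z1) / (1.0 + a * z1)      # first-order allpass M(z)

def transformed_response(w, transform):
    m = transform(w)
    return sum(c * m ** n for n, c in enumerate(h))   # H(M(z)) at frequency w

w = np.linspace(0.0, np.pi, 5)
print(np.abs(transformed_response(w, allpass)).round(3))   # lowpass-shaped magnitude
```
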
  • Laser-video scanner calibration without the use of a frame store

    Page(s): 244 - 248

    A calibration method for a structured light surface scanner is presented. The method uses a model-based calibration that accounts for laser stripe and translation stage alignment as well as the internal and external camera parameters. Including nonlinear radial distortion in the camera model improves the calibration accuracy over a linear model. The method has been designed for laser-video scanner systems that use hardware-based measurement of the illuminated line position rather than a frame store, since standard camera-based calibration techniques cannot be used for such systems. However, the method is also applicable to systems that use software-based stripe segmentation.

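    The sketch below illustrates the camera-model ingredient singled out in the abstract: a pinhole projection extended with a single radial-distortion coefficient, compared with the purely linear model obtained by setting that coefficient to zero. Parameter names and values are illustrative, not the paper's.

```python
# Hedged sketch: pinhole projection with one radial-distortion term k1.
# Setting k1 = 0 recovers the linear camera model the abstract compares against.
import numpy as np

def project(point_cam, fx=800.0, fy=800.0, cx=320.0, cy=240.0, k1=-0.2):
    """Project a 3-D point (camera coordinates) to pixel coordinates."""
    x, y = point_cam[0] / point_cam[2], point_cam[1] / point_cam[2]  # normalised coords
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                       # radial distortion factor (1.0 in the linear model)
    return np.array([fx * d * x + cx, fy * d * y + cy])

print(project(np.array([0.1, -0.05, 1.0])))           # distorted pixel position
print(project(np.array([0.1, -0.05, 1.0]), k1=0.0))   # linear (undistorted) model
```
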
  • Restoration of images contaminated by mixed Gaussian and impulse noise using a recursive minimum-maximum method

    Page(s): 264 - 270

    A technique is proposed for removing impulse noise in images, called the recursive minimum-maximum method. Statistical analysis of this method indicates that it is good at preserving fine details and suppressing impulse noise at the same time. Experimental results show that the technique is robust and produces better restored images under various impulse noise conditions than other median filter-based methods.

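    The abstract does not give the algorithm, so the sketch below is only a loose, plausible reading of a recursive min-max filter, not the authors' method: a pixel is treated as an impulse only if it equals the local minimum or maximum of its window, and only such pixels are replaced; processing in raster order over the partially corrected image supplies the recursion. Window size and replacement rule are assumptions.

```python
# Loose illustrative reading (not the paper's recursive minimum-maximum method):
# replace a pixel only when it equals its window's min or max, using the
# already-corrected image so earlier decisions feed later ones (recursion).
import numpy as np

def recursive_minmax_filter(img, radius=1):
    out = img.astype(float).copy()
    H, W = out.shape
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            window = out[i0:i1, j0:j1]               # includes previously corrected pixels
            if out[i, j] in (window.min(), window.max()):
                out[i, j] = np.median(window)        # replace suspected impulse
    return out

# Small test: constant image corrupted by a few salt-and-pepper impulses
rng = np.random.default_rng(3)
img = np.full((16, 16), 128.0)
idx = rng.integers(0, 16, size=(6, 2))
img[idx[:, 0], idx[:, 1]] = rng.choice([0.0, 255.0], size=6)
print(np.abs(recursive_minmax_filter(img) - 128.0).max())   # residual error after filtering
```
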
  • Attack operators for digitally watermarked images

    Page(s): 271 - 279

    Experiments are reported in which the performance under malicious attack of discrete cosine transform-based digital watermarking has been investigated. Image-processing operators (attack operators) have been found that are able to impair significantly the embedded watermark without any knowledge of the secret keys used for the embedding process. Of these, the Laplacian removal operator proved to be particularly effective. Such operators need to be recognised and understood in the design of effective watermarking schemes.

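    The sketch below is a generic illustration of the attack setting described in the abstract, not the paper's watermarking scheme or its Laplacian removal operator: a toy spread-spectrum watermark is embedded in mid-frequency DCT coefficients, a keyless smoothing operator (Gaussian filtering, used here as a stand-in attack) is applied, and the drop in the detector response is measured. The host image, embedding band and strength are synthetic assumptions.

```python
# Generic keyless-attack illustration (stand-in operator, not Laplacian removal):
# embed a toy DCT-domain watermark, smooth the image without the key, and show
# how the correlation-based detector response falls.
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

u = np.cos(np.linspace(0.0, np.pi, 64))
host = 128.0 + 50.0 * np.outer(u, u)               # smooth synthetic host image

key = np.random.default_rng(12345)                 # the "secret" embedding key
mask = np.zeros((64, 64), dtype=bool)
mask[8:24, 8:24] = True                            # mid-frequency DCT band
wm = key.choice([-1.0, 1.0], size=int(mask.sum()))

C = dctn(host, norm='ortho')
C[mask] += 2.0 * wm                                # embed with strength 2.0
marked = idctn(C, norm='ortho')

def detector(img):
    """Mean correlation of the chosen DCT band with the key's pattern."""
    return float(np.mean(dctn(img, norm='ortho')[mask] * wm))

attacked = gaussian_filter(marked, sigma=1.5)      # keyless smoothing attack
print(f"host:     {detector(host):+.3f}")
print(f"marked:   {detector(marked):+.3f}")
print(f"attacked: {detector(attacked):+.3f}")
```
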
  • Rate-distortion analysis of nonlinear quantisers for MPEG video coders: sigmoidal and unimodal quantiser control functions

    Page(s): 249 - 256

    Quantisation is employed in MPEG video coders as a video rate control scheme, regulating the data rate of the compressed video bit stream entering the transmission buffer. For constant bit rate applications, the quantiser has a crucial effect on both the video data rate and the video quality, and optimising the quantiser step size for both is a challenging task since, as rate-distortion theory shows, they are conflicting requirements. The quantiser step size is generally determined by a linear relationship with respect to the buffer occupancy. Two nonlinear quantiser control functions are investigated, sigmoidal and unimodal, which achieve superior video rate control performance while maintaining video quality similar to that of the linear function. The two functions are analysed in the framework of rate-distortion theory, and their effect on video rate fluctuation is also analysed. Encoding results for the two functions are compared with the MPEG-2 TM5 test model.

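    The sketch below shows only the control idea, not the paper's exact functions: the quantiser step is driven by transmission-buffer occupancy, with a linear map reacting proportionally while a sigmoidal map holds the step nearly constant at mid occupancy and reacts sharply near underflow or overflow. The quantiser range and steepness parameter are illustrative assumptions.

```python
# Hedged sketch of buffer-driven quantiser control: linear versus sigmoidal
# mappings from buffer occupancy b in [0, 1] to the quantiser scale.
import numpy as np

Q_MIN, Q_MAX = 2, 31                      # MPEG-style quantiser scale range (illustrative)

def q_linear(b):
    """Linear control: quantiser step proportional to buffer occupancy."""
    return Q_MIN + (Q_MAX - Q_MIN) * b

def q_sigmoidal(b, steepness=10.0):
    """Sigmoidal control centred on a half-full buffer (illustrative parameters)."""
    s = 1.0 / (1.0 + np.exp(-steepness * (b - 0.5)))
    return Q_MIN + (Q_MAX - Q_MIN) * s

for b in (0.1, 0.3, 0.5, 0.7, 0.9):       # buffer occupancy
    print(f"b={b:.1f}  linear Q={q_linear(b):5.1f}  sigmoidal Q={q_sigmoidal(b):5.1f}")
```
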
  • Reliable tracking of facial features in semantic-based video coding

    Page(s): 257 - 263

    A new method of tracking the position of important facial features for semantic-based moving image coding is presented. Reliable and fast tracking of facial features in head-and-shoulders scenes is of paramount importance for the reconstruction of the speaker's motion in videophone systems. The proposed method is based on eigenvalue decomposition of sub-images extracted from subsequent frames of the video sequence. The motion of each facial feature (the left eye, the right eye, the nose and the lips) is tracked separately, which means that the algorithm can easily be adapted for a parallel machine. No restrictions, other than the presence of the speaker's face, were imposed on the actual contents of the scene. The algorithm was tested on numerous widely used head-and-shoulders video sequences containing moderate head pan, rotation and zoom, with remarkably good results; tracking was maintained even when the facial features were occluded. The algorithm can also be used in other semantic-based systems.

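    The sketch below illustrates one ingredient consistent with the abstract (eigen-template matching of a feature sub-image), not the paper's full tracker: sub-images of a feature from earlier frames define a small eigen-basis via SVD, and the feature's position in the next frame is taken as the window with the lowest reconstruction error in that basis. Patch sizes, search box and the synthetic frames are assumptions; each feature (eyes, nose, lips) could be tracked independently this way.

```python
# Hedged eigen-template matching sketch (one ingredient, not the full tracker):
# build an eigen-basis from feature sub-images, then locate the feature in a
# new frame by minimum reconstruction error over a search window.
import numpy as np

def build_eigenbasis(patches, n_components=4):
    X = np.stack([p.ravel() for p in patches]).astype(float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]                        # mean patch + eigen-templates

def track(frame, basis, patch_shape, search_box):
    mean, vt = basis
    ph, pw = patch_shape
    (r0, r1), (c0, c1) = search_box
    best, best_pos = np.inf, None
    for r in range(r0, r1 - ph + 1):
        for c in range(c0, c1 - pw + 1):
            v = frame[r:r + ph, c:c + pw].astype(float).ravel() - mean
            err = np.linalg.norm(v - vt.T @ (vt @ v))     # reconstruction error
            if err < best:
                best, best_pos = err, (r, c)
    return best_pos

# Toy example: a bright blob ("feature") that shifts by a few pixels
rng = np.random.default_rng(5)
def make_frame(r, c):
    f = rng.normal(0, 5, size=(64, 64))
    f[r:r + 8, c:c + 8] += 100.0
    return f

patches = [make_frame(20, 20)[18:30, 18:30] for _ in range(6)]   # 12x12 training patches
basis = build_eigenbasis(patches)
frame = make_frame(23, 26)                                       # feature has moved
print(track(frame, basis, (12, 12), ((10, 45), (10, 45))))       # roughly (21, 24)
```
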
  • Root moments: a digital signal-processing perspective

    Page(s): 293 - 302

    The use of cepstral parameters is gaining importance in many areas. However, their introduction is usually through an approach which often mars their simplicity and beauty. The differential cepstrum is an important variant of this class of signal transformations. It has been defined in terms of the logarithmic derivative of the z transform of a given signal. However, a more useful approach is through the Cauchy residue theorem, which yields additional insight and properties. The entire concept and additional properties may be developed in a way that leads naturally to the celebrated Newton identities. These identities are developed and elaborated in the paper. Furthermore, they are employed innovatively in signal-processing problems, including the determination of the minimum phase component of a signal, a stability test for linear systems and the detection of abrupt changes in a signal.

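    The sketch below illustrates the Newton-identities link the abstract builds on: the root moments (power sums of a polynomial's zeros) follow directly from the coefficients, without computing the roots, and are checked here against np.roots. For a stable or minimum-phase polynomial all zeros lie inside the unit circle, so the moments decay in magnitude; the example polynomial is an illustrative choice, and the paper's specific applications are not reproduced.

```python
# Hedged sketch: compute root moments S_1..S_n from polynomial coefficients via
# Newton's identities and verify against the moments of the numerically found roots.
import numpy as np

def root_moments(coeffs, n_moments):
    """Power sums of the zeros of a polynomial, coeffs in descending powers."""
    a = np.asarray(coeffs, dtype=float) / coeffs[0]     # make monic
    m = len(a) - 1
    S = []
    for k in range(1, n_moments + 1):
        s = -k * a[k] if k <= m else 0.0                # Newton's identity, k-th step
        s -= sum(a[i] * S[k - i - 1] for i in range(1, min(k - 1, m) + 1))
        S.append(s)
    return np.array(S)

p = np.array([1.0, -1.3, 0.8])                          # zeros inside the unit circle
print(root_moments(p, 6).round(4))                      # via Newton's identities
print(np.array([np.sum(np.roots(p) ** k).real for k in range(1, 7)]).round(4))  # direct
```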