
IEE Proceedings - Vision, Image and Signal Processing

Issue 1 • 9 February 2006

Contents: 11 articles
  • Cryptanalysis of a data security protection scheme for VoIP

    Page(s): 1 - 10

    A voice-over-Internet protocol technique with a new hierarchical data security protection (HDSP) scheme using a secret chaotic bit sequence has recently been proposed. Some insecure properties of the HDSP scheme are pointed out and then used to develop known/chosen-plaintext attacks. The main findings are: given n known plaintexts, about (100 - 50/2^n)% of the secret chaotic bits can be uniquely determined; given only one specially chosen plaintext, all secret chaotic bits can be uniquely derived; and the secret key can be derived with practically small computational complexity when only one plaintext is known (or chosen). These facts reveal that HDSP is very weak against known/chosen-plaintext attacks. Experiments are given to show the feasibility of the proposed attacks. It is also found that the security of HDSP against brute-force attack is not practically strong. Some countermeasures are discussed for enhancing the security of HDSP, and several basic principles are suggested for the design of a secure encryption scheme.
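
    A quick arithmetic check of the quoted coverage formula, read here as (100 - 50/2^n)%: the fraction of recoverable chaotic bits approaches 100% very quickly as the number of known plaintexts grows.

```python
# Fraction of secret chaotic bits recoverable from n known plaintexts,
# using the (100 - 50/2^n)% figure quoted in the abstract (reading the
# flattened "2n" as 2**n is an assumption about the original typesetting).
def recoverable_bits_percent(n: int) -> float:
    return 100.0 - 50.0 / 2**n

for n in (1, 2, 4, 8):
    print(n, recoverable_bits_percent(n))
```

    Even a single known plaintext already pins down three quarters of the chaotic bit sequence.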

  • Eye location method based on symmetry analysis and high-order fractal feature

    Page(s): 11 - 16

    A method for eye location in human facial images based on symmetry analysis and lacunarity, a high-order fractal feature, is proposed. First, the valley field algorithm is applied to the facial image to identify eye candidates. Then, principal component analysis is used to detect the symmetry axis of the face. The eye candidates are grouped to form eye-pair candidates, and the whole image is rotated around the symmetry axis. Finally, a novel approach to estimating the lacunarity value is proposed to accurately describe the local structure of eye regions. By comparing the lacunarity values of the two eye regions within each eye-pair candidate, the eye-pair candidate with the minimum lacunarity difference is identified as the true one. Numerical experiments demonstrate the effectiveness and reliability of the method.
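
    The paper proposes its own lacunarity estimator; as a rough illustration of the quantity being compared between eye regions, the classical gliding-box estimator can be sketched as follows (a textured region scores higher than a uniform one):

```python
import numpy as np

def lacunarity(patch: np.ndarray, r: int) -> float:
    """Classical gliding-box lacunarity: second moment of the box mass
    divided by the squared mean (the paper's estimator is novel and may
    differ; this is the textbook definition)."""
    h, w = patch.shape
    masses = [
        patch[i:i + r, j:j + r].sum()
        for i in range(h - r + 1)
        for j in range(w - r + 1)
    ]
    m = np.asarray(masses, dtype=float)
    return float(np.mean(m**2) / np.mean(m)**2)
```

    A perfectly uniform patch has lacunarity exactly 1; the eye-pair whose two regions have the smallest lacunarity difference is then selected as the true pair.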

  • Content-based image retrieval using Legendre chromaticity distribution moments

    Page(s): 17 - 24

    It is well known that directly storing and comparing histograms for content-based image retrieval (CBIR) is inefficient in terms of memory space and query processing time. It is shown that the set of Legendre chromaticity distribution moments (LCDM) provides a compact, fixed-length and computationally efficient representation of the colour content of an image. Only a small fixed number of LCDM features need be stored to characterise the colour content effectively, so the whole chromaticity histogram need not be stored and the time involved in database querying is reduced. It is also shown that the LCDM can be computed directly from the chromaticity space without first evaluating the chromaticity histogram.
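
    A minimal sketch of the idea of computing Legendre moments directly from chromaticity samples, bypassing the histogram. The normalisation constants follow the standard Legendre-moment definition; the paper's exact formulation may differ.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_chromaticity_moments(rgb: np.ndarray, order: int = 3) -> np.ndarray:
    """Approximate Legendre moments of the chromaticity distribution as
    sample averages over pixels, so the chromaticity histogram is never
    built (illustrative reconstruction of the LCDM idea)."""
    rgb = rgb.reshape(-1, 3).astype(float)
    s = rgb.sum(axis=1)
    s[s == 0] = 1.0                      # avoid division by zero for black pixels
    r, g = rgb[:, 0] / s, rgb[:, 1] / s  # chromaticity coordinates in [0, 1]
    x, y = 2 * r - 1, 2 * g - 1          # map to the Legendre domain [-1, 1]
    moments = np.empty((order + 1, order + 1))
    for m in range(order + 1):
        pm = legval(x, [0] * m + [1])    # P_m evaluated at each sample
        for n in range(order + 1):
            pn = legval(y, [0] * n + [1])
            norm = (2 * m + 1) * (2 * n + 1) / 4.0
            moments[m, n] = norm * np.mean(pm * pn)
    return moments
```

    The result is a fixed-length (order+1) x (order+1) descriptor regardless of image size, which is what makes database comparison cheap.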

  • Wavelet domain image resolution enhancement

    Page(s): 25 - 30

    A wavelet-domain image resolution enhancement algorithm based on the estimation of detail wavelet coefficients at high-resolution scales is proposed. The method exploits wavelet coefficient correlation in a local neighbourhood and employs linear least-squares regression to estimate the unknown detail coefficients. Results show that the proposed method is considerably superior to conventional image interpolation techniques, in both objective and subjective terms, and compares favourably with competing methods operating in the wavelet domain.

  • Lifting-based wavelet domain adaptive Wiener filter for image enhancement

    Page(s): 31 - 36

    A method of applying a lifting-based wavelet-domain Wiener filter (LBWDMF) to image enhancement is proposed. Lifting schemes have emerged as a powerful method for implementing biorthogonal wavelet filters; they exploit the similarity of the coefficients of the low-pass and high-pass filters to execute faster than classical wavelet transforms. The LBWDMF not only reduces the number of computations but also achieves lossy-to-lossless performance with finite precision. The proposed method exploits the multi-scale characteristics of the wavelet transform and the local statistics of each subband: an image is transformed into the wavelet domain using lifting-based wavelet filters, each subband is processed independently by a Wiener filter, and the result is transformed back into the spatial domain. When the peak signal-to-noise ratio (PSNR) is low, applying the Wiener filter in the lifting-based wavelet domain produces better results than applying it directly in the spatial domain. To validate the effectiveness of the proposed method, its results are compared with those of the spatial-domain Wiener filter (SDWF) and the classical wavelet-domain Wiener filter (CWDWF). Experimental results show that the proposed method outperforms both the SDWF and the CWDWF, visually and in terms of PSNR.
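
    A one-dimensional, single-level sketch of the pipeline under stated assumptions: a Haar transform implemented with lifting steps, an empirical Wiener gain applied to the detail subband, and an inverse lift. The paper uses 2-D biorthogonal filters and local per-subband statistics; this only illustrates the structure.

```python
import numpy as np

def haar_lift(x):
    # Lifting implementation of the (unnormalised) Haar transform: the
    # predict step subtracts the even sample from the odd one, the update
    # step turns the even samples into pairwise means.
    s, d = x[0::2].astype(float), x[1::2].astype(float)
    d = d - s          # predict
    s = s + d / 2      # update -> s holds pairwise means, d differences
    return s, d

def haar_unlift(s, d):
    s = s - d / 2      # undo update
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d + s          # undo predict and interleave
    return x

def wiener_shrink(d, noise_var):
    # Empirical Wiener gain signal_var/(signal_var + noise_var), with the
    # signal variance estimated from the subband itself.
    sig_var = max(np.var(d) - noise_var, 0.0)
    denom = sig_var + noise_var
    return d.copy() if denom == 0 else d * sig_var / denom

def denoise(x, noise_var):
    s, d = haar_lift(np.asarray(x, dtype=float))
    return haar_unlift(s, wiener_shrink(d, noise_var))
```

    With zero assumed noise the Wiener gain is unity and the lifting steps reconstruct the signal exactly, which is the lossless behaviour the abstract refers to.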

  • Car tracking by quantised input LMS, QX-LMS algorithm in traffic scenes

    Page(s): 37 - 45

    The tracking algorithm is an important tool for motion analysis in computer vision. A new car tracking algorithm is proposed, based on a new clipping technique in the field of adaptive filter algorithms. Uncertainty and occlusion of vehicles increase the noise in vehicle tracking in a traffic scene, and the new clipping technique controls this noise in the prediction of vehicle positions. The authors present a new quantised version of the LMS algorithm, the QX-LMS algorithm, which has better tracking capability than the clipped LMS (CLMS) and the LMS while requiring less computation. The threshold parameter of the QX-LMS algorithm provides controllability and improves the tracking and convergence properties, capabilities that the CLMS and LMS algorithms lack. The QX-LMS algorithm is applied to the estimation of a noisy chirp signal, to system identification and to car tracking. Simulation results for noisy chirp signal detection show that the algorithm yields a considerable error reduction compared with the LMS and CLMS algorithms. In tracking some 77 vehicles in different traffic scenes, the proposed algorithm reduces the tracking error relative to the LMS and CLMS algorithms.
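
    The abstract does not define the quantiser. One plausible reading, with a dead-zone threshold t applied to the input vector of an otherwise standard LMS update, is sketched below on a system identification task; the paper's exact QX-LMS rule may differ.

```python
import numpy as np

def quantise(x, t):
    # Three-level input quantiser with dead zone |x| <= t (hypothetical
    # reading of the clipping rule; t plays the role of the paper's
    # threshold parameter).
    return np.sign(x) * (np.abs(x) > t)

def qx_lms(x, d, order=4, mu=0.05, t=0.1):
    """Identify an FIR system from input x and desired output d using an
    LMS update driven by the quantised input vector."""
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # current input vector, newest first
        e = d[n] - w @ u                   # prediction error
        w += mu * e * quantise(u, t)       # quantised-input update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2, 0.1])        # unknown system to identify
d = np.convolve(x, h)[:len(x)]
w = qx_lms(x, d)
```

    Because the quantised input takes only the values -1, 0 and 1, each weight update needs no multiplication by the regressor, which is the source of the computational saving.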

  • Low complexity deblocking method for DCT coded video signals

    Page(s): 46 - 56

    A new method to remove blocking artefacts in low bit-rate block-based video coding, such as MPEG-4 and H.264, is presented. A low-complexity deblocking filter with five modes is proposed: three frequency-related modes (smooth, intermediate and complex, for low-, mid- and high-frequency regions respectively), one special mode (steep mode, for a large offset between two blocks) and a refined mode (corner mode, for the corner of four blocks). A mode decision procedure selects the mode by observing pixel behaviour around the block boundary. To take the masking effect of the human visual system (HVS) into account, the filter for smooth mode is designed to be much stronger than that for complex mode, because human eyes are more sensitive to smooth regions. Experimental results show that, in most cases, the proposed algorithm removes more blocking artefacts than the MPEG-4 deblocking filter and improves both subjective and objective image quality. The proposed algorithm requires less computation than the MPEG-4 filter and is suitable for most block-based image and video coding systems.
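
    A much-reduced sketch of the mode idea on a single 1-D block boundary; the thresholds and filters here are illustrative, not the paper's, and only three of the five modes are shown.

```python
import numpy as np

def deblock_boundary(row, pos, t_act=4.0, t_steep=40.0):
    """Filter one block boundary between row[pos-1] and row[pos].

    Flat neighbourhoods get a strong low-pass (smooth mode), active ones
    only a light blend of the two boundary pixels (complex mode); a large
    step across the boundary is left untouched (steep mode), since it is
    likely a real edge rather than a coding artefact.
    """
    row = row.astype(float)
    left, right = row[pos - 4:pos], row[pos:pos + 4]
    activity = np.abs(np.diff(left)).sum() + np.abs(np.diff(right)).sum()
    step = abs(row[pos] - row[pos - 1])
    if step > t_steep:                       # steep mode: preserve the edge
        return row
    if activity < t_act:                     # smooth mode: strong 3-tap filter
        for i in range(pos - 2, pos + 2):
            row[i] = (row[i - 1] + 2 * row[i] + row[i + 1]) / 4
    else:                                    # complex mode: light boundary blend
        avg = (row[pos - 1] + row[pos]) / 2
        row[pos - 1] = (row[pos - 1] + avg) / 2
        row[pos] = (row[pos] + avg) / 2
    return row
```

    The mode decision is a handful of comparisons per boundary, which is what keeps the overall filter cheap.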

  • Registration algorithm based on image matching for outdoor AR system with fixed viewing position

    Page(s): 57 - 62

    An image registration method based on the Fourier-Mellin transform is introduced for an outdoor augmented reality (AR) system. For this type of AR system the observation position is fixed, so the complex 3-D registration problem reduces to 2-D image registration. An observation globe model is proposed for this method; under this assumption, the Fourier-Mellin transform is used for image registration, and the architecture of the system is illustrated. Experimental results show that the image registration algorithm is accurate and robust, and that it is effective for an outdoor AR system with a fixed viewing position.
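
    The translation-recovery core of Fourier-Mellin registration is phase correlation; the full method recovers rotation and scale by applying the same step to log-polar resampled magnitude spectra. A minimal sketch of that core:

```python
import numpy as np

def phase_correlation(a, b):
    """Return the cyclic shift (dy, dx) such that b == np.roll(a, (dy, dx), (0, 1)).

    Whitening the cross-power spectrum keeps only phase, so a pure
    translation shows up as a sharp correlation peak at the shift.
    """
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map shifts larger than half the image size to negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

    With the viewing position fixed, repeatedly running this step against a reference panorama is enough to align the live view for overlay.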

  • Recovery of motion vectors for error concealment based on an edge-detection approach

    Page(s): 63 - 69

    A new approach to error concealment is presented which exploits both spatial and temporal information from the current and previous frames. The technique consists of two stages: motion vector estimation, followed by enhancement of the estimated motion vector. In the first stage the method estimates a replacement for the corrupted motion vector by applying dynamic weights to related motion vectors from the top, bottom, left and right sub-macroblocks. In the second stage the estimated motion vectors are enhanced using a new approach based on edge detection. Experimental results for several test video sequences are compared with conventional error concealment methods, and higher performance is achieved in both objective peak signal-to-noise ratio measurements and subjective visual quality.
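
    The first-stage estimate can be illustrated with a weighted combination of the four neighbouring vectors; the paper's weights are dynamic, so the fixed weights below are purely illustrative.

```python
import numpy as np

def recover_mv(top, bottom, left, right, weights=(0.3, 0.3, 0.2, 0.2)):
    """Estimate a replacement for a corrupted motion vector as a weighted
    combination of the four neighbouring sub-macroblock vectors."""
    neighbours = np.array([top, bottom, left, right], dtype=float)
    w = np.asarray(weights, dtype=float)
    return neighbours.T @ (w / w.sum())       # normalised weighted average
```

    The second stage would then nudge this estimate so that edges crossing the concealed block line up with those in the neighbouring blocks.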

  • Design and parallel computation of regularised fast Hartley transform

    Page(s): 70 - 78

    The paper describes the design and parallel computation of a regularised fast Hartley transform (FHT), to be used for computation of the discrete Fourier transform (DFT) of real-valued data. For the processing of such data, the FHT has attractions over the fast Fourier transform (FFT) in terms of reduced arithmetic operation counts and reduced memory requirement, whilst its bilateral property means it may be straightforwardly applied to both forward and inverse DFTs. A drawback, however, of conventional FHT algorithms lies in the loss of regularity arising from the need for two sizes of 'butterfly' for efficient fixed-radix implementations. A generic double butterfly is therefore developed for the radix-4 FHT which overcomes the problem in an elegant fashion. The result is a recursive single-butterfly solution, referred to as the regularised FHT, which lends itself naturally to parallelisation and to mapping onto a regular computational structure for implementation with algorithmically specialised hardware.
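
    For reference, the Hartley transform uses the real kernel cas(t) = cos(t) + sin(t), and the DFT of real data follows from simple symmetries of the DHT output. A direct O(N^2) sketch of both facts (the regularised FHT computes the same transform via a fast radix-4 single-butterfly structure):

```python
import numpy as np

def dht(x):
    """Direct discrete Hartley transform H[k] = sum_n x[n] cas(2*pi*n*k/N)."""
    x = np.asarray(x, dtype=float)
    n = np.arange(x.size)
    t = 2 * np.pi * np.outer(n, n) / x.size
    return (np.cos(t) + np.sin(t)) @ x

def dft_from_dht(h):
    # DFT of real data from the DHT:
    #   Re F[k] = (H[k] + H[(N-k) % N]) / 2
    #   Im F[k] = (H[(N-k) % N] - H[k]) / 2
    hr = np.roll(h[::-1], 1)                  # H[(N-k) % N]
    return (h + hr) / 2 + 1j * (hr - h) / 2
```

    Because the kernel is real and self-inverse (up to 1/N), one routine serves both forward and inverse transforms, which is the bilateral property the abstract mentions.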

  • Digital integrator design using Simpson rule and fractional delay filter

    Page(s): 79 - 86

    An IIR digital integrator is designed using the Simpson integration rule and a fractional delay filter. To improve the accuracy of the conventional Simpson IIR integrator at high frequency, the sampling interval is reduced from T to 0.5T. As a result, a fractional delay filter needs to be designed within the proposed Simpson integrator. This problem is solved easily by applying well-documented design techniques for FIR and all-pass fractional delay filters. Several design examples are given to demonstrate the effectiveness of the proposed method.
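
    The conventional Simpson IIR integrator referred to above follows directly from the composite Simpson rule; a sketch, exact for polynomial inputs up to degree three:

```python
import numpy as np

def simpson_integrate(x, T=1.0):
    """Conventional Simpson IIR integrator
        y[n] = y[n-2] + (T/3) * (x[n] + 4*x[n-1] + x[n-2]),
    i.e. H(z) = (T/3)(1 + 4 z^-1 + z^-2) / (1 - z^-2).  The paper improves
    its high-frequency accuracy by halving the sampling interval, which is
    what introduces the half-sample fractional delay filter."""
    y = np.zeros(len(x))
    for n in range(2, len(x)):
        y[n] = y[n - 2] + (T / 3) * (x[n] + 4 * x[n - 1] + x[n - 2])
    return y
```

    For example, integrating samples of t^2 from t = 0 recovers (nT)^3/3 exactly at even indices, as the Simpson rule guarantees for cubics and below.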
