IEE Proceedings - Vision, Image and Signal Processing

Issue 2 • April 2002

  • New family of lapped biorthogonal transform via lifting steps

    Publication Year: 2002 , Page(s): 91 - 96
    Cited by:  Papers (1)  |  Patents (5)

    By scaling the intermediate discrete cosine transform (DCT) output coefficients of the lapped transform and employing lifting-step implementations of the type-II and type-IV DCTs, a new family of lapped biorthogonal transforms, called the IntLBT, is introduced. When all the floating-point elements of each lifting matrix in the IntLBT are approximated by binary fractions, the IntLBT is implemented by a series of dyadic lifting steps, provides very fast, efficient in-place computation of the transform coefficients, and keeps all internal nodes at finite precision. When each lifting step is implemented with the same nonlinear operations as those used in the well-known integer-to-integer wavelet transform, the IntLBT maps integers to integers and can therefore represent image information losslessly. Applied to lossy image compression, simulation results demonstrate that the IntLBT produces significantly fewer blocking artefacts, a higher peak signal-to-noise ratio, and better visual quality than the DCT. More importantly, the IntLBT's coding performance is approximately the same as, and in some cases even surpasses, that of the much more complex Cohen-Daubechies-Feauveau (CDF) 9/7-tap biorthogonal wavelet with floating-point coefficients.
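
    The abstract does not give the IntLBT's lifting matrices, but the integer-to-integer mechanism it relies on can be illustrated with the simplest lifting pair, the Haar transform: rounding inside each lifting step keeps every internal node an integer while remaining exactly invertible. A minimal Python/numpy sketch:

        import numpy as np

        def haar_lifting_forward(x):
            # Integer-to-integer Haar transform via lifting with rounding:
            # each step adds a rounded function of one channel to the other,
            # so it is exactly invertible on integers.
            x = np.asarray(x, dtype=np.int64)
            s, d = x[0::2].copy(), x[1::2].copy()
            d -= s                          # predict step
            s += np.floor_divide(d, 2)      # update step with rounding
            return s, d

        def haar_lifting_inverse(s, d):
            # Exact inverse: undo the lifting steps in reverse order.
            s = s - np.floor_divide(d, 2)
            d = d + s
            x = np.empty(s.size + d.size, dtype=np.int64)
            x[0::2], x[1::2] = s, d
            return x

        x = np.array([5, 3, 8, 8, 1, 7, 2, 4])
        s, d = haar_lifting_forward(x)
        assert np.array_equal(haar_lifting_inverse(s, d), x)   # lossless round trip

    The same rounding trick applies, step by step, to the dyadic lifting factorisations of the type-II and type-IV DCTs that the IntLBT is built from.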

  • Efficient multilevel successive elimination algorithms for block matching motion estimation

    Publication Year: 2002 , Page(s): 73 - 84
    Cited by:  Papers (6)  |  Patents (5)

    The authors present fast algorithms that reduce the computations of block matching for motion estimation in video coding. The proposed efficient multilevel successive elimination algorithms build on the multilevel successive elimination algorithm (MSEA) and comprise four techniques. The first applies the partial distortion elimination technique to the sum of absolute differences between the sum norms of sub-blocks in the MSEA, reducing the MSEA's computations further. In the second, the sum of absolute differences (SAD) is accumulated adaptively, from large to small values, according to the absolute differences between the pixels of the blocks; partial distortion elimination in the SAD calculation then occurs earlier, so the computations of the MSEA are reduced. This second technique is useful not only with the MSEA but with all kinds of block matching algorithms. The third estimates the elimination level of the MSEA, so the MSEA computations associated with levels below the estimated level can be skipped. The fourth first searches for the motion vector over half-sampled search points, then searches the unsampled points around those tested points where, according to the first-pass results, the motion vector may lie. The motion estimation accuracy of the fourth technique is nearly 100% and the computations are greatly reduced.
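
    The partial distortion elimination idea behind the first two techniques can be sketched independently of the MSEA details: accumulate the SAD a row at a time and abandon a candidate as soon as the running sum exceeds the best match found so far. A Python/numpy sketch (the function names and the full-search driver are illustrative, not the paper's):

        import numpy as np

        def sad_with_pde(block, candidate, best_so_far):
            # Accumulate the SAD row by row and abort once the running sum
            # exceeds the best SAD found so far: this candidate cannot win.
            partial = 0
            for row_b, row_c in zip(block, candidate):
                partial += np.abs(row_b.astype(np.int64) - row_c.astype(np.int64)).sum()
                if partial >= best_so_far:
                    return None             # eliminated early
            return partial

        def full_search(block, ref, search=8):
            # Exhaustive block matching over a +/- search window using PDE;
            # ref is the block's neighbourhood, padded by `search` pixels.
            best, best_mv = np.inf, (0, 0)
            h, w = block.shape
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = ref[search + dy: search + dy + h,
                               search + dx: search + dx + w]
                    sad = sad_with_pde(block, cand, best)
                    if sad is not None:
                        best, best_mv = sad, (dy, dx)
            return best_mv, best

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
        blk = ref[11:27, 6:22]              # true displacement (3, -2)
        print(full_search(blk, ref))        # -> ((3, -2), 0)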

  • New matrix formulation for two-dimensional DCT/IDCT computation and its distributed-memory VLSI implementation

    Publication Year: 2002 , Page(s): 97 - 107
    Cited by:  Papers (4)

    A direct method for the computation of the 2-D DCT/IDCT on a linear-array architecture is presented. The 2-D DCT/IDCT is first converted into its corresponding 1-D DCT/IDCT problem through proper input/output index reordering. Then, a new coefficient matrix factorisation is derived, leading to a cascade of several basic computation blocks. Unlike previously proposed high-speed 2-D N × N DCT/IDCT processors, which usually require intermediate transpose memory and have computation complexity O(N³), the proposed hardware-efficient architecture with distributed memory structure has computation complexity O(N² log₂ N) and requires only log₂ N multipliers. The new pipelinable and scalable 2-D DCT/IDCT processor uses storage elements local to the processing elements and thus requires neither address generation hardware nor global memory-to-array routing.
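
    For contrast with the paper's transpose-free architecture, the conventional row-column baseline it improves on is easy to state: an N × N 2-D DCT factors into 1-D DCTs along the rows and then the columns, with a transpose between the two passes. A small numpy sketch of that baseline (orthonormal type-II DCT; square inputs assumed):

        import numpy as np

        def dct_matrix(n):
            # Orthonormal type-II DCT matrix C, so a 1-D DCT is C @ x.
            k = np.arange(n)[:, None]
            m = np.arange(n)[None, :]
            c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
            c[0, :] = np.sqrt(1.0 / n)
            return c

        def dct2(x):
            # Separable 2-D DCT: 1-D DCTs on the rows, then on the columns.
            # The implicit transpose between the two passes is the memory
            # bottleneck that a transpose-free architecture avoids.
            c = dct_matrix(x.shape[0])
            return c @ x @ c.T

        x = np.random.default_rng(0).standard_normal((8, 8))
        y = dct2(x)
        c = dct_matrix(8)
        assert np.allclose(c.T @ y @ c, x)      # inverse 2-D DCT: C^T Y C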

  • Minimum-span constant modulus array for a smart antenna testbed

    Publication Year: 2002 , Page(s): 120 - 127
    Cited by:  Papers (1)

    The realisation of an efficient algorithm for a real-time smart antenna testbed based on DECT technology is considered. The testbed is built around a multistage constant modulus (CM) array, a blind adaptive beamformer, together with an adaptive signal canceller that removes each captured signal from the input signals. Based on the Wiener model of convergence, a reduced form of the CM array is developed that exploits the reduced rank of the input correlation matrix at the additional stages. Computer simulations confirm the functionality of the algorithm; its convergence and sensitivity to fading are also discussed.
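
    The building block of a constant modulus array is the CMA weight update, which penalises deviation of the beamformer output from unit modulus. A minimal single-stage sketch (the step size, initialisation and toy steering vector are illustrative; the paper's multistage, reduced-rank structure is not reproduced):

        import numpy as np

        def cma_step(w, x, mu=0.01):
            # One CMA(2,2) update: penalise deviation of |y|^2 from 1,
            # where y = w^H x is the beamformer output.
            y = np.vdot(w, x)
            e = (np.abs(y) ** 2 - 1.0) * y
            return w - mu * e.conjugate() * x, y

        rng = np.random.default_rng(1)
        M = 4
        steer = np.exp(1j * np.pi * np.arange(M) * np.sin(0.3))  # toy steering vector
        w = np.zeros(M, complex)
        w[0] = 1.0                                 # common CMA initialisation
        for _ in range(2000):
            s = np.exp(2j * np.pi * rng.random())  # constant-modulus source
            noise = 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
            w, y = cma_step(w, steer * s + noise)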

  • Optimal search in Hough parameter hyperspace for estimation of complex motion in image sequences

    Publication Year: 2002 , Page(s): 63 - 71
    Cited by:  Patents (2)

    The paper contributes to the theory of multiparameter motion estimation using the Hough transform technique; its main feature is the analytic development of closed-form solutions for the optimal estimation strategy. Motion estimation in image sequences using the Hough transform is reviewed. Various motion models are considered, offering a more realistic portrayal of camera and scene motion than the purely translational models adopted by standardised video coding algorithms. Since the computational complexity arising from the use of such sophisticated models is prohibitive, standard optimisation techniques are used to search for minima of the motion estimation error function in the Hough parameter hyperspace. In contrast to previously published work, second-order Taylor expansions of the error function are considered. By taking these second-order terms into account, the optimal iterative step for a gradient search in the Hough parameter hyperspace is determined for all the motion models of interest. For a purely translational model, the analysis is shown to provide an estimate similar to that of the well-known Netravali and Robbins algorithm. For more complex motion models, it is shown that computing the optimal solution involves only modest complexity.
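
    The optimal iterative step obtained from a second-order Taylor expansion is the Newton step: minimising the local quadratic model E(p) + gᵀd + ½dᵀHd over d gives d = −H⁻¹g. A small sketch on a toy quadratic error surface, where the quadratic model is exact and one step suffices:

        import numpy as np

        def newton_step(grad, hess, p):
            # Minimise the local quadratic model E(p) + g.d + 0.5 d'Hd,
            # giving the step d = -H^{-1} g.
            return p - np.linalg.solve(hess(p), grad(p))

        # Toy quadratic error surface E(p) = 0.5 p'Ap - b'p: the
        # second-order model is exact, so one step reaches the minimum.
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([1.0, -1.0])
        p = newton_step(lambda p: A @ p - b, lambda p: A, np.zeros(2))
        assert np.allclose(A @ p, b)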

  • Offline signature verification with generated training samples

    Publication Year: 2002 , Page(s): 85 - 90
    Cited by:  Papers (13)

    It is often difficult to obtain enough signature samples to train a signature verification system. An elastic matching method for generating additional samples is proposed to expand the limited training set so that a better estimate of the statistical variations can be obtained. The method differs from existing ones in that it is better suited to the generation of signature samples. In addition, a set of peripheral features, useful for describing both the internal and external structure of signatures, is employed to represent the signatures in the verification process. Results showed that the additional samples generated by the proposed method reduced the error rate from 15.6% to 11.4%. The method also outperformed an existing one that estimates the class covariance matrix through optimisation techniques, and the results confirmed that the peripheral features are useful for signature verification.
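
    The paper's elastic matching generator is not specified in the abstract; as a generic stand-in for perturbation-based sample synthesis, the sketch below warps a signature image with a smooth random displacement field (all parameter names are illustrative):

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def elastic_sample(img, alpha=8.0, sigma=6.0, seed=None):
            # Warp an image with a smooth random displacement field:
            # alpha scales the displacement, sigma controls its smoothness.
            rng = np.random.default_rng(seed)
            h, w = img.shape
            dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
            dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
            yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
            coords = np.vstack([(yy + dy).ravel(), (xx + dx).ravel()])
            return map_coordinates(img, coords, order=1).reshape(h, w)

        img = np.zeros((64, 256))
        img[30:34, 20:200] = 1.0                 # toy "signature stroke"
        extra = [elastic_sample(img, seed=i) for i in range(10)]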

  • Toward secure public-key blockwise fragile authentication watermarking

    Publication Year: 2002 , Page(s): 57 - 62
    Cited by:  Papers (19)  |  Patents (27)

    The authors describe some weaknesses of public-key blockwise fragile authentication watermarking schemes and the means to make them secure. Wong's (1997) original algorithm, as well as a number of its variants, is not secure against a simple block cut-and-paste or the well-known birthday attack. To make such schemes secure, proposals have been made to let the signature of each block depend on the contents of its neighbouring blocks. The authors attempt to maximise the change-localisation resolution using only one dependency per block, with a scheme they call hash block chaining version 1 (HBC1). They then show that HBC1, like any scheme dependent on neighbouring content, is susceptible to another forgery technique, which they name the transplantation attack, and they present a new kind of birthday attack that can be mounted effectively against HBC1. To thwart these attacks, they propose using a nondeterministic digital signature together with a signature-dependent scheme (HBC2). Finally, they discuss the advantages of using discrete-logarithm signatures instead of RSA for watermarking.
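
    The core of neighbour-dependent blockwise authentication can be sketched with plain hashes: chaining each block's digest to its predecessor makes an isolated cut-and-paste detectable. The sketch below shows only the chaining idea; the actual HBC1/HBC2 constructions, the public-key signing step, and the attacks discussed in the paper differ in detail:

        import hashlib

        def chained_block_digests(blocks):
            # Each block's digest covers its own content, its index and the
            # previous block's content, so moving one block breaks the chain.
            digests, prev = [], b""
            for i, blk in enumerate(blocks):
                digests.append(hashlib.sha256(i.to_bytes(4, "big") + prev + blk).digest())
                prev = blk
            return digests

        d1 = chained_block_digests([b"block-a", b"block-b", b"block-c"])
        d2 = chained_block_digests([b"block-b", b"block-a", b"block-c"])
        assert d1 != d2                          # a block swap is detected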

  • Robust speech features based on wavelet transform with application to speaker identification

    Publication Year: 2002 , Page(s): 108 - 114
    Cited by:  Papers (7)

    An effective and robust speech feature extraction method is presented. Based on the time-frequency multiresolution property of the wavelet transform, the input speech signal is decomposed into various frequency channels. To capture the characteristics of an individual speaker, the linear predictive cepstral coefficients of the approximation channel and the entropy value of the detail channel are calculated at each decomposition level. In addition, an adaptive thresholding technique is applied at each lower resolution to remove the influence of noise interference. Experimental results show that this mechanism not only effectively reduces the influence of noise but also improves recognition performance. The proposed method is evaluated on the MAT telephone speech database for text-independent speaker identification using a group vector quantisation identifier. Several popular existing methods are evaluated for comparison, and the results show that the proposed feature extraction algorithm is more effective and robust. The performance of the proposed method remains very satisfactory even in a low-SNR environment corrupted by Gaussian white noise.
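
    The per-level feature extraction can be sketched with a hand-rolled one-level Haar DWT: split the signal into approximation and detail channels, threshold the detail channel adaptively, and take the entropy of what remains. The LPC cepstral coefficients of the approximation channel are omitted here, and the universal soft threshold below is one common adaptive choice, not necessarily the paper's:

        import numpy as np

        def haar_dwt(x):
            # One level of the orthonormal Haar DWT: approximation + detail.
            x = x[: len(x) // 2 * 2]
            a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            return a, d

        def soft_threshold(d):
            # Universal soft threshold with a median-based noise estimate.
            sigma = np.median(np.abs(d)) / 0.6745
            t = sigma * np.sqrt(2.0 * np.log(len(d)))
            return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

        def entropy(d, bins=32):
            # Shannon entropy of the detail-coefficient histogram.
            p, _ = np.histogram(d, bins=bins)
            p = p[p > 0] / p.sum()
            return float(-(p * np.log2(p)).sum())

        x = np.random.default_rng(2).standard_normal(1024)   # stand-in speech frame
        features = []
        for _ in range(3):                                   # three decomposition levels
            x, d = haar_dwt(x)
            features.append(entropy(soft_threshold(d)))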

  • Performance analysis of adaptive filter structure employing wavelet and sparse subfilters

    Publication Year: 2002 , Page(s): 115 - 119
    Cited by:  Papers (8)

    A new structure for adaptive filtering that employs a wavelet transform and sparse subfilters is presented. A description of the algorithm is provided, including particulars such as the delay introduced in the output signal and the minimum number of adaptive coefficients required. A theoretical analysis of the convergence speed of the adaptation algorithm is developed, which allows an appropriate wavelet transform to be chosen according to the input signal statistics. Simulations with coloured input signals illustrate the convergence behaviour of the proposed structure with different wavelets and verify the theoretical analysis.
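
    The general transform-domain adaptive filtering idea behind such a structure can be sketched as follows: pass the tap-delay vector through an orthonormal transform and adapt each transformed coefficient with a power-normalised LMS update, which is what accelerates convergence for coloured inputs. The sketch uses a 4-point Haar matrix and omits the paper's sparse subfilters:

        import numpy as np

        def transform_domain_lms(x, desired, T, mu=0.5, eps=1e-8):
            # Map the tap-delay vector through an orthonormal transform T and
            # adapt each transformed coefficient with power-normalised LMS.
            n = T.shape[0]
            w = np.zeros(n)
            power = np.full(n, eps)
            for k in range(n - 1, len(x)):
                u = T @ x[k - n + 1: k + 1][::-1]    # transformed tap vector
                power = 0.9 * power + 0.1 * u * u    # running per-band power
                e = desired[k] - w @ u
                w += mu * e * u / power              # normalised update
            return w

        T = np.array([[1.,  1.,  1.,  1.],
                      [1.,  1., -1., -1.],
                      [1., -1.,  0.,  0.],
                      [0.,  0.,  1., -1.]])
        T /= np.linalg.norm(T, axis=1, keepdims=True)   # orthonormal 4-point Haar
        rng = np.random.default_rng(3)
        x = rng.standard_normal(5000)
        d = np.convolve(x, [0.6, -0.3, 0.1, 0.05])[: len(x)]  # unknown system output
        w = transform_domain_lms(x, d, T)    # w identifies the system in the T domain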
