
IEE Proceedings - Vision, Image and Signal Processing

Issue 6 • December 1995


Displaying Results 1 - 9 of 9
  • Modified forward-backward overdetermined Prony method and its application in modelling heart sounds

    Page(s): 375 - 380

    Prony's method is found to be a very effective method for the analysis-synthesis of transient data. However, straightforward application of this method can lead to poor performance, especially for short and noisy data records. The authors present a new modified forward-backward overdetermined Prony method (MFBPM) and its application to the analysis of the first and second heart sounds. The accuracy of the method is measured using both the cross-correlation and the normalised root-mean-square error (NRMSE) between a real signal and a synthetic one. Results from more than 80 different subjects show that the MFBPM is highly stable and gives very good performance, with an average cross-correlation coefficient of 99.62%. Comparison of the results based on the NRMSE criterion shows that the MFBPM is more precise than the modified backward Prony method (MBPM), with an accuracy improvement of up to 10%, and of up to 20% when compared with the conventional forward-backward Prony method (FBPM). Furthermore, a new method for dynamic estimation of the model order is proposed for the case of heart sounds, based on the subset of synthesised heart sounds that best approximates the observed data under the NRMSE criterion.

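    As a rough illustration of the analysis-synthesis loop described in the abstract, the sketch below fits a sum of damped exponentials with a generic overdetermined forward-backward linear-prediction (Prony-type) step and scores the fit with the NRMSE. The function names are hypothetical, and the authors' MFBPM refinements and dynamic model-order selection are not reproduced.

      import numpy as np

      def prony_fb(x, p):
          """Generic overdetermined forward-backward Prony-type fit of x[n] as a
          sum of p complex exponentials A_k * z_k**n (a sketch, not the MFBPM)."""
          x = np.asarray(x, dtype=float)
          N = len(x)
          # Stack forward and backward prediction equations into one
          # overdetermined least-squares system for the coefficients a[1..p].
          F = np.array([x[n - 1::-1][:p] for n in range(p, N)])
          B = np.array([x[::-1][n - 1::-1][:p] for n in range(p, N)])
          A = np.vstack([F, B])
          b = -np.concatenate([x[p:], x[::-1][p:]])
          a, *_ = np.linalg.lstsq(A, b, rcond=None)
          # Poles are the roots of z**p + a1*z**(p-1) + ... + ap.
          poles = np.roots(np.concatenate(([1.0], a)))
          # Amplitudes from a Vandermonde least-squares fit to the data.
          V = np.vander(poles, N, increasing=True).T        # V[n, k] = z_k**n
          amps, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
          return poles, amps

      def synthesise(poles, amps, N):
          n = np.arange(N)[:, None]
          return np.real(np.sum(amps * poles[None, :] ** n, axis=1))

      def nrmse(real, synthetic):
          """Normalised root-mean-square error between real and synthetic signals."""
          return np.sqrt(np.mean((real - synthetic) ** 2) / np.mean(real ** 2))
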
  • Channel-effect-cancellation method for speech recognition over telephone systems

    Page(s): 395 - 399

    The performance degradation of speech recognition over telephone systems is due to additive noise and the filtering effect of telephone channels. The authors propose a probabilistic technique to overcome the filtering effect in telephone systems. A set of reference filters, represented in terms of the cepstrum, is generated by clustering the cepstra of inverse telephone channels. A channel-effect-cancellation filter is then approximated by a convex combination of these reference filters. The convex combination coefficients are automatically determined according to the accumulated observation probabilities when a test utterance passes through the reference filters. Experiments on speech transmitted through telephone channels show that the channel effect can be mostly cancelled and the recognition performance significantly improved.

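    A minimal sketch of the convex-combination step, assuming the reference inverse-channel cepstra and the accumulated per-filter log-likelihoods are already available. Turning the accumulated probabilities into convex weights by normalised exponentiation is an assumption made here for illustration, not necessarily the authors' exact rule.

      import numpy as np

      def combine_reference_filters(ref_cepstra, log_likelihoods):
          """ref_cepstra: (K, D) cepstra of K reference inverse-channel filters.
          log_likelihoods: (K,) accumulated observation log-probabilities from
          passing the test utterance through each reference filter.
          Returns the cepstrum of the channel-effect-cancellation filter."""
          ll = np.asarray(log_likelihoods, dtype=float)
          w = np.exp(ll - ll.max())           # softmax-style convex weights (assumed)
          w /= w.sum()                        # weights are >= 0 and sum to 1
          return w @ np.asarray(ref_cepstra)  # convex combination of cepstra

      def cancel_channel(frame_cepstra, cancel_cepstrum):
          """In the cepstral domain filtering is additive, so applying the
          cancellation filter amounts to adding its cepstrum to every frame."""
          return np.asarray(frame_cepstra) + cancel_cepstrum
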
  • Index

  • New line-based thinning algorithm

    Page(s): 351 - 358

    Thinning algorithms can be classified into two general types: sequential and parallel. Most of them peel off the boundaries until the objects have been reduced to thin lines. The process is performed iteratively, the number of iterations being approximately equal to half the maximum line width of the object. Several sequential boundary-based algorithms have been proposed, but they have limitations. A new line-based algorithm is presented. The thinning element of the algorithm is a line and not, as is more common, a point. The algorithm is based on a new line thinning model and is applicable to objects of general shape. The line-based thinning algorithm gives the freedom of choosing the deletion width at each iteration, and thus significantly reduces the number of iterations. The selection of the deletion width is a trade-off between speed and quality of skeletons. Experimental results are used to compare this new algorithm with other sequential algorithms and their relative performances are assessed. The new algorithm is shown to be computationally more efficient.

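    For context, the sketch below implements a standard point-based boundary-peeling thinning (the Zhang-Suen scheme), which removes roughly one pixel layer per pass. It is not the authors' line-based algorithm, but it illustrates the iterative peeling that the abstract contrasts with choosing a larger deletion width.

      import numpy as np

      def zhang_suen_thinning(img):
          """img: 2-D array of 0/1 values. Returns the thinned 0/1 skeleton."""
          img = np.asarray(img, dtype=np.uint8).copy()
          changed = True
          while changed:
              changed = False
              for step in range(2):                      # two sub-iterations per pass
                  to_delete = []
                  for y in range(1, img.shape[0] - 1):
                      for x in range(1, img.shape[1] - 1):
                          if img[y, x] != 1:
                              continue
                          p2, p3, p4 = img[y-1, x], img[y-1, x+1], img[y, x+1]
                          p5, p6, p7 = img[y+1, x+1], img[y+1, x], img[y+1, x-1]
                          p8, p9 = img[y, x-1], img[y-1, x-1]
                          ring = [p2, p3, p4, p5, p6, p7, p8, p9]
                          b = sum(ring)                  # number of object neighbours
                          a = sum((ring[i] == 0 and ring[(i + 1) % 8] == 1)
                                  for i in range(8))     # 0->1 transitions around p1
                          if 2 <= b <= 6 and a == 1:
                              if step == 0 and p2*p4*p6 == 0 and p4*p6*p8 == 0:
                                  to_delete.append((y, x))
                              elif step == 1 and p2*p4*p8 == 0 and p2*p6*p8 == 0:
                                  to_delete.append((y, x))
                  for y, x in to_delete:
                      img[y, x] = 0
                  changed = changed or bool(to_delete)
          return img
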
  • Continuous-time envelope constrained filter design via orthonormal filters

    Page(s): 389 - 394

    The envelope constrained (EC) filtering problem is concerned with designing a filter which minimises the gain to input noise while its response to a given signal fits into a prescribed envelope. This problem has been formulated as a constrained optimisation problem in Hilbert space. By restricting these filters to the span of a finite orthonormal set, the EC filtering problem can be posed as a finite-dimensional optimisation problem with a continuum of constraints. The constrained problem is approximated by an unconstrained problem, which is then solved by descent-direction-based algorithms. It is shown that these algorithms converge globally, and that one in particular has a quadratic rate of convergence. Numerical examples using the orthonormal Laguerre series approximation are studied.

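    A minimal sketch of the penalty-function idea under stated assumptions: an orthonormal set of impulse responses phi (for example discretised Laguerre functions), the prescribed input signal s, and envelope bounds lo/hi sampled on a common grid. Plain gradient descent stands in for the descent-direction algorithms analysed in the paper.

      import numpy as np

      def ec_filter_coeffs(phi, s, lo, hi, mu=100.0, steps=5000, lr=1e-3):
          """phi: (M, L) orthonormal basis impulse responses (assumed given);
          s: input signal; lo, hi: (T,) envelope bounds on the response to s.
          Minimises the noise gain ||x||**2 plus a quadratic penalty on any
          envelope violation, using plain gradient descent (sketch only)."""
          phi = np.asarray(phi, dtype=float)
          # Response of each basis filter to the prescribed input signal.
          G = np.array([np.convolve(f, s)[:len(lo)] for f in phi]).T   # (T, M)
          x = np.zeros(phi.shape[0])
          for _ in range(steps):
              r = G @ x
              over = np.maximum(r - hi, 0.0)       # upper-envelope violations
              under = np.maximum(lo - r, 0.0)      # lower-envelope violations
              x -= lr * (2 * x + 2 * mu * (G.T @ (over - under)))
          return x     # the designed impulse response is x @ phi
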
  • Building irregular pyramids by dual-graph contraction

    Page(s): 366 - 374

    Many image analysis tasks lead to, or make use of, graph structures that are related through the analysis process to the planar layout of a digital image. The author presents a theory that allows the building of different types of hierarchies on top of such image graphs. The theory is based on the properties of a pair of dual image graphs that the reduction process should preserve, e.g. the structure of a particular input graph. The reduction process is controlled by decimation parameters, i.e. a selected subset of vertices, called survivors, and a selected subset of the graph's edges, the parent-child connections. It is formally shown that two phases of contractions transform a dual image graph into a dual image graph built by the surviving vertices. Phase one operates on the original (neighbourhood) graph and eliminates all non-surviving vertices. Phase two operates on the dual (face) graph and eliminates all degenerate faces that have been created in phase one. The resulting graph preserves the structure of the survivors; it is minimal and unique with respect to the selected decimation parameters. The result is compared with two modified specifications already in use for building stochastic and adaptive irregular pyramids.

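    An illustrative sketch of the contraction data flow only: non-surviving vertices are merged into their surviving parents along the parent-child connections, and the self-loops and parallel edges that arise are simply collapsed. In the paper, deciding which of those edges are genuinely degenerate is done on the dual (face) graph, which is not modelled here.

      def contract(edges, parent):
          """edges: iterable of (u, v) neighbourhood-graph edges.
          parent: dict mapping every vertex to its surviving parent (survivors
          map to themselves); (v, parent[v]) pairs are the parent-child links.
          Returns the edge set of the contracted graph on the survivors."""
          def survivor(v):
              while parent[v] != v:          # follow parent-child links upwards
                  v = parent[v]
              return v
          contracted = set()
          for u, v in edges:
              su, sv = survivor(u), survivor(v)
              if su != sv:                   # drop self-loops created by contraction
                  contracted.add((min(su, sv), max(su, sv)))  # merge parallel edges
          return contracted
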
  • New transform using the Mersenne numbers

    Page(s): 381 - 388

    Number theoretic transforms (NTTs) find applications in the calculation of convolutions and correlations. They can perform these calculations without introducing additional noise in the processing due to rounding or truncation. Among all NTTs, Fermat and Mersenne number transforms have been given particular attention. However, the main drawbacks of these transforms are the inconvenient word length for Fermat number transforms and the lack of fast algorithms for Mersenne number transforms. The authors introduce a new real transform defined modulo Mersenne numbers with a long transform length equal to a power of two. This is achieved by dropping the condition that α should be ±2 and using a new definition for NTTs that departs from the usual Fourier-like definition. The new transform is suitable for fast algorithms. It has the cyclic convolution property and hence can be applied to the calculation of convolutions and correlations. The transform is extended to the two-dimensional case and then generalised to the multidimensional case. Examples are given for the one-dimensional and two-dimensional cases.

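    A small sketch of the arithmetic setting rather than the authors' transform: cyclic convolution computed exactly modulo a Mersenne number 2^p - 1, where the modular reduction needs only shifts and adds and no rounding noise is introduced.

      def mod_mersenne(x, p):
          """Reduce a non-negative integer x modulo 2**p - 1 using only shifts and adds."""
          m = (1 << p) - 1
          while x > m:
              x = (x & m) + (x >> p)     # 2**p is congruent to 1 modulo 2**p - 1
          return 0 if x == m else x

      def cyclic_convolution_mod(a, b, p):
          """Exact length-N cyclic convolution of non-negative integer sequences
          a and b, with every output reduced modulo the Mersenne number 2**p - 1."""
          n = len(a)
          return [mod_mersenne(sum(a[i] * b[(k - i) % n] for i in range(n)), p)
                  for k in range(n)]
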
  • Efficient representation of object shape for silhouette intersection

    Page(s): 359 - 365

    A new shape representation, the radial intersection set (RIS), is presented. The RIS is an object-centred model in which 2-D and 3-D boundaries are represented via their intersection with radial lines from some specific origin. The RIS representation allows efficient 3-D reconstruction using silhouette intersection from an arbitrary number of 2-D perspective views. The relationship between the visual hull (Laurentini, 1994) of the 3-D object and the silhouettes it may generate is defined as a set intersection operator. This operator allows the direct generation of 3-D RIS models from silhouettes. The RIS method is shown to compare favourably, in terms of both speed and storage, with existing octree techniques. Examples of images rendered from RIS models are presented. These show high visual fidelity.

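    A minimal 2-D sketch of the radial-intersection idea under simplifying assumptions: each model stores, for a shared origin and a fixed set of ray directions, the radial intervals that lie inside the object, so intersecting two models reduces to a per-ray interval intersection. The perspective projection geometry used for 3-D reconstruction from silhouettes is not shown.

      def intersect_intervals(a, b):
          """a, b: sorted, disjoint lists of (r_near, r_far) intervals on one ray."""
          out, i, j = [], 0, 0
          while i < len(a) and j < len(b):
              lo = max(a[i][0], b[j][0])
              hi = min(a[i][1], b[j][1])
              if lo < hi:
                  out.append((lo, hi))      # overlapping part survives the intersection
              if a[i][1] < b[j][1]:
                  i += 1
              else:
                  j += 1
          return out

      def intersect_ris(model_a, model_b):
          """model_*: list, over ray directions, of interval lists; both models are
          assumed to share the same origin and the same set of ray directions."""
          return [intersect_intervals(ra, rb) for ra, rb in zip(model_a, model_b)]
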
  • Two-dimensional median filter algorithm for parallel reconfigurable computers

    Page(s): 345 - 350

    A fast and flexible median algorithm is presented which scales well with window size. It is based on the use of image histogramming. Two accumulator arrays are used to determine the median value of a discrete sequence of numbers. Speed-up factors of 3 and 4 are achieved over conventional histogramming methods (those using single accumulator arrays). Two approaches have been implemented with optimisation in mind. Worst- and best-case machine performance boundaries are defined. Timings are given for both types of parallel architecture.

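    A sketch of a conventional histogram-based running median for 8-bit images, i.e. the single-accumulator baseline the paper improves on; the two-accumulator scheme and the mapping onto parallel reconfigurable hardware are not reproduced here.

      import numpy as np

      def median_filter(img, w):
          """img: 2-D uint8 array; w: odd window size. Border pixels are left unchanged."""
          img = np.asarray(img)
          h, wd = img.shape
          r = w // 2
          half = (w * w) // 2              # samples at or below the median
          out = img.copy()
          for y in range(r, h - r):
              hist = np.zeros(256, dtype=int)
              # Build the histogram of the first window on this row.
              for yy in range(y - r, y + r + 1):
                  for xx in range(w):
                      hist[img[yy, xx]] += 1
              out[y, r] = _median_from_hist(hist, half)
              for x in range(r + 1, wd - r):
                  # Slide the window: remove the leaving column, add the entering one.
                  for yy in range(y - r, y + r + 1):
                      hist[img[yy, x - r - 1]] -= 1
                      hist[img[yy, x + r]] += 1
                  out[y, x] = _median_from_hist(hist, half)
          return out

      def _median_from_hist(hist, half):
          count = 0
          for v in range(256):
              count += hist[v]
              if count > half:
                  return v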