
IEE Proceedings - Vision, Image and Signal Processing

Issue 5 • Oct 1995

Contents (12 articles)
  • Spectrum shaping in N-channel QPSK-OFDM systems

    Publication Year: 1995 , Page(s): 333 - 338
    Cited by:  Papers (2)
    PDF (412 KB)

    A compact time-domain treatment of a complete spectrum-shaped N-channel orthogonal FDM system is presented, enabling practical DSP algorithms for modulation and demodulation to be clearly identified. The resulting DSP architecture is an alternative to previously described OQPSK-OFDM systems and directly provides the two complex samples per symbol required for symbol timing recovery. The paper also discusses parameter selection for the polyphase shaping filters and compares simulation results of power spectral density for shaped and unshaped systems. Simulation shows that shaped systems can closely approach the ideal transmission spectrum even for modest values of N.

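The IFFT/FFT pair at the heart of any such OFDM system can be sketched as follows. This is a minimal unshaped baseline only; the paper's polyphase spectrum-shaping filters and parameter choices are not reproduced, and the block size N = 16 is an arbitrary illustrative value.

```python
import numpy as np

# Hypothetical sketch: one block of N-channel QPSK-OFDM, modulated with
# an IFFT and demodulated with an FFT (the "unshaped" baseline system).
rng = np.random.default_rng(0)
N = 16  # number of sub-channels (a modest N, as the abstract discusses)

# One QPSK symbol per sub-channel: (+-1 +-1j) / sqrt(2)
bits = rng.integers(0, 2, size=(N, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

tx = np.fft.ifft(symbols) * np.sqrt(N)  # OFDM modulation (one block)
rx = np.fft.fft(tx) / np.sqrt(N)        # OFDM demodulation

assert np.allclose(rx, symbols)  # symbols recovered exactly over an ideal channel
```

The sqrt(N) scaling makes the transform pair unitary, so symbol energy is preserved through modulation and demodulation.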
  • Very low-bit-rate segmentation-based video coding using contour and texture prediction

    Publication Year: 1995 , Page(s): 253 - 261
    Cited by:  Patents (4)
    PDF (1564 KB)

    The most efficient video coding standard for low bit rates (around 64 kb/s) is the H.261 algorithm recommended by ITU-TS. However, in certain applications, such as mobile audiovisual communications and videophony over the PSTN, the available transmission bandwidth is very limited, so codecs working at very low bit rates are required. The paper presents a segmentation-based video coding algorithm that can work at rates as low as 10 kb/s. A novel representation of the contour information using a number of control points is proposed to estimate contour shapes and locations from the previous frame using the motion information. The texture parameters are also predicted, and only the residual values are entropy coded. In addition, two novel postprocessing techniques, for edge-profile smoothing and jagged-edge rectification, are described.

  • Two-stage decomposition of the DCT

    Publication Year: 1995 , Page(s): 319 - 326
    Cited by:  Papers (1)
    PDF (560 KB)

    The DCT kernel matrix is first decomposed into a block diagonal structure (BDS) with diagonal skew-circular correlated (SCCR) sub-matrices of length 2, 4, ..., N/2 by coset decomposition, and each of these independent SCCR sub-matrices is then further split into two stages by decomposing its elements into a linear combination of other simple basis functions. The preprocessing stage can be treated as a new transform that approximates the DCT and is suitable for image compression; various preprocessing stages are obtained by choosing various basis functions. The postprocessing stage converts the preprocessing stage back to the DCT. Both stages are BDSs containing independent diagonal SCCR sub-matrices, so fast, parallel computation of both is feasible using methods such as a semisystolic array or a distributed arithmetic implementation.

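For reference, the object being factorised is the orthonormal DCT-II kernel matrix. The sketch below constructs it directly and checks its orthonormality; the paper's BDS/SCCR decomposition itself is not reproduced here.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II kernel matrix (the matrix the paper decomposes)."""
    n = np.arange(N)
    k = n[:, None]
    D = np.cos(np.pi * k * (2 * n + 1) / (2 * N))
    D[0, :] *= 1 / np.sqrt(2)   # DC row scaling for orthonormality
    return D * np.sqrt(2 / N)

D = dct_matrix(8)
# Orthonormality: D @ D.T = I, so the inverse DCT is simply the transpose.
assert np.allclose(D @ D.T, np.eye(8))
```

Any valid two-stage factorisation of D must reproduce this matrix exactly when the pre- and postprocessing stages are multiplied together.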
  • Detection of curved edges at subpixel accuracy using deformable models

    Publication Year: 1995 , Page(s): 304 - 312
    Cited by:  Papers (5)
    PDF (1004 KB)

    One approach to the detection of curves at subpixel accuracy involves the reconstruction of such features from subpixel edge data points. A new technique is presented for reconstructing and segmenting curves with subpixel accuracy using deformable models. A curve is represented as a set of interconnected Hermite splines forming a snake, generated from the subpixel edge information, that minimises the global energy functional integral over the set. While previous work on the minimisation was mostly based on the Euler-Lagrange transformation, the authors use the finite element method to solve the energy minimisation equation. The advantages of this approach over the Euler-Lagrange transformation are that the method is straightforward, leads to positive m-diagonal symmetric matrices, and can cope with irregular geometries such as junctions and corners. The energy functional integral solved using this method can also be used to segment the features by searching for the locations of the maxima of the first derivative of the energy over the elementary curve set.

  • Stereo calibration from correspondences of OTV projections

    Publication Year: 1995 , Page(s): 289 - 296
    Cited by:  Papers (3)  |  Patents (3)
    PDF (868 KB)

    Stereo images have to be calibrated before stereo vision can recover three-dimensional information about the imaged scene. Position constraints over image point correspondences are traditionally used to solve the calibration problem. A method is described that instead uses angle constraints over correspondences of a particular type of image feature: the projections of orthogonal trihedral vertices (OTV). Computation of the rotation matrix and the translation vector is separable, and the method has a closed-form solution. It requires correspondences of only two vertex projections, at minimum, to recover all the transformation parameters that are recoverable from a stereo image pair. Extensive experimental results, including results on real images, show that using angle constraints is generally more accurate than using position constraints alone.

  • Speaker recognition using hidden Markov models, dynamic time warping and vector quantisation

    Publication Year: 1995 , Page(s): 313 - 318
    Cited by:  Papers (12)  |  Patents (2)
    PDF (620 KB)

    The authors evaluate continuous density hidden Markov models (CDHMMs), dynamic time warping (DTW) and distortion-based vector quantisation (VQ) for speaker recognition, emphasising the performance of each model structure across incremental amounts of training data. Text-independent (TI) experiments are performed with VQ and CDHMMs, and text-dependent (TD) experiments are performed with DTW, VQ and CDHMMs. For TI speaker recognition, VQ performs better than an equivalent CDHMM with one training version, but is outperformed by the CDHMM when trained with ten training versions. For TD experiments, DTW outperforms VQ and CDHMMs for sparse amounts of training data, but with more data the performance of the models becomes indistinguishable. The performance of the TD procedures is consistently superior to TI, which is attributed to subdividing the speaker recognition problem into smaller speaker-word problems. It is also shown that there is a large variation in performance across the different digits, and it is concluded that digit zero is the best digit for speaker discrimination.

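Of the three methods compared, DTW is the simplest to sketch. Below is a minimal dynamic-programming implementation; the one-dimensional toy sequences and the Euclidean local distance are illustrative assumptions, not the paper's features.

```python
import numpy as np

def dtw(a, b):
    """DTW alignment cost between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local distance
            # Best of the three allowed predecessor moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

ref = np.array([[0.0], [1.0], [2.0], [3.0]])
probe = np.array([[0.0], [1.0], [1.0], [2.0], [3.0]])  # same "word", stretched
assert dtw(ref, probe) == 0.0  # temporal stretching is absorbed by the warp
```

This time-warping invariance is what makes DTW effective with sparse training data in the TD experiments, where each reference template is a single utterance.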
  • Novel detection of conics using 2-D Hough planes

    Publication Year: 1995 , Page(s): 262 - 270
    Cited by:  Papers (3)
    PDF (1008 KB)

    The authors present a new approach to the use of the Hough transform for the detection of ellipses in a 2-D image. In the proposed algorithm, the conventional 5-D Hough voting space is replaced by four 2-D Hough planes, which require only 90 kbytes of memory for a 384×256 image. One of the main differences between the proposed transform and other techniques is the way feature points are extracted from the image in question. For the accumulation process in the Hough domain, an inherent property of the suggested algorithm is its capability to perform verification. Experimental results on real and synthetic images show a significant improvement in recognition compared with other algorithms. Furthermore, the proposed algorithm can detect both circular and elliptical objects concurrently.

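The memory argument rests on keeping each accumulator two-dimensional. As a much-simplified illustration (a single 2-D plane voting for circle centres at a known radius, not the paper's four-plane ellipse scheme):

```python
import numpy as np

def hough_circle_centres(edge_points, radius, shape, n_theta=360):
    """Vote for circle centres in a single 2-D accumulator plane."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        # Each edge point votes for every centre at distance `radius`
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < shape[0]) & (cy >= 0) & (cy < shape[1])
        np.add.at(acc, (cx[ok], cy[ok]), 1)  # unbuffered accumulation
    return acc

# Synthetic circle: centre (50, 50), radius 20
t = np.linspace(0.0, 2 * np.pi, 100, endpoint=False)
pts = np.column_stack([50 + 20 * np.cos(t), 50 + 20 * np.sin(t)])
acc = hough_circle_centres(pts, 20.0, (100, 100))
assert acc[50, 50] == acc.max()  # the peak sits at the true centre
```

A 100×100 plane of counters is tiny compared with a 5-D parameter space, which is the essence of the memory saving the abstract quotes.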
  • Thresholding based on histogram approximation

    Publication Year: 1995 , Page(s): 271 - 279
    Cited by:  Papers (9)  |  Patents (2)
    PDF (972 KB)

    The authors propose two automatic threshold-selection schemes based on functional approximation of the histogram. The first method minimises the sum of square errors; the second minimises the variance of the approximated histogram. Experimental results show that, on average, the latter scheme gives better results than the former at a small extra computational cost. A 'goodness' measure is proposed to quantify the effectiveness of the two schemes and to compare them against the entropy-based and moment-based approaches.

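The paper's exact functional-approximation schemes are in the full text; as a flavour of histogram-driven threshold selection, the sketch below uses the closely related classical criterion of minimising the weighted within-class variance over all candidate thresholds. The synthetic bimodal histogram is an illustrative assumption.

```python
import numpy as np

def best_threshold(hist):
    """Threshold minimising the total within-class scatter of grey levels.
    (A classical criterion, not the paper's functional-approximation scheme.)"""
    levels = np.arange(len(hist), dtype=float)
    best_t, best_cost = None, np.inf
    for t in range(1, len(hist)):
        cost = 0.0
        for h, g in ((hist[:t], levels[:t]), (hist[t:], levels[t:])):
            n = h.sum()
            if n == 0:
                continue
            mu = (h * g).sum() / n              # class mean grey level
            cost += (h * (g - mu) ** 2).sum()   # class scatter
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

hist = np.zeros(256)
hist[40:60] = 10.0    # dark mode
hist[180:220] = 10.0  # bright mode
t = best_threshold(hist)
assert 60 <= t <= 180  # the threshold falls in the valley between the modes
```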
  • Fast pipelined CORDIC-based adaptive lattice predictor: algorithms and architecture

    Publication Year: 1995 , Page(s): 339 - 344
    Cited by:  Papers (5)
    PDF (492 KB)

    The authors present a novel CORDIC-based adaptive algorithm and a pipelined architecture for an unnormalised lattice prediction filter. Previously, they presented a CORDIC-based adaptive lattice filtering (CALF) algorithm for normalised lattice filters, featuring a sign-sign direct (rotation) angle updating scheme (Hu and Liao, 1992). Here they consider a delayed CALF (DeCALF) algorithm in which the rotation angle is updated based on 'delayed' prediction errors. In doing so, they are able to develop a fully pipelined implementation of DeCALF that achieves a B-fold increase in throughput rate, where B is the number of CORDIC iterations (stages). This is accomplished with insignificant hardware overhead and minor degradation in parameter-tracking performance.

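The primitive being pipelined is the CORDIC rotation itself: B micro-rotations, each needing only shifts and adds in hardware, which is why one micro-rotation maps naturally onto one pipeline stage. A sketch of the basic (non-adaptive) rotation mode, using floating point in place of fixed-point shift-add arithmetic:

```python
import numpy as np

def cordic_rotate(x, y, angle, B=16):
    """Rotate (x, y) by `angle` using B shift-and-add micro-rotations."""
    # Constant gain of the B micro-rotations, corrected once at the end.
    K = np.prod(1.0 / np.sqrt(1.0 + 2.0 ** (-2.0 * np.arange(B))))
    z = angle
    for i in range(B):
        d = 1.0 if z >= 0 else -1.0  # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * np.arctan(2.0 ** -i)
    return x * K, y * K

x, y = cordic_rotate(1.0, 0.0, np.pi / 6)
assert abs(x - np.cos(np.pi / 6)) < 1e-4
assert abs(y - np.sin(np.pi / 6)) < 1e-4
```

Each iteration halves the micro-rotation angle, so the residual angle error after B stages is bounded by roughly 2^(1-B) radians.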
  • Linear adaptive decorrelator for signal separation

    Publication Year: 1995 , Page(s): 327 - 332
    Cited by:  Papers (2)
    PDF (504 KB)

    Signal separation is a problem encountered in many practical applications. A novel technique is described that can guarantee signal separation by output decorrelation. In particular, it is shown that a simple transformation of the variables enables the nonlinear set of equations to be solved efficiently using the standard least squares technique. Moreover, the uniqueness of the solution is analytically determined, and the system is shown to guarantee separation. The algorithm is modified to estimate the parameters adaptively, and the results clearly show the improved performance of this algorithm in separating two signals.

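The decorrelation goal itself can be illustrated with a batch linear transform; the sketch below whitens two correlated mixtures via an eigendecomposition of their covariance. This is only the decorrelation criterion, not the paper's adaptive least-squares formulation, and the 2×2 mixing matrix is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal((2, 5000))         # two independent sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing matrix
x = A @ s                                  # observed mixtures (correlated)

C = np.cov(x)                              # 2x2 covariance of the mixtures
w, V = np.linalg.eigh(C)
W = np.diag(w ** -0.5) @ V.T               # linear decorrelating transform
y = W @ x                                  # decorrelated outputs

Cy = np.cov(y)
assert abs(Cy[0, 1]) < 1e-6  # off-diagonal ~ 0: outputs are decorrelated
```

Decorrelation alone does not pin down a unique separating solution in general; the paper's contribution is a formulation whose solution is shown to be unique and to guarantee separation.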
  • Real-time image processing approach to measure traffic queue parameters

    Publication Year: 1995 , Page(s): 297 - 303
    Cited by:  Papers (16)  |  Patents (2)
    PDF (980 KB)

    The real-time measurement of various traffic parameters, including queue parameters, is required in many traffic situations such as accident and congestion monitoring and adjusting the timing of traffic lights. For queue detection, at least two algorithms have been proposed by previous researchers; however, those algorithms only detect queues and cannot measure queue parameters. The authors propose a method based on applying a combination of simple, noise-insensitive algorithms to a number of sub-profiles (one-pixel-wide key regions) along the road. The proposed queue detection algorithm consists of motion detection and vehicle detection operations, both based on extracting edges of the scene to reduce the effects of varying lighting conditions. To reduce computation time, the motion detection operation runs continuously on all the sub-profiles, but vehicle detection is applied only at the tail of the queue. The proposed algorithms have been implemented on an 80386-based microcomputer system, and the whole system works in real time.

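The benefit of working on edges rather than raw intensities can be sketched on a one-pixel-wide profile: differencing edge magnitudes of consecutive frames flags motion while ignoring global lighting shifts. The gradient operator, profile values and threshold here are illustrative assumptions, not the paper's.

```python
import numpy as np

def edge_profile(p):
    """Edge magnitude along a one-pixel-wide sub-profile (simple gradient)."""
    return np.abs(np.diff(p.astype(float)))

def motion_detected(frame_a, frame_b, thresh=10.0):
    """Flag motion when the edge maps of two frames differ substantially."""
    return np.max(np.abs(edge_profile(frame_a) - edge_profile(frame_b))) > thresh

profile = np.zeros(64)
profile[20:30] = 200.0                                 # a vehicle in frame A
assert not motion_detected(profile, profile)           # same frame: no motion
assert motion_detected(profile, np.roll(profile, 3))   # vehicle moved: motion
assert not motion_detected(profile, profile + 5.0)     # lighting shift ignored
```

The last assertion is the point of the edge-based formulation: a uniform brightness change leaves the gradient, and hence the motion test, unchanged.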
  • Curve segmentation and representation by superellipses

    Publication Year: 1995 , Page(s): 280 - 288
    Cited by:  Papers (11)  |  Patents (1)
    PDF (1196 KB)

    A method of segmenting curves into series of superelliptical arcs is presented. A superellipse is the two-dimensional form of the superquadric and can describe circles, ellipses, crosses, parallelograms and rounded rectangles with the same number of parameters. The superellipses are fitted using Powell's technique to minimise an appropriate error metric. A tree is used to represent a number of interpretations, and the concept of significance is used to choose the most perceptually correct description. Results show that perceptually good features are chosen to represent the various shapes that occur in images.

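The standard superellipse forms make the single-shape-family claim concrete: the implicit equation |x/a|^(2/e) + |y/b|^(2/e) = 1 sweeps from an ellipse (e = 1) toward a rectangle as e approaches 0. The sketch below generates points parametrically and checks them against the implicit form; the Powell-based fitting of a, b, e and pose is omitted, and the parameter values are illustrative.

```python
import numpy as np

def superellipse_points(a, b, e, n=200):
    """Sample a superellipse parametrically (signed-power form)."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** e
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** e
    return x, y

def implicit(x, y, a, b, e):
    """Implicit superellipse function; equals 1 exactly on the curve."""
    return np.abs(x / a) ** (2 / e) + np.abs(y / b) ** (2 / e)

x, y = superellipse_points(3.0, 2.0, 0.5, n=100)  # a rounded-rectangle shape
assert np.allclose(implicit(x, y, 3.0, 2.0, 0.5), 1.0)
```

Fitting would minimise an error metric built from this implicit function over the candidate arc's edge points, which is the role Powell's derivative-free minimiser plays in the paper.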