
Communications, Speech and Vision, IEE Proceedings I

Issue 2 • April 1993


Displaying Results 1 - 11 of 11
  • Using Weibull density function in importance sampling for digital communication system simulation

    Publication Year: 1993, Page(s): 163-168
    Cited by: Papers (1) | Patents (1)

    A Weibull density function is proposed as the biasing input noise distribution in importance sampling for digital communication system simulation. With this density function, a sample-size saving factor 2-4 orders of magnitude larger than that of a previous composite technique can be obtained. Robustness with respect to the threshold setting is analysed.
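    The biasing-and-reweighting idea behind importance sampling can be illustrated in a few lines. The following is a minimal sketch, not the authors' procedure: it estimates the tail probability P(N > T) of Gaussian noise by drawing samples from a Weibull biasing density and weighting each sample by the ratio of the true density to the biasing density; the shape and scale values are arbitrary illustrative choices, and the result is compared against the exact Gaussian tail.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def gaussian_pdf(x, sigma=1.0):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def weibull_pdf(x, shape, scale):
    # Weibull density on x >= 0
    return (shape / scale) * (x / scale) ** (shape - 1) * np.exp(-(x / scale) ** shape)

def tail_prob_is(threshold, n_samples=10_000, shape=2.0, scale=None, sigma=1.0):
    """Importance-sampling estimate of P(N > threshold) for N ~ N(0, sigma^2),
    drawing the samples from a Weibull biasing density centred near the threshold."""
    if scale is None:
        scale = threshold                          # put most biasing mass around the error region
    x = scale * rng.weibull(shape, n_samples)      # samples from the biasing density
    weights = gaussian_pdf(x, sigma) / weibull_pdf(x, shape, scale)
    return np.mean((x > threshold) * weights)

threshold, sigma = 5.0, 1.0
exact = 0.5 * (1 - erf(threshold / (sigma * sqrt(2))))   # exact Gaussian tail probability
print("IS estimate:", tail_prob_is(threshold))
print("exact      :", exact)
```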

  • Effects of modulator deficiencies and amplifier nonlinearities on the phase accuracy of GMSK signalling

    Publication Year: 1993, Page(s): 157-162
    Cited by: Papers (1)

    Both simulated and experimental results are presented for the phase trajectory and spectrum of a GMSK signal generated by imperfect quadrature modulation. It is shown that modulator deficiencies produce phase distortion and, from this, the RMS phase error of an imperfectly generated signal can be predicted. It is also shown that spectral spreading of the bandlimited signal occurs and the RMS phase error increases significantly when the imperfect GMSK signal is passed through any nonlinear two-port such as a class-C HPA. Consequently, both sources of error have to be considered together when evaluating a communications system that employs nonlinearly amplified GMSK.
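    As a rough illustration of the kind of deficiency studied in the paper, the sketch below generates a GMSK phase trajectory (Gaussian-filtered NRZ data, integrated, modulation index 0.5), applies an assumed gain and phase imbalance to the Q branch of a quadrature modulator, and reports the resulting RMS phase error against the ideal signal. The BT product, imbalance values and oversampling factor are illustrative assumptions, not values from the paper, and the nonlinear-amplifier part of the study is not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)

def gmsk_phase(bits, bt=0.3, sps=16):
    """GMSK phase trajectory: NRZ bits -> Gaussian filter -> integrate."""
    t = np.arange(-4 * sps, 4 * sps + 1) / sps
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)            # Gaussian pulse width from BT
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    nrz = np.repeat(2 * bits - 1, sps).astype(float)
    freq = np.convolve(nrz, g, mode="same")                   # smoothed frequency pulse
    return np.cumsum(freq) * (np.pi / 2) / sps                # +/- pi/2 phase change per symbol

def quadrature_modulate(phase, gain_imbalance=1.0, phase_error_rad=0.0):
    """I/Q modulator; the imperfections are applied to the Q branch only."""
    i = np.cos(phase)
    q = gain_imbalance * np.sin(phase + phase_error_rad)
    return i + 1j * q

bits = rng.integers(0, 2, 500)
phase = gmsk_phase(bits)

ideal = quadrature_modulate(phase)
imperfect = quadrature_modulate(phase, gain_imbalance=1.05, phase_error_rad=np.deg2rad(3))

phase_err = np.angle(imperfect * np.conj(ideal))              # instantaneous phase distortion
print("RMS phase error: %.2f deg" % np.degrees(np.sqrt(np.mean(phase_err ** 2))))
```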

  • Successive-erasure decoding of RS-coded MPSK schemes in a Rayleigh fading channel

    Publication Year: 1993, Page(s): 152-156
    Cited by: Papers (1)

    A soft-decision decoding algorithm, termed successive-erasure decoding (SED), is presented for decoding RS-coded MPSK schemes. This decoding procedure, originally introduced by G.D. Forney (see Concatenated Codes, MIT Press, Cambridge, MA, USA, 1966) in connection with generalised minimum-distance decoding, uses channel measurement information in an algebraic decoding algorithm. The performance evaluation of some examples of RS-coded MPSK schemes shows that the use of such information bridges the gap in performance between hard-decision decoding and maximum likelihood decoding (MLD). For example, for RS …
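    The successive-erasure structure (sort symbols by reliability, erase the least reliable ones two at a time, run an errors-and-erasures decoder on each trial, and keep the candidate with the best soft metric) can be demonstrated on a toy code. The sketch below uses a binary (7,4) Hamming code with brute-force errors-and-erasures decoding as a stand-in for the RS-coded MPSK setting of the paper; it is only meant to show the trial loop, not the paper's performance results.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)

# Generator matrix of the (7,4) Hamming code, used here as a stand-in for an RS code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
CODEBOOK = np.array([(np.array(m) @ G) % 2 for m in itertools.product([0, 1], repeat=4)])

def ee_decode(hard, erased):
    """Brute-force errors-and-erasures decoding: nearest codeword, ignoring erased positions."""
    keep = ~erased
    dists = (CODEBOOK[:, keep] != hard[keep]).sum(axis=1)
    return CODEBOOK[dists.argmin()]

def sed_decode(r):
    """Successive-erasure decoding: erase the least reliable symbols two at a time,
    decode each trial, and keep the candidate with the best soft correlation."""
    hard = (r < 0).astype(int)                   # BPSK mapping 0 -> +1, 1 -> -1
    order = np.argsort(np.abs(r))                # positions sorted by reliability, ascending
    best, best_metric = None, -np.inf
    for n_erase in range(0, len(r), 2):
        erased = np.zeros(len(r), dtype=bool)
        erased[order[:n_erase]] = True
        cand = ee_decode(hard, erased)
        metric = np.sum((1 - 2 * cand) * r)      # correlation of candidate with the soft values
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# One noisy transmission: BPSK-modulate a codeword, add noise, decode.
codeword = CODEBOOK[11]
r = (1 - 2 * codeword) + 0.8 * rng.standard_normal(7)
print("sent    :", codeword)
print("decoded :", sed_decode(r))
```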

  • Frame replenishment coding over noisy channels

    Publication Year: 1993, Page(s): 144-151

    The authors propose a combined source-channel coding scheme that provides error protection to improve performance in noisy channels. Two new frame replenishment coding techniques based on vector quantisation (VQ) are proposed: label replenishment coding using vector quantisation (LRVQ) and codeword replenishment coding using vector quantisation (CWRVQ). The performances of LRVQ and CWRVQ in a noiseless channel are compared with that of the basic frame replenishment (BFR) technique, which serves as the baseline. In the presence of channel noise, the LRVQ and CWRVQ techniques suffer from serious error propagation and noise effects, resulting in poor performance. The authors therefore propose a combined source-channel coding technique with three different error protection schemes, in which the bit rates of the source code and channel code are adjusted so as to minimise the mean square error. Simulation results demonstrate that the three error protection schemes provide a significant improvement in performance without sacrificing transmission bandwidth.
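    The label replenishment idea, transmitting a VQ label only for those blocks whose label changed since the previous frame, can be sketched in a few lines. This is a toy illustration under assumed block and codebook sizes, with a random codebook standing in for a trained one, and it omits the channel-coding and error-protection parts of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

BLOCK = 4          # 4x4 blocks, flattened to 16-dimensional vectors
CODEBOOK_SIZE = 64

codebook = rng.random((CODEBOOK_SIZE, BLOCK * BLOCK))   # stand-in for a trained VQ codebook

def vq_labels(frame):
    """Assign each BLOCKxBLOCK block of the frame to its nearest codeword."""
    h, w = frame.shape
    blocks = (frame.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
                   .swapaxes(1, 2)
                   .reshape(-1, BLOCK * BLOCK))
    dists = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def replenishment_update(prev_labels, frame):
    """Return only the (block index, new label) pairs that changed since the last frame."""
    labels = vq_labels(frame)
    changed = np.flatnonzero(labels != prev_labels)
    return labels, list(zip(changed.tolist(), labels[changed].tolist()))

# Two consecutive "frames": the second differs from the first only in one corner region.
frame0 = rng.random((32, 32))
frame1 = frame0.copy()
frame1[:8, :8] = rng.random((8, 8))

labels0 = vq_labels(frame0)
labels1, updates = replenishment_update(labels0, frame1)
print("blocks per frame:", labels0.size, "- labels transmitted for frame 1:", len(updates))
```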

  • Neural model for Karhunen-Loeve transform with application to adaptive image compression

    Publication Year: 1993, Page(s): 135-143
    Cited by: Papers (6) | Patents (4)

    A neural model approach to perform adaptive calculation of the principal components (eigenvectors) of the covariance matrix of an input sequence is proposed. The algorithm is based on the successive application of the modified Hebbian learning rule proposed by Oja (see J. Math. Biol., vol.15, p.267-73, 1982) to every new covariance matrix that results after calculating the previous eigenvectors. The approach is shown to converge to the next dominant component that is linearly independent of all previously determined eigenvectors. The optimal learning rate is calculated by minimising an error function of the learning rate along the gradient descent direction. The approach is applied to encode grey-level images adaptively, by calculating a limited number of Karhunen-Loeve transform coefficients that meet a specified performance criterion. The effects of the size of the input sequence (the number of subimages) and of the maximum number of coding coefficients on the bit rate, the compression ratio and the signal-to-noise ratio are investigated, together with the generalisation capability of the model in encoding new images.
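    A minimal version of the underlying procedure, Oja's rule applied repeatedly with the covariance matrix deflated after each converged eigenvector, is sketched below. The fixed learning-rate heuristic is an assumption (the paper derives an optimal rate by minimising an error function along the gradient direction), the data are synthetic, and the result is simply checked against numpy's eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(3)

def oja_dominant_eigvec(c, n_iter=500, lr=None):
    """Dominant eigenvector of a covariance matrix via the (deterministic) Oja update.
    A simple fixed learning rate is used here, not the optimal rate from the paper."""
    w = rng.standard_normal(c.shape[0])
    w /= np.linalg.norm(w)
    if lr is None:
        lr = 0.5 / np.trace(c)           # keep the update step well inside the stable range
    for _ in range(n_iter):
        y = c @ w
        w += lr * (y - (w @ y) * w)      # Hebbian term minus Oja's normalising term
        w /= np.linalg.norm(w)
    return w

def successive_eigvecs(cov, k):
    """Eigenvectors found one at a time: apply Oja's rule, then deflate the covariance."""
    comps = []
    c = cov.copy()
    for _ in range(k):
        w = oja_dominant_eigvec(c)
        lam = w @ c @ w
        comps.append(w)
        c = c - lam * np.outer(w, w)     # remove the component just found
    return np.array(comps)

# Check on a small synthetic covariance against numpy's eigendecomposition.
A = rng.standard_normal((6, 6))
cov = A @ A.T
est = successive_eigvecs(cov, 3)
ref = np.linalg.eigh(cov)[1][:, ::-1][:, :3].T          # top-3 eigenvectors, largest first
print(np.round(np.abs(np.sum(est * ref, axis=1)), 4))   # each value ~1 (up to sign) if converged
```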

  • Predictive image coding using multiresolution multiplicative autoregressive models

    Publication Year: 1993, Page(s): 127-134
    Cited by: Papers (2)

    The authors present a new image coding technique called multiresolution multiplicative autoregressive coding. It is a hybrid method that integrates multiresolution interpolative coding and two-dimensional predictive coding based on multiplicative autoregressive models. In addition, the proposed scheme uses newly developed methods for block-adaptive quantisation of prediction errors and fixed-to-variable-length block coding. The performance of the proposed scheme is tested on three different images and, in all cases, reconstructed images of reasonably good quality are obtained at bit rates below 0.35 bits per pixel. The bit rates and signal-to-noise ratios of the reconstructed images compare favourably with existing low-bit-rate image coding methods.
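    As context for the predictive part of the scheme, the sketch below shows ordinary causal 2-D predictive coding with a fixed planar predictor and a uniform residual quantiser. It is not the multiplicative autoregressive model or the multiresolution structure of the paper; the image, predictor and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def predictive_encode(img, step=8):
    """Causal 2-D linear prediction: each pixel is predicted from its already-reconstructed
    west, north and north-west neighbours (planar predictor); only the uniformly quantised
    prediction residual would be transmitted."""
    h, w = img.shape
    recon = np.zeros((h, w))
    residuals = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            west = recon[i, j - 1] if j else 0.0
            north = recon[i - 1, j] if i else 0.0
            nwest = recon[i - 1, j - 1] if i and j else 0.0
            pred = west + north - nwest                      # planar predictor
            q = int(round((img[i, j] - pred) / step))        # quantised residual
            residuals[i, j] = q
            recon[i, j] = pred + q * step                     # what the decoder rebuilds
    return residuals, recon

# A smooth ramp plus noise stands in for image data (prediction gains little on pure noise).
img = np.add.outer(np.arange(64.0), np.arange(64.0)) * 2.0 + 5.0 * rng.standard_normal((64, 64))
residuals, recon = predictive_encode(img)
print("pixel std: %.1f  residual std: %.1f  reconstruction RMSE: %.2f"
      % (img.std(), (residuals * 8).std(), np.sqrt(np.mean((img - recon) ** 2))))
```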

  • Orthogonal transformations of stacked feature vectors applied to HMM speech recognition

    Publication Year: 1993, Page(s): 121-126
    Cited by: Patents (3)

    The authors report improvements in speech recognition accuracy obtained by using more sophisticated time analysis as part of the feature selection process. The recognition methodology uses hidden Markov modelling with continuous density functions. The authors propose using, as speech features, linear transformations of the vector consisting of successive time samples of the cepstrum. The Taylor series, the Legendre polynomial transform and the discrete cosine transform share several properties with principal components analysis. These transforms are expected to improve speech recognition accuracy by incorporating higher-order time derivatives (such as the second time derivative) of spectral information while at the same time producing an essentially diagonal covariance. In an experimental evaluation of these ideas, accuracy in speaker-independent recognition of the E-set of the alphabet improved from 55% with no time-varying information to 68% with first-order time-varying information, and to 74% when second-order time-varying information was included.
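    The feature construction described here, stacking several successive cepstral frames and applying an orthogonal transform along the time axis, can be sketched as follows. The cepstral data are random stand-ins and the context length and number of retained coefficients are assumptions; the point is only that a DCT across time compactly encodes the static value together with its first- and second-order time variation.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(4)

def stacked_dct_features(cepstra, context=5, keep=3):
    """For each analysis position, stack `context` successive cepstral vectors and apply a
    DCT along the time axis, keeping the first `keep` coefficients per cepstral dimension
    (coefficient 0 ~ average, 1 ~ slope, 2 ~ curvature over the window)."""
    n_frames, n_ceps = cepstra.shape
    feats = []
    for t in range(n_frames - context + 1):
        window = cepstra[t:t + context]                     # (context, n_ceps)
        coeffs = dct(window, type=2, norm="ortho", axis=0)  # transform over time
        feats.append(coeffs[:keep].T.ravel())               # (n_ceps * keep,)
    return np.array(feats)

# Random stand-in for a cepstral sequence: 100 frames of 12 cepstral coefficients.
cepstra = rng.standard_normal((100, 12))
features = stacked_dct_features(cepstra)
print(features.shape)        # (96, 36): one 36-dimensional feature vector per position
```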

  • Performance of output-modulator-structured linear almost periodically time-varying adaptive filter

    Publication Year: 1993, Page(s): 114-120
    Cited by: Papers (1)

    Extending the existing results of the performance analysis for the LMS adaptive filter, the authors study the performance of the output-modulator-structured linear periodically time-varying adaptive filter, proposed elsewhere by the authors (see ibid., vol.139, no.4, p.429-36, 1992) for adaptively processing almost cyclostationary signals. Transient and steady-state behaviours and convergence conditions are studied. Simulation results agree well with analytical derivations.
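    For reference, the LMS baseline whose performance analysis is being extended can be stated in a few lines. The sketch below is the standard LMS adaptive filter identifying an unknown FIR system, not the authors' output-modulator-structured time-varying filter; the filter length, step size and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def lms_identify(x, d, n_taps=8, mu=0.02):
    """Standard LMS: adapt FIR weights w so that w . x[n] tracks the desired signal d[n]."""
    w = np.zeros(n_taps)
    errors = np.empty(len(x))
    for n in range(len(x)):
        u = x[max(0, n - n_taps + 1):n + 1][::-1]           # most recent samples first
        u = np.pad(u, (0, n_taps - len(u)))                  # zero-pad during start-up
        y = w @ u
        e = d[n] - y
        w += mu * e * u                                      # LMS weight update
        errors[n] = e
    return w, errors

# Unknown system to identify, driven by white noise and observed with small additive noise.
h_true = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))

w, errors = lms_identify(x, d)
print("final weight error:", np.round(np.linalg.norm(w - h_true), 4))
print("steady-state MSE  :", np.round(np.mean(errors[-1000:] ** 2), 5))
```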

  • Windowed Huffman coding algorithm with size adaptation

    Publication Year: 1993, Page(s): 109-113
    Cited by: Papers (1) | Patents (10)

    The windowed Huffman algorithm is introduced. In this algorithm, the Huffman code tree is constructed from the probabilities of symbol occurrences within a finite history; a window buffer stores the most recently processed symbols. Experimental results show that, with a suitable window size, the codes generated by the windowed Huffman algorithm are shorter than those generated by the static Huffman algorithm, dynamic Huffman algorithms and the residual Huffman algorithm, and the average code length can even be smaller than the first-order entropy. Furthermore, three policies for adjusting the window size dynamically are also discussed. The windowed Huffman algorithm with an adaptive-size window performs as well as, or better than, that with an optimal fixed-size window. The new algorithm is well suited to online encoding and decoding of data with varying probability distributions.
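    A minimal version of the windowing idea, building the Huffman code from symbol counts inside a buffer of the most recently processed symbols, is sketched below with a fixed window size. Encoder/decoder synchronisation details, escape handling for unseen symbols and the paper's adaptive window-size policies are omitted; the window size, baseline counts and test data are illustrative assumptions.

```python
import heapq
from collections import Counter, deque

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bit string) from a symbol->count mapping."""
    if len(freqs) == 1:
        return {next(iter(freqs)): "0"}
    heap = [(count, i, {sym: ""}) for i, (sym, count) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        c1, _, code1 = heapq.heappop(heap)
        c2, _, code2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in code1.items()}
        merged.update({s: "1" + b for s, b in code2.items()})
        heapq.heappush(heap, (c1 + c2, counter, merged))
        counter += 1
    return heap[0][2]

def windowed_huffman_encode(symbols, alphabet, window_size=64):
    """Encode each symbol with a Huffman code rebuilt from the counts of the previous
    `window_size` symbols; every alphabet symbol gets a baseline count of 1 so it always
    has a codeword (a stand-in for a proper escape mechanism)."""
    window = deque(maxlen=window_size)
    bits = []
    for s in symbols:
        freqs = Counter(dict.fromkeys(alphabet, 1))
        freqs.update(window)                    # counts within the finite history
        code = huffman_code(freqs)
        bits.append(code[s])
        window.append(s)                        # the decoder can mirror this update
    return "".join(bits)

data = "abracadabra" * 20 + "zzzzzyyyyy" * 20   # source statistics shift partway through
encoded = windowed_huffman_encode(data, alphabet=sorted(set(data)))
print(len(encoded), "bits for", len(data), "symbols")
```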

  • New class of runlength-limited error-control codes with minimum distance 4

    Publication Year: 1993, Page(s): 104-108
    Cited by: Papers (1)

    A new class of runlength-limited error-control codes with minimum distance 4 is identified. These codes are simple to implement, since they are formed by modifying linear codes. A comparison with the conventional solution of cascading a runlength-limited code with an error-control code shows that the runlength-limited error-control coding schemes have considerably reduced complexity and, in many cases, also attain a rate improvement.
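    For readers unfamiliar with the constraint being combined with error control here, a (d, k) runlength-limited binary sequence allows at least d and at most k zeros between consecutive ones. The small checker below illustrates only that constraint; it is not the code construction from the paper.

```python
def satisfies_runlength(bits, d, k):
    """Check the (d, k) constraint: between consecutive ones there must be at least d
    and at most k zeros, and no run of zeros anywhere may exceed k."""
    run = 0                      # current run of zeros
    seen_one = False
    for b in bits:
        if b == 0:
            run += 1
            if run > k:
                return False
        else:
            if seen_one and run < d:
                return False
            run = 0
            seen_one = True
    return True

print(satisfies_runlength([0, 1, 0, 0, 1, 0, 0, 0, 1], d=1, k=3))   # True
print(satisfies_runlength([1, 1, 0, 0, 1], d=1, k=3))               # False: two adjacent ones
```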

  • Error control for low-bit-rate speech communication systems

    Publication Year: 1993, Page(s): 97-103
    Cited by: Papers (1) | Patents (2)

    The incorporation of low-bit-rate speech coders into emerging land and satellite mobile communication systems presents problems which previous communication systems have never encountered. One of the most important of these problems is the degradation experienced in speech quality as a result of corruption of the transmitted speech information by channel errors. The authors present a systematic survey of the various techniques that are being adopted to mitigate this condition.
