
Adaptive statistical and grammar models of language for application to speech recognition

Authors (3): G. J. F. Jones (Centre for Commun. Res., Bristol Univ., UK), H. Lloyd-Thomas, J. H. Wright

The statistical and syntactic approaches to the modelling of language are consolidated in order to improve performance in speech recognition. The authors also aim to minimise the need for human intervention when training the language model from a corpus. Hybrid speech recognition systems using both bigram and grammar models can yield improved performance compared with either model alone, but performance is still sub-optimal because the grammar is abandoned completely for sentences which fail to parse overall. Extending the concept of a bigram to the most informative (rather than the immediate) previous word leads to a reduction in perplexity; a purely statistical approach is presented. Incorporating syntax from a substring parser will require these principles to be extended to strings of nonterminal symbols, raising important training issues but opening the way towards a language model with greater capacity for adaptive enhancement of performance.
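The abstract does not include code, but the bigram baseline it builds on is standard. As a rough illustration only (the toy corpus, function names, and add-one smoothing are our own choices, not the authors' method), here is a minimal sketch of a smoothed bigram model with the perplexity measure the abstract uses to compare models; the paper's extension would replace the immediate predecessor with the most informative previous word in the history.

```python
import math
from collections import Counter

def train_bigram(sentences):
    """Count unigram and bigram frequencies over a tokenised corpus,
    with sentence-boundary markers <s> and </s>."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigrams.update(tokens)
        for a, b in zip(tokens, tokens[1:]):
            bigrams[(a, b)] += 1
    return unigrams, bigrams

def bigram_prob(unigrams, bigrams, a, b, alpha=1.0):
    """Add-alpha smoothed conditional probability P(b | a)."""
    vocab_size = len(unigrams)
    return (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab_size)

def perplexity(sentences, unigrams, bigrams):
    """Perplexity = 2 ** (-average log2 probability per predicted token);
    lower is better."""
    log_prob, n = 0.0, 0
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            log_prob += math.log2(bigram_prob(unigrams, bigrams, a, b))
            n += 1
    return 2 ** (-log_prob / n)

# Toy corpus purely for illustration.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]
uni, bi = train_bigram(corpus)
pp = perplexity(corpus, uni, bi)
print(pp)
```

A model that conditions on a more informative history word would be evaluated the same way: the claim in the abstract is that the better choice of conditioning word lowers this perplexity figure.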

Published in:

IEE Colloquium on Grammatical Inference: Theory, Applications and Alternatives

Date of Conference:

22-23 Apr 1993