There are some speech understanding applications in which training transcriptions are unavailable, and hence the vocabulary is unknown, but the task is to recognise key words and phrases within an utterance rather than to produce a complete, accurate transcription. An example of such a task is call-routing, where transcriptions of training utterances (which are very expensive to produce) are unavailable. In such cases, phoneme rather than word recognition is appropriate. However, phoneme recognition of spontaneous speech spoken by a large multi-accent population over telephone connections is very inaccurate. To improve accuracy, we describe a technique in which we segment the waveform into subword-like units and use clustering and an iteratively refined language model to correct errors in the recognised phonemes. The method was shown to work well on telephone-quality spontaneous speech, raising the phoneme accuracy from 28.1% after the first iteration to 47.3% after three iterations.
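The iterative refinement described above can be sketched in simplified form. The abstract does not specify the decoder or language model used, so the following is a hedged illustration under assumed details: a toy Viterbi re-decoder that combines per-segment acoustic log-scores with an add-one-smoothed bigram phoneme language model, where each iteration retrains the language model on the current hypotheses and re-decodes. All function names (`train_bigram_lm`, `redecode`, `refine`) and the scoring scheme are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of iterative phoneme-error correction:
# train a bigram LM on current hypotheses, re-decode, repeat.
import math
from collections import Counter


def train_bigram_lm(transcripts, vocab, alpha=1.0):
    """Add-alpha smoothed bigram LM over phoneme symbols."""
    bi, uni = Counter(), Counter()
    for seq in transcripts:
        padded = ["<s>"] + list(seq)
        for a, b in zip(padded, padded[1:]):
            bi[(a, b)] += 1
            uni[a] += 1
    V = len(vocab)

    def logp(a, b):
        return math.log((bi[(a, b)] + alpha) / (uni[a] + alpha * V))

    return logp


def redecode(acoustic_scores, vocab, logp, lm_weight=1.0):
    """Viterbi decode over per-segment candidate phonemes, combining
    acoustic log-scores (one dict per segment) with the bigram LM."""
    prev = {"<s>": 0.0}
    back = []
    for scores in acoustic_scores:
        cur, bp = {}, {}
        for p in vocab:
            best, arg = max(
                (prev[q] + lm_weight * logp(q, p), q) for q in prev
            )
            cur[p] = best + scores.get(p, -10.0)  # floor for unseen labels
            bp[p] = arg
        prev = cur
        back.append(bp)
    # Backtrace from the best final phoneme.
    last = max(prev, key=prev.get)
    out = [last]
    for bp in reversed(back[1:]):
        out.append(bp[out[-1]])
    return list(reversed(out))


def refine(utterances, vocab, initial_hyps, n_iter=3):
    """Alternate LM training on current hypotheses with re-decoding,
    mirroring the iterative scheme described in the abstract."""
    hyps = initial_hyps
    for _ in range(n_iter):
        logp = train_bigram_lm(hyps, vocab)
        hyps = [redecode(u, vocab, logp) for u in utterances]
    return hyps
```

As a usage illustration, if the corpus strongly supports the sequence "a b", a segment whose acoustic scores are ambiguous between "a" and "b" is resolved by the language model toward "b" when it follows "a"; over iterations, such corrections feed back into LM training. This sketch omits the waveform segmentation and clustering stages, which would supply the per-segment candidate scores.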