For efficient variable-rate speech coding, a Karhunen-Loève transform based adaptive entropy-constrained vector quantization (KLT-AECVQ) scheme is proposed. The proposed method consists of backward-adaptive linear predictive coding (LPC) analysis, KLT estimation based on the LPC coefficients, and lattice vector quantization followed by Huffman coding according to the KLT-domain statistics. Since different statistics in the original-signal domain can be mapped into identical statistics in the KLT domain, only a few classified Huffman codebooks are sufficient to represent the KLT-domain source statistics. KLT-AECVQ with 32 Huffman codebooks achieves rate-distortion performance comparable to that of theoretically optimal AECVQ with an infinite number of Huffman codebooks. KLT-AECVQ also produces better perceptual quality than KLT-based classified vector quantization (KLTCVQ), which in turn yielded better quality than a conventional code-excited linear prediction (CELP) codec. Under a five-sample delay constraint, KLT-AECVQ also has roughly three times lower complexity than the CELP codec.
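As a rough illustration of the KLT-estimation step, the sketch below derives a KLT basis from LPC coefficients: it computes the autocorrelation of the AR source modeled by the synthesis filter 1/A(z), builds the Toeplitz frame covariance, and eigendecomposes it. This is a minimal numpy sketch under assumed conventions (A(z) = 1 + Σ aₖ z⁻ᵏ, unit-variance excitation, truncated impulse response), not the paper's exact algorithm; the function name `klt_from_lpc` is hypothetical.

```python
import numpy as np

def klt_from_lpc(a, dim, n_trunc=256):
    """KLT basis for an AR source with synthesis filter 1/A(z).

    Assumed convention: A(z) = 1 + sum_k a[k] z^{-k}, a = [a_1 .. a_p].
    Returns (T, lam): rows of T are KLT basis vectors, lam the variances
    of the transform coefficients (eigenvalues of the frame covariance).
    """
    p = len(a)
    # Truncated impulse response h[n] of 1/A(z): h[0] = 1,
    # h[n] = -sum_k a[k] h[n-k] for n >= 1.
    h = np.zeros(n_trunc)
    h[0] = 1.0
    for n in range(1, n_trunc):
        for k in range(1, min(p, n) + 1):
            h[n] -= a[k - 1] * h[n - k]
    # Autocorrelation of the AR process (unit-variance white excitation):
    # r[k] ~= sum_n h[n] h[n+k].
    r = np.array([np.dot(h[:n_trunc - k], h[k:]) for k in range(dim)])
    # Toeplitz covariance of a length-`dim` signal frame.
    R = np.array([[r[abs(i - j)] for j in range(dim)] for i in range(dim)])
    # Eigenvectors of R form the KLT; applying T decorrelates the frame,
    # so one set of Huffman codebooks can serve many source statistics.
    lam, V = np.linalg.eigh(R)
    return V.T, lam
```

A frame `x` of length `dim` is then transformed as `T @ x`; since `R` is symmetric positive definite, `T` is orthonormal and the transform-domain covariance is the diagonal matrix of `lam`.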