
Minimum discrimination information-based language model adaptation using tiny domain corpora for intelligent personal assistants

Gil-Jin Jang (School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea); Saejoon Kim; Ji-Hwan Kim

This paper proposes a novel Language Model (LM) adaptation method based on Minimum Discrimination Information (MDI). The method targets natural language-based intelligent personal assistant systems, for which only a very small amount of domain corpus is available for adaptation. In the proposed method, the background LM is viewed as a discrete distribution, and the adapted LM is built to be as close as possible to the background LM while satisfying unigram constraints. Two methods for estimating the unigram constraints are proposed: one based on word frequencies in the domain corpus, and one based on word similarities estimated from WordNet. On tiny domain corpora (ranging from 30~120 seconds in length), the frequency-based constraints reduce the adapted LM's perplexity by 13.9%~16.6% relative. A further 1.5%~2.4% relative improvement is observed when WordNet is used to generate word similarities. These results demonstrate an efficient way of re-scaling and normalizing the conditional distributions of an interpolation-based LM.
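The adaptation the abstract describes, building an adapted LM as close as possible to the background LM under unigram constraints, has a well-known closed-form realization: scale each conditional probability by the ratio of the target unigram to the background unigram, then renormalize per history. The sketch below illustrates that idea on a toy bigram LM; the function name, data layout, and probabilities are hypothetical and not taken from the paper.

```python
def mdi_adapt(background, target_unigram, background_unigram):
    """MDI-style adaptation under unigram constraints (illustrative sketch).

    background: dict mapping history -> {word: P_bg(word | history)}
    target_unigram / background_unigram: dict mapping word -> unigram prob

    Each conditional distribution is rescaled by
    alpha(w) = P_target(w) / P_bg(w), then renormalized so it sums to 1.
    """
    adapted = {}
    for history, dist in background.items():
        # Rescale every word's conditional probability by its unigram ratio.
        scaled = {w: p * (target_unigram[w] / background_unigram[w])
                  for w, p in dist.items()}
        # Renormalize so the adapted conditional is a proper distribution.
        z = sum(scaled.values())
        adapted[history] = {w: p / z for w, p in scaled.items()}
    return adapted


# Toy example: a single bigram history with two continuations.
bg = {"the": {"cat": 0.5, "dog": 0.5}}
bg_uni = {"cat": 0.5, "dog": 0.5}
# Tiny domain corpus suggests "cat" is much more frequent in-domain.
tgt_uni = {"cat": 0.8, "dog": 0.2}

adapted = mdi_adapt(bg, tgt_uni, bg_uni)
# The adapted conditional shifts mass toward the in-domain word "cat".
```

Words unseen in a tiny domain corpus would get zero target-unigram estimates under raw frequencies; this is where the paper's WordNet-based word-similarity estimates come in, smoothing the unigram constraints before the rescaling step.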

Published in:

IEEE Transactions on Consumer Electronics (Volume: 58, Issue: 4)