A spoken dialog system can provide an interface between the user and a computer-based application, permitting spoken interaction with the application in a relatively natural manner. However, extracting the user's intention from such spoken queries remains a difficult challenge. This paper presents a mixed approach to spoken language understanding that combines the strengths of statistical and knowledge-based methods. The approach was tested with real user data and achieved a task error rate of 1.94% and a semantic concept error rate of 5.73%. Furthermore, the paper briefly introduces MLISS, a multi-lingual information services system for the Beijing 2008 Olympic Games information service.
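The abstract does not define its metrics, but a semantic concept error rate is conventionally computed like word error rate: the hypothesis concept sequence is aligned to the reference sequence by edit distance, and the count of substitutions, insertions, and deletions is normalized by the reference length. The sketch below illustrates that standard formulation; the concept labels shown are hypothetical examples, not taken from the paper.

```python
def concept_error_rate(reference, hypothesis):
    """Edit distance between concept sequences, normalized by reference length.

    Assumes the standard WER-style definition:
    (substitutions + insertions + deletions) / len(reference).
    """
    n, m = len(reference), len(hypothesis)
    # dp[i][j] = minimum edits to turn reference[:i] into hypothesis[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i          # delete all remaining reference concepts
    for j in range(m + 1):
        dp[0][j] = j          # insert all remaining hypothesis concepts
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[n][m] / n if n else 0.0

# Hypothetical example: one substituted concept out of four -> 25% error rate.
ref = ["venue", "date", "sport", "ticket"]
hyp = ["venue", "date", "sport", "price"]
print(concept_error_rate(ref, hyp))  # -> 0.25
```

A corpus-level rate would sum edit counts and reference lengths over all utterances before dividing, rather than averaging per-utterance rates.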