A natural user interface (NUI), in which a user can type or speak a request, is a good complement to the familiar graphical user interface (GUI). Accurately extracting user intent from such typed or spoken queries remains a difficult challenge. Two opposing families of approaches exist, statistical and knowledge-based, each with its own advantages and disadvantages. This paper presents a mixed approach to spoken language understanding that aims to combine the strengths of both. The method was tested on real user data, yielding a task error rate of 1.94% and a semantic concept error rate of 5.73%.
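To make the idea of a mixed statistical and knowledge-based pipeline concrete, the following is a minimal illustrative sketch, not the paper's actual method: hand-written knowledge-based rules are tried first, and a simple statistical (word-overlap) classifier serves as a fallback when no rule fires. All intents, rule patterns, and training phrases below are invented for illustration.

```python
# Hypothetical hybrid intent extractor: knowledge-based rules first,
# statistical fallback second. Not the method described in the paper.

import re
from collections import Counter

# Knowledge-based component: regex rules mapping patterns to intents.
RULES = {
    r"\bplay\b.*\bmusic\b": "play_music",
    r"\bweather\b": "get_weather",
}

# Statistical component: toy training phrases per intent (invented).
TRAINING = {
    "set_alarm": ["wake me up at seven", "set an alarm for six"],
    "get_weather": ["will it rain today", "what is the forecast"],
}

def statistical_intent(query):
    """Score each intent by word overlap with its training phrases."""
    words = set(query.lower().split())
    scores = {
        intent: sum(Counter(" ".join(phrases).split())[w] for w in words)
        for intent, phrases in TRAINING.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def extract_intent(query):
    """Try knowledge-based rules; fall back to the statistical model."""
    for pattern, intent in RULES.items():
        if re.search(pattern, query.lower()):
            return intent
    return statistical_intent(query)
```

In this sketch the rules provide high precision on queries they cover, while the statistical fallback provides coverage for unseen phrasings, mirroring the complementary strengths the abstract attributes to the two approaches.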