Unlike written documents, spoken documents are difficult to display on screen and difficult for users to browse during retrieval. It has recently been proposed to use interactive multi-modal dialogues to help the user navigate a spoken document archive and retrieve the desired documents. This interaction is based on a topic hierarchy constructed from the key terms extracted from the retrieved spoken documents. In this paper, the efficiency of user interaction in such a system is further improved by a key term ranking algorithm that uses reinforcement learning with simulated users. Extensive simulation analysis was performed, and significant improvements in retrieval efficiency were observed; these improvements were also found to be relatively robust to speech recognition errors.
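The general idea of learning a key term ranking from simulated users can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a simple bandit-style learner, a hypothetical `simulate_user` model in which each key term has a hidden click probability, and an epsilon-greedy presentation policy; the learned click-value estimates then induce the ranking.

```python
import random

def simulate_user(term, interest):
    # Hypothetical simulated user: clicks a presented key term
    # with a probability equal to the term's hidden relevance.
    return random.random() < interest[term]

def rank_key_terms(terms, interest, episodes=2000, seed=0):
    """Learn each key term's click value from simulated-user
    feedback, then rank terms by estimated value (a sketch)."""
    random.seed(seed)
    value = {t: 0.0 for t in terms}   # estimated click value per term
    count = {t: 0 for t in terms}     # times each term was presented
    for _ in range(episodes):
        # Epsilon-greedy choice of which key term to present next.
        if random.random() < 0.1:
            t = random.choice(terms)
        else:
            t = max(terms, key=lambda x: value[x])
        reward = 1.0 if simulate_user(t, interest) else 0.0
        count[t] += 1
        # Incremental mean update of the term's value estimate.
        value[t] += (reward - value[t]) / count[t]
    return sorted(terms, key=lambda t: value[t], reverse=True)

# Hidden per-term relevance, known only to the simulated user.
interest = {"speech": 0.8, "retrieval": 0.5, "hierarchy": 0.2}
ranking = rank_key_terms(list(interest), interest)
```

After enough simulated episodes, the ranking recovers the hidden relevance ordering, which is the sense in which simulated users can tune a ranking without live user data.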