1. INTRODUCTION
For nearly two decades, our group has been conducting research toward multilingual spoken dialogue systems that combine multiple human language technologies (HLTs) to enable humans and machines to carry on mixed-initiative conversations for interactive problem solving and information access [1], [2]. To ensure that these systems can be generalized easily to languages other than English, we have made two design choices. First, we assume that a common, language-independent semantic representation can be extracted from each of the languages of interest. Second, we require that each component in the system be as language-transparent as possible to promote portability. As illustrated in Figure 1, the dialogue manager, the discourse component, and the meaning representation are designed to be independent of the input or output language. Where language-dependent information is required, we have isolated it in the form of external models, tables, or rules.
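As a rough illustration of this separation, the sketch below (all names, templates, and file paths are hypothetical and not taken from the systems described here) shows a dialogue manager that reasons only over a language-independent semantic frame, while the language-dependent resources live in external, swappable objects.

```python
from dataclasses import dataclass, field


@dataclass
class SemanticFrame:
    """Language-independent meaning representation, e.g. a flight query."""
    intent: str
    slots: dict = field(default_factory=dict)


class LanguagePack:
    """Bundles language-dependent resources (models, tables, rules) that the
    language-independent components load at run time (hypothetical structure)."""
    def __init__(self, grammar_path: str, generation_templates: dict):
        self.grammar_path = grammar_path                    # e.g. recognizer/parser models
        self.generation_templates = generation_templates    # surface-realization rules

    def generate(self, frame: SemanticFrame) -> str:
        # Fill a language-specific template from the language-independent frame.
        return self.generation_templates[frame.intent].format(**frame.slots)


class DialogueManager:
    """Operates purely on SemanticFrame objects; knows nothing about
    the input or output language."""
    def respond(self, frame: SemanticFrame) -> SemanticFrame:
        if frame.intent == "flight_query" and "date" not in frame.slots:
            return SemanticFrame(intent="request_date")
        return SemanticFrame(intent="confirm", slots=frame.slots)


# Supporting a new language means supplying new external resources,
# not modifying the dialogue manager or the meaning representation.
english = LanguagePack("models/en.grammar",
                       {"request_date": "What date would you like to travel?",
                        "confirm": "Booking a flight to {destination}."})
japanese = LanguagePack("models/ja.grammar",
                        {"request_date": "ご出発日はいつですか。",
                         "confirm": "{destination}行きのフライトを予約します。"})

dm = DialogueManager()
query = SemanticFrame(intent="flight_query", slots={"destination": "Boston"})
reply = dm.respond(query)           # language-independent dialogue decision
print(english.generate(reply))      # English surface form
print(japanese.generate(reply))     # same frame rendered in Japanese
```

The point of the sketch is only that the decision logic never inspects language-specific strings; all such knowledge is confined to the `LanguagePack`-style resources, mirroring the external models, tables, and rules described above.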