In this paper, we survey and analyze state-of-the-art statistical machine translation (SMT) techniques for speech translation (ST). We review key learning problems and investigate essential model structures in SMT, taking a unified perspective to reveal both connections and contrasts between automatic speech recognition (ASR) and SMT. We show that phrase-based SMT can be viewed as a sequence of finite-state transducer (FST) operations, similar in spirit to ASR. We further inspect the synchronous context-free grammar (SCFG) formalism, which includes hierarchical phrase-based models and many linguistically syntax-based models. Decoding for ASR, FST-based translation, and SCFG-based translation is also presented from a unified perspective, as different realizations of the generic Viterbi algorithm on graphs or hypergraphs. These consolidated perspectives help catalyze tighter integration for improved ST, and we discuss joint decoding and modeling toward coupling ASR and SMT.
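To make the unified decoding view concrete, the following is a minimal sketch (not taken from the paper) of the generic Viterbi recursion as single-source shortest-path search over a weighted directed acyclic graph in the tropical (min, +) semiring; ASR lattice decoding and FST-based translation specialize this recursion to graphs, while SCFG-based decoding extends it to hypergraphs. The function name `viterbi_dag` and the toy lattice are illustrative assumptions.

```python
# Hypothetical sketch: generic Viterbi as shortest-path over a DAG in the
# tropical semiring (min, +). Nodes are integers; every edge (u, v, w)
# satisfies u < v, so sorting edges by source node gives a valid
# topological processing order.

def viterbi_dag(num_nodes, edges, source=0):
    """edges: list of (src, dst, weight) with src < dst.
    Returns (best_cost, best_path) from `source` to node num_nodes - 1."""
    INF = float("inf")
    cost = [INF] * num_nodes   # best cost to reach each node
    back = [None] * num_nodes  # backpointer for path recovery
    cost[source] = 0.0
    for u, v, w in sorted(edges):      # topological order via src < dst
        if cost[u] + w < cost[v]:      # tropical-semiring relaxation
            cost[v] = cost[u] + w
            back[v] = u
    # Trace the best path back from the final node.
    path, node = [], num_nodes - 1
    while node is not None:
        path.append(node)
        node = back[node]
    return cost[-1], path[::-1]

# Toy lattice: two competing hypotheses from node 0 to node 3.
edges = [(0, 1, 1.0), (0, 2, 2.5), (1, 3, 2.0), (2, 3, 0.2)]
best_cost, best_path = viterbi_dag(4, edges)
# best path is 0 -> 2 -> 3 with cost 2.7
```

Replacing (min, +) with (max, ×) over probabilities gives the familiar ASR Viterbi form; running the same dynamic program bottom-up over a hypergraph yields SCFG decoding.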