Discriminative learning methods have achieved many successes in speech and language processing over the past decades. Discriminative learning of generative models is a typical optimization problem, in which efficient optimization methods play a critical role. For many widely used statistical models, discriminative learning normally leads to nonconvex optimization problems. In this article, we have used three representative examples to show how a proper convex relaxation method can convert discriminative learning of HMMs and MMMs into standard convex optimization problems, which can then be solved effectively and efficiently even for large-scale statistical models. We believe convex optimization will continue to play an important role in discriminative learning of other statistical models and in other application domains, such as statistical machine translation, computer vision, biometrics, and informatics.
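To illustrate the general idea of convex relaxation mentioned above, here is a minimal, hypothetical sketch that is not the article's HMM formulation: a small 0/1 integer program (nonconvex because of its discrete constraints) is relaxed by replacing {0, 1} with the interval [0, 1], yielding a linear program that standard convex solvers handle efficiently. All data values below are made up for illustration.

```python
# Toy convex relaxation: relax a 0/1 integer program to a linear program.
# This is an illustrative sketch, not the article's HMM/MMM formulation.
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem: maximize c.x subject to A x <= b, x in {0,1}^3.
c = np.array([4.0, 3.0, 5.0])
A = np.array([[2.0, 3.0, 4.0]])
b = np.array([5.0])

# Convex relaxation: allow x in [0,1]^3. linprog minimizes, so negate c.
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * 3, method="highs")
relaxed_value = -res.fun  # an upper bound on the integer optimum
print(round(relaxed_value, 3))
```

The relaxed problem is convex, so it is solved to global optimality in polynomial time, and its optimal value bounds the original nonconvex objective; the same principle, applied with model-specific relaxations, underlies the approach summarized in this article.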