State-of-the-art pattern recognition methods have difficulty handling problems where the dimension of the output space is large. In this article, we propose a framework based on deep architectures (e.g., deep neural networks) to address this issue. Deep architectures have proven efficient for high-dimensional input problems such as image classification, owing to their ability to embed the input space. The main contribution of this article is the extension of the embedding procedure to both the input and output spaces, so that complex outputs can be handled easily. With this extension, inter-output dependencies can be modelled efficiently, providing an interesting alternative to probabilistic models such as HMMs and CRFs. Preliminary experiments on toy datasets and on USPS character reconstruction show promising results.