
Learning Cross-Modal Aligned Representation With Graph Embedding


Overview of the proposed graph-embedding learning framework. The framework combines supervised learning with a graph context learning process.


Abstract:

The main task of cross-modal analysis is to learn a discriminative representation shared across different modalities. To pursue aligned representation, conventional approaches construct and optimize a linear projection or train a deep architecture of many layers, yet both struggle to balance accuracy and efficiency when modeling multimodal data. This paper proposes a novel graph-embedding learning framework implemented by neural networks. The learned embedding directly approximates the cross-modal aligned representation, supporting both cross-modal retrieval and image classification combined with text information. The proposed framework extracts the learned representation from a graph model and simultaneously trains a classifier under semi-supervised settings. For optimization, unlike previous methods based on graph Laplacian regularization, a sampling strategy is adopted to generate training pairs that fully explore the inter-modal and intra-modal similarity relationships. Experimental results on various datasets show that the proposed framework outperforms other state-of-the-art methods on cross-modal retrieval. The framework also demonstrates convincing improvements on the new task of image classification combined with text information on the Wiki dataset.
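To make the abstract's joint objective concrete, here is a minimal, hypothetical sketch rather than the authors' implementation. It assumes PyTorch, two made-up per-modality encoders with arbitrary feature dimensions, and a triplet-style loss as one plausible instantiation of the graph context term over sampled positive/negative pairs; the names (ModalityEncoder, graph_context_loss, total_loss) are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: one small network per modality maps image and text
# features into a shared embedding space. The joint loss combines
# (i) supervised classification on labeled samples and (ii) a graph-context
# term on sampled positive/negative pairs, standing in for the paper's
# pair-sampling strategy (details of the actual sampling are not shown here).

class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, x):
        # Unit-length embeddings make distances comparable across modalities.
        return F.normalize(self.net(x), dim=-1)

def graph_context_loss(anchor, positive, negative, margin=0.2):
    # Pull sampled neighbors (intra- or inter-modal) together and push
    # non-neighbors apart; a triplet margin loss is one plausible form.
    pos = (anchor - positive).pow(2).sum(dim=-1)
    neg = (anchor - negative).pow(2).sum(dim=-1)
    return F.relu(pos - neg + margin).mean()

# Assumed feature sizes: 4096-d image features, 300-d text features,
# a 128-d shared space, and 10 semantic categories (as in Wiki).
img_enc, txt_enc = ModalityEncoder(4096, 128), ModalityEncoder(300, 128)
classifier = nn.Linear(128, 10)

def total_loss(img_feat, txt_feat, labels, neg_txt_feat, alpha=1.0):
    z_img, z_txt = img_enc(img_feat), txt_enc(txt_feat)
    sup = F.cross_entropy(classifier(z_img), labels)        # supervised term
    ctx = graph_context_loss(z_img, z_txt, txt_enc(neg_txt_feat))
    return sup + alpha * ctx                                # joint objective
```

Training the classification and pair-based terms jointly mirrors the abstract's combination of supervised learning with graph context learning, while avoiding an explicit graph Laplacian regularizer.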
Published in: IEEE Access (Volume: 6)
Page(s): 77321 - 77333
Date of Publication: 25 November 2018
Electronic ISSN: 2169-3536
