Abstract:
Recently, multi-label deep cross-modal hashing (MDCH), which combines deep neural networks, hashing, and multi-label learning for cross-modal retrieval tasks, has achieved excellent cross-modal retrieval results and has thus become a highly popular area of research. Nevertheless, many existing MDCH methods concentrate on extracting information from multi-modal data while neglecting the abundant semantic information in multiple labels. The few MDCH methods that do incorporate multi-label information often treat labels as independent entities, ignoring the relationships between categories, which hinders the establishment of semantic connections among multi-modal data. To tackle these challenges, we propose a graph convolutional network based multi-label deep cross-modal hashing method (GMCH) in this paper. GMCH leverages two deep neural networks to generate hash representations from the original image-text pairs; during this process, a graph convolutional network is introduced to capture the category correlations of multi-labels and supervise the training of the hash mapping. Experimental results on two commonly employed datasets validate the efficacy of the proposed GMCH method. The code for our proposed GMCH is available at https://github.com/licher12/GMCH.git.
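The abstract's core mechanism, propagating label representations over a graph of category correlations with a GCN, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the co-occurrence-based adjacency construction, the threshold `tau`, the single propagation layer, and all dimensions are assumptions introduced here for illustration.

```python
import numpy as np

def label_correlation_matrix(Y, tau=0.3):
    """Build a label adjacency from a binary multi-label matrix Y (n_samples, n_labels).
    A[i, j] = 1 if the conditional probability P(label j | label i) >= tau.
    The conditional-probability construction and threshold are assumptions."""
    counts = Y.sum(axis=0)                    # how often each label occurs
    co = Y.T @ Y                              # pairwise co-occurrence counts
    P = co / np.maximum(counts[:, None], 1)   # row i: P(j | i)
    A = (P >= tau).astype(float)
    np.fill_diagonal(A, 1.0)                  # self-loops keep each label's own signal
    return A

def gcn_layer(A, H, W):
    """One graph-convolution step: H' = ReLU(D^{-1/2} A D^{-1/2} H W),
    so each label embedding mixes in its correlated neighbors."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt       # symmetrically normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)

rng = np.random.default_rng(0)
Y = (rng.random((100, 5)) > 0.6).astype(float)  # toy multi-label annotations
A = label_correlation_matrix(Y)
H = rng.standard_normal((5, 16))                # initial label embeddings (assumed)
W = rng.standard_normal((16, 8))                # learnable GCN weights (assumed size)
Z = gcn_layer(A, H, W)                          # correlation-aware label embeddings
print(Z.shape)
```

In a full method along these lines, the resulting label embeddings `Z` would be used to supervise the image and text hashing networks, e.g., by aligning hash codes with the embeddings of their ground-truth labels; that training objective is not shown here.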
Date of Conference: 08-14 December 2023
Date Added to IEEE Xplore: 29 December 2023