PFAN++: Bi-Directional Image-Text Retrieval With Position Focused Attention Network

Abstract:

Bi-directional image-text retrieval and matching have attracted much attention recently. This cross-domain task demands a fine-grained understanding of both modalities in order to learn a similarity measure between data from different modalities. In this paper, we propose a novel position focused attention network to investigate the relation between the visual and the textual views. This work integrates prior object position information to enhance visual-text joint-embedding learning. The image is first split into blocks, which are treated as basic position cells, and the position of each image region is inferred from the cells it covers. Then, we propose a position attention mechanism to model the relations between an image region and the position cells. Finally, we generate a valuable position feature to further enhance the region representation and model a more reliable relationship between the visual image and the textual sentence. Experiments on the popular Flickr30K and MS-COCO datasets show the effectiveness of the proposed method. Beyond these public datasets, we also conduct experiments on our collected large-scale practical news dataset (Tencent-News) to validate the practical application value of the proposed method. To the best of our knowledge, this is the first attempt to evaluate performance in such a practical application setting. Our method achieves competitive performance on all three datasets.
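The position-attention step described in the abstract (infer a region's position from the cells it covers, attend over those cells, and fuse the resulting position feature with the region feature) can be sketched as follows. The grid size, embedding dimension, cell-indexing scheme, and fusion by concatenation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
n_blocks = 4          # image split into a 4 x 4 grid of position cells
d = 8                 # embedding dimension

# One learnable embedding per position cell (randomly initialised here).
cell_embed = rng.normal(size=(n_blocks * n_blocks, d))

def position_feature(region_box, image_wh, region_feat):
    """Attend over the position cells a region overlaps and return the
    region feature enhanced with a position feature.
    region_box is (x0, y0, x1, y1); image_wh is (width, height)."""
    w, h = image_wh
    x0, y0, x1, y1 = region_box
    # Indices of the cells covered by the region (its inferred position).
    cols = range(int(x0 / w * n_blocks), int(np.ceil(x1 / w * n_blocks)))
    rows = range(int(y0 / h * n_blocks), int(np.ceil(y1 / h * n_blocks)))
    idx = [r * n_blocks + c for r in rows for c in cols]
    cells = cell_embed[idx]                      # (k, d)
    # Position attention: score each covered cell against the region feature.
    scores = cells @ region_feat                 # (k,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over covered cells
    pos_feat = weights @ cells                   # weighted sum -> (d,)
    # Enhance the region representation, e.g. by concatenation.
    return np.concatenate([region_feat, pos_feat])

region_feat = rng.normal(size=d)
enhanced = position_feature((10, 20, 60, 80), (100, 100), region_feat)
print(enhanced.shape)  # (16,)
```

In practice the cell embeddings and attention would be learned jointly with the visual-text embedding; this sketch only shows the data flow from region position to enhanced region feature.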
Published in: IEEE Transactions on Multimedia ( Volume: 23)
Page(s): 3362 - 3376
Date of Publication: 18 September 2020
