Abstract:
In real-world social networks, hashtags are widely used to convey the content of an individual microblog. However, users do not always take the initiative to attach hashtags when posting a microblog, so considerable effort has been invested in automatic hashtag recommendation. As a new trend, users no longer post only texts but prefer to share multimodal data, such as images. To address these situations, we propose an attention-based multimodal neural network model (AMNN) to learn the representations of multimodal microblogs and recommend relevant hashtags. In this article, we convert the hashtag recommendation task into a sequence generation problem. We then propose a hybrid neural network approach that extracts the features of both texts and images and incorporates them into a sequence-to-sequence model for hashtag recommendation. Experimental results on a data set collected from Instagram and on two public data sets demonstrate that the proposed method outperforms state-of-the-art methods. Our model achieves the best performance on three different metrics: precision, recall, and accuracy. The source code of this article can be obtained from https://github.com/w5688414/AMNN.
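To make the abstract's pipeline concrete, the following is a minimal PyTorch sketch of the general pattern it describes: a text encoder and an image-feature projection fused by attention, feeding a decoder that generates hashtags one token at a time. Every layer choice, dimension, and name below is an illustrative assumption, not the paper's exact AMNN configuration; the authors' actual implementation is in the repository linked above.

    import torch
    import torch.nn as nn

    class MultimodalHashtagSeq2Seq(nn.Module):
        """Hypothetical sketch: multimodal encoder + seq2seq hashtag decoder.
        Sizes and layers are illustrative, not the published AMNN model."""

        def __init__(self, vocab_size, tag_vocab_size,
                     embed_dim=256, hidden_dim=512, img_feat_dim=2048):
            super().__init__()
            # Text branch: embed post tokens, encode with a bidirectional GRU.
            self.word_embed = nn.Embedding(vocab_size, embed_dim)
            self.text_encoder = nn.GRU(embed_dim, hidden_dim // 2,
                                       batch_first=True, bidirectional=True)
            # Image branch: project pre-extracted CNN features
            # (e.g., pooled regions from a ResNet backbone).
            self.img_proj = nn.Linear(img_feat_dim, hidden_dim)
            # Attention scorer over the concatenated text/image memory.
            self.attn = nn.Linear(hidden_dim * 2, 1)
            # Decoder: emit hashtags step by step (the sequence-generation view).
            self.tag_embed = nn.Embedding(tag_vocab_size, embed_dim)
            self.decoder = nn.GRUCell(embed_dim + hidden_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, tag_vocab_size)

        def forward(self, tokens, img_feats, tag_inputs):
            # tokens: (B, T) word ids; img_feats: (B, R, img_feat_dim) regions;
            # tag_inputs: (B, L) previous hashtag ids (teacher forcing).
            text_feats, _ = self.text_encoder(self.word_embed(tokens))  # (B, T, H)
            img_feats = self.img_proj(img_feats)                        # (B, R, H)
            memory = torch.cat([text_feats, img_feats], dim=1)          # (B, T+R, H)

            h = memory.mean(dim=1)  # simple initial decoder state
            logits = []
            for step in range(tag_inputs.size(1)):
                # Score each memory slot against the current decoder state.
                expanded = h.unsqueeze(1).expand_as(memory)
                scores = self.attn(torch.cat([memory, expanded], dim=-1))
                context = (torch.softmax(scores, dim=1) * memory).sum(dim=1)
                step_in = torch.cat([self.tag_embed(tag_inputs[:, step]),
                                     context], dim=-1)
                h = self.decoder(step_in, h)
                logits.append(self.out(h))
            return torch.stack(logits, dim=1)  # (B, L, tag_vocab_size)

At training time such a model would be fit with cross-entropy over the hashtag vocabulary; at inference, hashtags would be decoded greedily or by beam search until an end-of-sequence tag, which is what casting recommendation as sequence generation buys over independent per-tag classification.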
Published in: IEEE Transactions on Computational Social Systems (Volume: 7, Issue: 3, June 2020)
This article includes a dataset hosted on IEEE DataPort™ (a data repository created by IEEE to facilitate research reproducibility) or another IEEE-approved repository. Click the dataset name below to access it on the data repository.
Dataset Name: MULTIMODAL DATASET FROM INSTAGRAM