Abstract:
The rapid spread of fake news on social media seriously affects people's lives and social stability. Existing multimodal fake news detection methods focus heavily on whether cross-modal information matches. To better capture the relationship between different modalities, this paper proposes the Contrastive Learning Based on Feature Enhancement model (CONLFE), which consists of three core modules: the High-quality Feature Extraction based on CLIP module, the Feature Interaction Based on Transformer module, and the Cross-modal Contrastive Learning module. First, the pre-trained CLIP model is used to extract richer semantic features from text and images. Then, the attention mechanism in the Transformer is used to model and optimize the interaction between the enhanced unimodal features. Finally, by learning from matched and mismatched text-image pairs in real news, the representations of the different modalities are partially aligned. This method effectively improves the accuracy and efficiency of multimodal fake news detection.
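The abstract outlines a three-stage pipeline: CLIP-based feature extraction, Transformer-based cross-modal interaction, and contrastive alignment of matched versus mismatched text-image pairs. The sketch below is a minimal illustration of that pipeline, not the authors' implementation; the module names, feature dimension, attention layout, and the InfoNCE-style loss are assumptions drawn from common practice with CLIP features.

```python
# Hedged sketch of a CONLFE-style pipeline; architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModalInteraction(nn.Module):
    """Transformer-style attention over the enhanced unimodal features (assumed layout)."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feat, image_feat):
        # Treat each modality as a one-token sequence and let text attend to the image.
        q = text_feat.unsqueeze(1)    # (B, 1, D)
        kv = image_feat.unsqueeze(1)  # (B, 1, D)
        fused, _ = self.attn(q, kv, kv)
        return self.norm(fused.squeeze(1) + text_feat)

def contrastive_loss(text_feat, image_feat, temperature=0.07):
    """InfoNCE-style loss: matched text-image pairs (diagonal) are positives,
    in-batch mismatched pairs serve as negatives."""
    t = F.normalize(text_feat, dim=-1)
    v = F.normalize(image_feat, dim=-1)
    logits = t @ v.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(t.size(0), device=t.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

# Hypothetical usage with CLIP features (e.g. HuggingFace transformers' CLIPModel):
# text_feat  = clip_model.get_text_features(**text_inputs)    # (B, 512)
# image_feat = clip_model.get_image_features(**image_inputs)  # (B, 512)
# fused = CrossModalInteraction()(text_feat, image_feat)
# loss = contrastive_loss(text_feat, image_feat)  # plus a fake-news classification loss on `fused`
```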
Published in: 2024 43rd Chinese Control Conference (CCC)
Date of Conference: 28-31 July 2024
Date Added to IEEE Xplore: 17 September 2024