Abstract:
With the explosive growth of information, recommendation systems have emerged to alleviate information overload. To improve recommendation performance, many existing methods introduce Large Language Models (LLMs) to extract textual information from item description text. However, LLMs are trained on large-scale generic textual data and may face a semantic gap on downstream recommendation tasks. To address this issue, we propose Contrastive Learning for Adapting Language Models to Sequential Recommendation (CLA-Rec). In CLA-Rec, we first extract text embeddings from description text using an LLM and then align these text embeddings with collaborative information through contrastive learning to obtain high-quality item representations. This semantic alignment bridges the gap between the LLM and the recommendation task. To map textual and collaborative information into user representations, we employ a Transformer that combines the semantically aligned item representations to learn user representations and capture user preferences. Extensive experiments on three public datasets show that our method outperforms state-of-the-art approaches on multiple evaluation metrics, demonstrating the effectiveness of CLA-Rec in adapting LLMs to recommendation tasks.
Published in: 2024 IEEE International Conference on Data Mining (ICDM)
Date of Conference: 09-12 December 2024
Date Added to IEEE Xplore: 21 February 2025
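The abstract does not include code, and the paper's exact loss is not reproduced here. As a rough illustration of the contrastive alignment step it describes (treating each item's LLM text embedding and its collaborative embedding as a positive pair, with other items in the batch as negatives), one common formulation is a symmetric InfoNCE-style loss; the specific loss form, normalization, and temperature below are assumptions, not the authors' published method:

```python
import numpy as np

def _log_softmax(x):
    """Row-wise log-softmax, numerically stabilized."""
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def contrastive_alignment_loss(text_emb, collab_emb, temperature=0.1):
    """Hypothetical symmetric InfoNCE loss for text-collaborative alignment.

    Row i of text_emb (an LLM text embedding) and row i of collab_emb
    (a collaborative/ID embedding) describe the same item and form the
    positive pair; every other row in the batch serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    c = collab_emb / np.linalg.norm(collab_emb, axis=1, keepdims=True)
    logits = (t @ c.T) / temperature          # (n_items, n_items) similarities
    diag = np.arange(logits.shape[0])         # matching pairs on the diagonal
    # Cross-entropy in both directions: text->collab and collab->text.
    loss_t2c = -_log_softmax(logits)[diag, diag].mean()
    loss_c2t = -_log_softmax(logits.T)[diag, diag].mean()
    return (loss_t2c + loss_c2t) / 2.0
```

Under this sketch, minimizing the loss pulls each item's text embedding toward its own collaborative embedding and pushes it away from other items', which is the sense in which contrastive learning can bridge the semantic gap the abstract refers to.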