
Multi-Behavior Enhanced Recommendation with Cross-Interaction Collaborative Relation Modeling


Abstract:

Many previous studies aim to augment collaborative filtering with deep neural network techniques so as to achieve better recommendation performance. However, most existing deep learning-based recommender systems are designed to model a single type of user-item interaction behavior and can hardly distill the heterogeneous relations between users and items. In practical recommendation scenarios, there exist multiple types of user behavior, such as browsing and purchasing. Because they overlook users' multi-behavioral patterns over different items, existing recommendation methods are insufficient to capture heterogeneous collaborative signals from user multi-behavior data. Inspired by the strength of graph neural networks for structured data modeling, this work proposes a Graph Neural Multi-Behavior Enhanced Recommendation (GNMR) framework that explicitly models the dependencies between different types of user-item interactions under a graph-based message passing architecture. GNMR devises a relation aggregation network to model interaction heterogeneity and recursively performs embedding propagation between neighboring nodes over the user-item interaction graph. Experiments on real-world recommendation datasets show that GNMR consistently outperforms state-of-the-art methods. The source code is available at https://github.com/akaxlh/GNMR.
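
The abstract describes relation-aware aggregation and embedding propagation over a multi-behavior user-item graph. The snippet below is a minimal sketch in PyTorch of what such behavior-aware message passing can look like: one transformation per behavior type, attention over the behavior-specific messages, and a fused embedding per user. All class and parameter names here are hypothetical illustrations, not the authors' design; the official implementation is in the linked repository.

import torch
import torch.nn as nn

class BehaviorAwareAggregation(nn.Module):
    """One propagation layer over a user-item graph with several behavior types.
    Illustrative sketch only; not the GNMR reference implementation."""
    def __init__(self, dim, num_behaviors):
        super().__init__()
        # a separate linear transform per interaction type (e.g. browse, purchase)
        self.rel_transforms = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_behaviors)])
        self.attn = nn.Linear(dim, 1)  # scores each behavior-specific message

    def forward(self, item_emb, behavior_adjs):
        # behavior_adjs: list of sparse (num_users x num_items) adjacency matrices,
        # one per behavior type; item_emb: dense (num_items x dim) item embeddings
        messages = [torch.sparse.mm(adj, f(item_emb))            # (num_users x dim)
                    for adj, f in zip(behavior_adjs, self.rel_transforms)]
        stacked = torch.stack(messages, dim=1)                   # (num_users x B x dim)
        weights = torch.softmax(self.attn(stacked), dim=1)       # attention over behaviors
        return (weights * stacked).sum(dim=1)                    # fused user embeddings

# Usage sketch: two behavior types, 4 users, 5 items, embedding size 8.
users, items, dim = 4, 5, 8
adjs = [torch.rand(users, items).round().to_sparse() for _ in range(2)]
layer = BehaviorAwareAggregation(dim, num_behaviors=2)
user_emb = layer(torch.randn(items, dim), adjs)  # shape (4, 8); stack layers for deeper propagation
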
Date of Conference: 19-22 April 2021
Date Added to IEEE Xplore: 22 June 2021
Conference Location: Chania, Greece

