
Hierarchical Reinforcement Learning Recommendation Method based on Deep Interest Networks


Abstract:

With the rapid development of MOOC platforms, course recommendation has become a focal point of recommendation research. Effectively modeling user preferences is a crucial task within this context. Despite the notable achievements of reinforcement learning methods in simulating user preferences, current approaches still have limitations. This paper introduces a hierarchical reinforcement learning method based on deep interest networks. By incorporating an adaptive weighting unit to represent the correlation between the current course and historical courses, the method aims to better simulate user preferences. Experimental results on two real datasets demonstrate the superiority of our approach over baseline methods, and it exhibits strong generalization to other recommendation domains.
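The abstract does not specify the form of the adaptive weighting unit. As a rough illustration only, the following sketch shows one plausible form in the spirit of Deep Interest Networks: each historical course embedding is scored against the target course via a simple scoring function over the concatenation of the two embeddings and their element-wise product, and the softmax-normalized weights produce a preference vector. All function names, dimensions, and the single-layer scorer (with untrained random weights) are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

def adaptive_weights(history, target, seed=0):
    """Score each historical course embedding against the target course.

    A DIN-style unit typically feeds [h, t, h*t] through a small MLP; here a
    single linear scorer with random (untrained) weights stands in for that
    network -- purely illustrative.
    """
    rng = np.random.default_rng(seed)
    d = history.shape[1]
    w = rng.normal(size=3 * d)  # untrained scoring weights (assumption)
    feats = np.concatenate(
        [history, np.tile(target, (len(history), 1)), history * target],
        axis=1,
    )
    scores = feats @ w
    exp = np.exp(scores - scores.max())  # softmax over the history
    return exp / exp.sum()

def user_preference(history, target):
    """Weighted sum of historical course embeddings as the preference vector."""
    a = adaptive_weights(history, target)
    return a @ history  # shape (d,)

history = np.random.default_rng(1).normal(size=(5, 8))  # 5 past courses, dim 8
target = np.random.default_rng(2).normal(size=8)        # candidate course
pref = user_preference(history, target)
```

In a trained model the scorer's parameters would be learned jointly with the embeddings, so that courses correlated with the candidate receive larger weights.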
Date of Conference: 15-17 December 2023
Date Added to IEEE Xplore: 15 May 2024
Conference Location: Wuhan, China

I. Introduction

Massive Open Online Course (MOOC) platforms are rapidly evolving, and open online courses are becoming increasingly popular. However, given the vast number of courses on MOOC platforms, there is a need for effective ways to filter this massive amount of data, understand users' course preferences, and provide personalized recommendations. Course recommendation can be framed as suggesting the courses a user is most likely to choose at time t + 1 based on their course selection history up to time t. Many recommendation methods address how to capture user preferences. The Hyperedge Graph Neural Network (HGNN) [1] adopts a graph neural network for the recommendation task. It feeds the time series of embedding vectors of selected courses into a Gated Recurrent Unit (GRU) [2] model and outputs the last hidden vector as the user preference. Neural Attentive Item Similarity (NAIS) [3] and Neural Attentive Session-based Recommendation (NASR) [4] simulate user preferences through attention coefficients over the historical courses. Building upon this, Hierarchical Reinforcement Learning (HRL) [5] improves the accuracy of simulated user preferences by revising user course selection records to eliminate noisy courses. This removes the need to assign an attention coefficient to every course, streamlining the process and improving the accuracy of the simulated preferences.
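The GRU-based approach described above runs a recurrent cell over the sequence of selected-course embeddings and takes the final hidden state as the user preference. The sketch below implements a single standard GRU cell with randomly initialized, untrained parameters; shapes, names, and dimensions are illustrative assumptions, not details taken from HGNN [1].

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_preference(course_seq, hidden_dim=8, seed=0):
    """Run a standard GRU cell over course embeddings; return last hidden state.

    Parameters are random and untrained -- this only demonstrates the
    mechanics of 'last hidden state as user preference'.
    """
    rng = np.random.default_rng(seed)
    d = course_seq.shape[1]
    # One input and one recurrent weight matrix per gate:
    # update (z), reset (r), candidate (n)
    Wz, Wr, Wn = (rng.normal(scale=0.1, size=(hidden_dim, d)) for _ in range(3))
    Uz, Ur, Un = (rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
                  for _ in range(3))
    h = np.zeros(hidden_dim)
    for x in course_seq:                    # one embedding per selected course
        z = sigmoid(Wz @ x + Uz @ h)        # update gate
        r = sigmoid(Wr @ x + Ur @ h)        # reset gate
        n = np.tanh(Wn @ x + Un @ (r * h))  # candidate state
        h = (1 - z) * h + z * n
    return h                                # user-preference vector

seq = np.random.default_rng(1).normal(size=(6, 5))  # 6 courses, embedding dim 5
pref = gru_preference(seq)
```

Because only the last hidden state is kept, this representation compresses the whole selection history into one vector, which is precisely the limitation that attention-based methods such as NAIS and NASR address by weighting individual historical courses.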

