
SPAKT: A Self-Supervised Pre-TrAining Method for Knowledge Tracing


Detailed illustration of our self-supervised learning based pre-training network architecture.

Abstract:

Knowledge tracing (KT) is the core task of computer-aided education systems; it aims to predict whether a student can answer the next exercise (i.e., question) correctly based on his/her historical answer records. In recent years, deep neural network-based approaches have been widely developed for KT and have achieved promising results. More recently, several studies have further boosted these KT models by exploiting rich relationships, including exercise-skill relations (E-S), exercise similarity (E-E), and skill similarity (S-S). However, such relationship information is frequently absent in many real-world educational applications, and labeling it is labor-intensive for human experts. Inspired by recent advances in the natural language processing domain, we propose in this paper a novel pre-training approach, named SPAKT, that utilizes self-supervised learning to pre-train exercise embedding representations without the need for expensive human-expert annotations. In contrast to existing pre-training methods that rely heavily on manually labeled knowledge about the E-E, S-S, or E-S relationships, the core idea of the proposed SPAKT is to design three self-attention modules that model the E-S, E-E, and S-S relationships, respectively, all of which can be trained in a self-supervised setting. As a pre-training approach, SPAKT can be effortlessly incorporated into existing deep neural network-based KT frameworks. We show experimentally that, even without expensive annotations for the aforementioned three kinds of relationships, our model achieves performance competitive with the state of the art. Our implementation is publicly available at https://github.com/Vinci-hp/pretrainKT.
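The abstract describes three self-attention modules, one per relation type (E-S, E-E, S-S), trained self-supervised over learned embeddings. The paper and the linked repository give the actual architecture; the following is only a minimal sketch of what one such relation-modeling block could look like in PyTorch. All names here (ExerciseSelfAttention, d_model, the embedding sizes) are hypothetical illustrations, not the authors' code.

# Minimal sketch (assumed, not the authors' implementation) of one
# self-attention block over learned exercise embeddings. In SPAKT's
# spirit, three such blocks would model the E-S, E-E, and S-S relations,
# each trained with a self-supervised objective (e.g., masked prediction)
# so no human-labeled relation annotations are needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExerciseSelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention over a set of embeddings."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model), e.g., embeddings of exercises in a sequence.
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Attention weights act as soft, learned pairwise relations (e.g., E-E).
        attn = F.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v  # relation-aware embeddings, same shape as x

# Usage sketch: pre-train relation-aware exercise representations, then plug
# the resulting embeddings into any downstream deep KT model.
embeddings = nn.Embedding(num_embeddings=1000, embedding_dim=64)  # exercise table
block = ExerciseSelfAttention(d_model=64)
batch = embeddings(torch.randint(0, 1000, (8, 20)))  # 8 sequences of 20 exercises
out = block(batch)
print(out.shape)  # torch.Size([8, 20, 64])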
Published in: IEEE Access (Volume 10)
Page(s): 72145 - 72154
Date of Publication: 04 July 2022
Electronic ISSN: 2169-3536
