
Estimating Self-Confidence in Video-Based Learning Using Eye-Tracking and Deep Neural Networks


Graphical Abstract of Self-Confidence Estimation Using Eye-tracking and Deep Neural Network.

Abstract:

Self-confidence is a crucial trait that significantly influences performance across various life domains, leading to positive outcomes by enabling quick decision-making and prompt action. Estimating self-confidence in video-based learning is essential as it provides personalized feedback, thereby enhancing learners’ experiences and confidence levels. This study addresses the challenge of self-confidence estimation by comparing traditional machine-learning techniques with advanced deep-learning models. Our study involved a diverse group of thirteen participants (N=13), each of whom viewed and responded to seven distinct videos, generating eye-tracking data that was subsequently analyzed to gain insight into their visual attention and behavior. To assess the collected data, we compared three algorithms: a Long Short-Term Memory (LSTM) network, a Support Vector Machine (SVM), and a Random Forest (RF), thereby providing a comprehensive evaluation of the data. The results demonstrated that the LSTM model outperformed conventional hand-crafted feature-based methods, achieving the highest accuracy of 76.9% with Leave-One-Category-Out Cross-Validation (LOCOCV) and 70.3% with Leave-One-Participant-Out Cross-Validation (LOPOCV). Our results underscore the superior performance of the deep-learning model in estimating self-confidence in video-based learning contexts compared to hand-crafted feature-based methods. The outcomes of this research pave the way for more personalized and effective educational interventions, ultimately contributing to improved learning experiences and outcomes.
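
To illustrate the participant-independent evaluation protocol named in the abstract, the sketch below shows a minimal Leave-One-Participant-Out Cross-Validation (LOPOCV) loop over the two hand-crafted feature baselines (SVM and Random Forest), assuming 13 participants and 7 videos each as described above. The feature vectors, labels, and model hyperparameters are placeholders, not the paper's actual eye-tracking features or settings, and the LSTM model (which operates on raw gaze sequences rather than aggregated features) is omitted for brevity.

# Hypothetical sketch of LOPOCV over hand-crafted eye-tracking features;
# data, feature dimensionality, and labels are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: 13 participants x 7 videos, each sample a hand-crafted
# feature vector (e.g., fixation counts, mean saccade length) with a binary
# confident / not-confident label.
n_participants, n_videos, n_features = 13, 7, 16
X = rng.normal(size=(n_participants * n_videos, n_features))
y = rng.integers(0, 2, size=n_participants * n_videos)
groups = np.repeat(np.arange(n_participants), n_videos)  # participant IDs

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

logo = LeaveOneGroupOut()  # one fold per held-out participant (LOPOCV)
for name, model in models.items():
    accs = []
    for train_idx, test_idx in logo.split(X, y, groups):
        model.fit(X[train_idx], y[train_idx])
        accs.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
    print(f"{name}: mean LOPOCV accuracy = {np.mean(accs):.3f}")

The same loop yields the Leave-One-Category-Out variant (LOCOCV) by setting the group labels to the video category of each sample instead of the participant ID.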
Published in: IEEE Access (Volume: 12)
Page(s): 192219 - 192229
Date of Publication: 11 December 2024
Electronic ISSN: 2169-3536
