
Evaluating the Impact of Replay-Based Continual Learning on Long-Term sEMG Pattern Recognition in Instance-Incremental Learning


Graphical abstract: overview of the preprocessing of sEMG data per trial and of continual learning in the instance-incremental learning scenario, comparing the two replay-based continual learning methods (ER and A-GEM).


Abstract:

The field of surface electromyogram (sEMG)-based human-computer interfaces is rapidly evolving through the integration of wearable sensors and deep learning models. To ensure practical long-term usability, these models must adapt to changing sEMG data, which corresponds to an instance-incremental learning (instance-IL) scenario. Continual learning (CL) methods help pre-trained models learn new information without compromising existing knowledge. Although regularization-based CL methods have proven effective for robust sEMG pattern recognition in instance-IL scenarios, the effectiveness of replay-based CL methods in instance-IL remains uncertain, particularly regarding the optimal number of samples to replay. This study compared two replay-based CL methods, experience replay (ER) and averaged gradient episodic memory (A-GEM), with two regularization-based CL methods, synaptic intelligence (SI) and learning without forgetting (LwF), for updating a backbone model. A convolutional neural network was employed as the backbone model. To evaluate the impact of the CL methods, a publicly available sEMG dataset comprising 30 days of data from five subjects was used. Although A-GEM, a replay-based method, exhibited lower accuracy than SI and LwF in certain instances, ER consistently outperformed all other methods. Notably, ER with only one replayed sample outperformed the regularization-based CL methods, and its accuracy improved further as the number of replayed samples increased. In instance-IL scenarios, a direct replay-based method such as ER can therefore enable practical, daily, long-term sEMG-based human-computer interfaces, even when retaining and replaying only one sample per class.
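To make the replay setup described in the abstract concrete, the sketch below illustrates experience replay (ER) with a class-balanced buffer that retains as little as one sample per gesture class, combined with an instance-incremental update step. This is not the authors' implementation: the buffer class, training function, optimizer, and data shapes are illustrative assumptions around a generic PyTorch workflow.

```python
# Minimal ER sketch (illustrative only): keep a few exemplars per class and
# mix them into each new-day batch before updating the backbone model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClassBalancedBuffer:
    """Stores at most `samples_per_class` sEMG windows for each gesture class."""

    def __init__(self, samples_per_class: int = 1):
        self.samples_per_class = samples_per_class
        self.storage = {}  # class label -> list of (x, y) pairs

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        for xi, yi in zip(x, y):
            label = int(yi)
            slot = self.storage.setdefault(label, [])
            if len(slot) < self.samples_per_class:
                slot.append((xi.clone(), yi.clone()))
            else:
                # Randomly overwrite an existing exemplar of this class.
                idx = torch.randint(len(slot), (1,)).item()
                slot[idx] = (xi.clone(), yi.clone())

    def sample(self):
        """Return all stored exemplars as stacked tensors, or None if empty."""
        if not self.storage:
            return None
        pairs = [pair for slot in self.storage.values() for pair in slot]
        xs, ys = zip(*pairs)
        return torch.stack(xs), torch.stack(ys)


def er_training_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                     buffer: ClassBalancedBuffer,
                     x_new: torch.Tensor, y_new: torch.Tensor) -> float:
    """One instance-incremental update: new-day batch plus replayed old samples."""
    replay = buffer.sample()
    if replay is not None:
        x_old, y_old = replay
        x = torch.cat([x_new, x_old])
        y = torch.cat([y_new, y_old])
    else:
        x, y = x_new, y_new

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

    # Retain exemplars from the current day for future replay.
    buffer.add(x_new, y_new)
    return loss.item()
```

Under this assumed setup, the abstract's "one sample replay" condition corresponds to `ClassBalancedBuffer(samples_per_class=1)`, while the reported gains from replaying more samples correspond to increasing that value.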
Published in: IEEE Access (Volume: 12)
Page(s): 182469 - 182482
Date of Publication: 29 November 2024
Electronic ISSN: 2169-3536
