Abstract:
Split learning (SL) is widely regarded as a promising distributed machine learning framework with superior privacy-preserving properties and lower communication and computation costs. However, in real Internet of Things (IoT) scenarios, existing SL may not perform well because the local data of IoT devices often do not follow the same distribution. This causes the model to continuously adapt to the current data distribution in each training epoch, resulting in catastrophic forgetting. Existing methods typically attempt to review knowledge by adding raw or generated data from previous devices to the current training epoch, but direct access to the local data of other devices carries serious privacy risks, and data augmentation techniques based on generative networks often have poor robustness and increase the computation cost on the device side. To address these challenges, we propose a new SL framework called SL without Forgetting (SLwF). To mitigate catastrophic forgetting without accessing any previous data, we propose a contrastive learning-based training method that leverages only the current training data to review previous knowledge while better learning new knowledge. Furthermore, we adopt an exponential moving average (EMA)-based model update strategy to retain knowledge that would otherwise be lost, further alleviating the forgetting problem. We implement the SLwF framework in real IoT scenarios and extensively evaluate its performance on four publicly available datasets. Compared with related work (e.g., IoTSL), SLwF achieves better final accuracy and robustness while avoiding excessive device-side energy consumption.
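For intuition, an EMA-based model update of the kind the abstract mentions blends the current weights into a slowly moving copy, w_ema <- alpha * w_ema + (1 - alpha) * w. The following PyTorch sketch illustrates this general pattern only; the decay value alpha, the function name, and the stand-in model are illustrative assumptions, not the paper's actual implementation.

    import copy
    import torch

    @torch.no_grad()
    def ema_update(ema_model, model, alpha=0.99):
        # Blend current weights into the EMA copy: w_ema <- alpha * w_ema + (1 - alpha) * w
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(alpha).add_(p, alpha=1.0 - alpha)
        # Copy non-trainable state (e.g., BatchNorm running statistics) directly.
        for ema_b, b in zip(ema_model.buffers(), model.buffers()):
            ema_b.copy_(b)

    # Usage sketch: keep a frozen EMA copy alongside the trained client-side model.
    model = torch.nn.Linear(16, 4)      # hypothetical stand-in for a client sub-model
    ema_model = copy.deepcopy(model)    # initialized from the current weights
    # ... after each training step or epoch:
    ema_update(ema_model, model, alpha=0.99)

Because the EMA copy changes slowly, it preserves a smoothed history of past weights, which is what lets such a strategy counteract abrupt drift toward the newest data distribution.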
Published in: IEEE Internet of Things Journal (Volume: 12, Issue: 9, 01 May 2025)