
GHTO: Improving Byzantine Robustness in Distributed Machine Learning Through Gradient Heterogeneity Twofold Optimization


Abstract:

Heterogeneous datasets induce both local and global gradient variance in distributed learning, collectively referred to as gradient heterogeneity. Gradient heterogeneity increases the system's vulnerability to Byzantine attacks and degrades performance. To address this issue, this paper proposes a Gradient Heterogeneity Twofold Optimization (GHTO) algorithm that enhances Byzantine robustness in non-independent and identically distributed (non-IID) environments. Specifically, a Local Gradient Correction (LGC) algorithm is proposed to reduce local gradient variance, and GHTO further integrates LGC with momentum and bucketing techniques to jointly optimize both local and global gradient variance. Experimental results demonstrate that LGC reduces local gradient variance and that GHTO optimizes both local and global gradient variance, further strengthening the system's robustness against Byzantine attacks.
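
The abstract does not specify the LGC rule or the GHTO aggregation details, but the general pattern it refers to can be sketched. The snippet below is an illustrative example only, not the authors' implementation: worker-side momentum, random bucketing, and a coordinate-wise median aggregator are all assumptions chosen to show how such components are typically combined in Byzantine-robust distributed learning.

```python
# Illustrative sketch (assumptions, not the paper's GHTO/LGC method):
# workers apply momentum to damp local gradient variance, the server
# randomly buckets the resulting vectors to dilute Byzantine inputs,
# and a robust aggregator (here, coordinate-wise median) produces the update.

import numpy as np

def worker_momentum(grad, state, beta=0.9):
    """Hypothetical worker-side momentum step on the local gradient."""
    return beta * state + (1.0 - beta) * grad

def bucketing(vectors, bucket_size=2, rng=None):
    """Randomly partition worker vectors into buckets and average each bucket,
    limiting the influence of any single (possibly Byzantine) worker."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(vectors))
    groups = [idx[i:i + bucket_size] for i in range(0, len(idx), bucket_size)]
    return [np.mean([vectors[j] for j in g], axis=0) for g in groups]

def robust_aggregate(vectors):
    """Coordinate-wise median as a stand-in robust aggregator (assumption)."""
    return np.median(np.stack(vectors), axis=0)

# Toy usage: 8 honest workers plus 2 Byzantine workers sending large noise.
rng = np.random.default_rng(0)
dim = 4
true_grad = np.ones(dim)
honest = [true_grad + 0.1 * rng.standard_normal(dim) for _ in range(8)]
byzantine = [10.0 * rng.standard_normal(dim) for _ in range(2)]
momenta = [worker_momentum(g, np.zeros(dim)) for g in honest + byzantine]
update = robust_aggregate(bucketing(momenta, bucket_size=2, rng=rng))
print(update)  # close to 0.1 * true_grad despite the two Byzantine workers
```

With a minority of Byzantine workers, the median over bucket averages remains close to the honest gradient direction; the paper's contribution, per the abstract, is to pair such global-variance mitigation with a local correction step (LGC) on each worker.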
Date of Conference: 13-15 December 2024
Date Added to IEEE Xplore: 13 March 2025
Conference Location: Qingdao, China

