
Stochastic Gradient Coding for Straggler Mitigation in Distributed Learning


Abstract:

We consider distributed gradient descent in the presence of stragglers. Recent work on gradient coding and approximate gradient coding has shown how to add redundancy in distributed gradient descent to guarantee convergence even if some workers are stragglers, that is, slow or non-responsive. In this work we propose an approximate gradient coding scheme called Stochastic Gradient Coding (SGC), which works when the stragglers are random. SGC distributes data points redundantly to workers according to a pair-wise balanced design and then simply ignores the stragglers. We prove that the convergence rate of SGC mirrors that of batched Stochastic Gradient Descent (SGD) for the ℓ2 loss function, and we show how the convergence rate can improve with the redundancy. We also provide bounds for more general convex loss functions. We show empirically that SGC requires only a small amount of redundancy to handle a large number of stragglers and that it can outperform existing approximate gradient codes when the number of stragglers is large.
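The following is a minimal simulation sketch of the idea described in the abstract, not the paper's exact construction. It assumes an i.i.d. straggler probability p per worker per round, a uniform replication factor d, and a simple uniform-random placement of points as a stand-in for the pair-wise balanced design (all illustrative choices). The scaling 1/(n·d·(1−p)) makes the aggregated gradient an unbiased estimate of the full ℓ2 gradient, since each point is received d·(1−p) times in expectation.

# Sketch of Stochastic Gradient Coding (SGC) under the stated assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize (1/2n) * ||X w - y||^2.
n, dim, n_workers, d, p = 200, 10, 20, 3, 0.3
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Redundant data placement: each point goes to d distinct workers, chosen
# uniformly at random (a stand-in for a pair-wise balanced design).
points_of = [[] for _ in range(n_workers)]
for i in range(n):
    for wk in rng.choice(n_workers, size=d, replace=False):
        points_of[wk].append(i)

w = np.zeros(dim)
lr = 0.1
for step in range(500):
    # Each worker is a straggler this round with probability p; the master
    # simply ignores stragglers and sums the gradients it does receive.
    responsive = rng.random(n_workers) > p
    grad = np.zeros(dim)
    for wk in np.flatnonzero(responsive):
        idx = points_of[wk]
        if idx:
            Xk, yk = X[idx], y[idx]
            # Sum of per-point ell_2 gradients over this worker's shard.
            grad += Xk.T @ (Xk @ w - yk)
    # Each point arrives d*(1-p) times in expectation, so this scaling
    # yields an unbiased estimate of the full gradient.
    grad /= n * d * (1.0 - p)
    w -= lr * grad

print("parameter error:", np.linalg.norm(w - w_true))

With p = 0 and d = 1 this reduces to plain full-batch gradient descent; increasing d trades storage and compute redundancy for robustness to a larger straggler rate, in the spirit of the abstract's empirical claim.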
Published in: IEEE Journal on Selected Areas in Information Theory (Volume: 1, Issue: 1, May 2020)
Page(s): 277 - 291
Date of Publication: 29 April 2020
Electronic ISSN: 2641-8770

