AutoDDL: Automatic Distributed Deep Learning With Near-Optimal Bandwidth Cost


Abstract:

Recent advances in deep learning are driven by the growing scale of computation, data, and models. However, efficiently training large-scale models on distributed systems requires an intricate combination of data, operator, and pipeline parallelism, which places a heavy burden on machine learning practitioners. To this end, we propose AutoDDL, a distributed training framework that automatically explores and exploits new parallelization schemes with near-optimal bandwidth cost. AutoDDL facilitates the description and implementation of different schemes by utilizing OneFlow's Split, Broadcast, and Partial Sum (SBP) abstraction. AutoDDL combines an analytical performance model with a customized coordinate descent algorithm, which significantly reduces the scheme-search overhead. We conduct evaluations on Multi-Node-Single-GPU and Multi-Node-Multi-GPU machines using different models, including VGG and Transformer. Compared to expert-optimized implementations, AutoDDL reduces the end-to-end training time by up to 31.1% and 10% for Transformer and by up to 17.7% and 71.5% for VGG on the two parallel systems, respectively.
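
The abstract names two mechanisms that can be made concrete. First, the SBP abstraction: the minimal sketch below uses OneFlow's public global-tensor API to express a combined data- and operator-parallel layout on a 2x2 device mesh. It illustrates the abstraction only, not code from AutoDDL, and assumes four GPUs with the script started through OneFlow's distributed launcher.

    import oneflow as flow

    # 2x2 device mesh over 4 GPUs (hierarchical placement): the first mesh
    # axis is used for data parallelism, the second for operator parallelism.
    P = flow.placement("cuda", ranks=[[0, 1], [2, 3]])

    # Activations: split(0) shards the batch dim across the first mesh axis;
    # split(1) shards the feature dim across the second.
    x = flow.randn(64, 1024, placement=P,
                   sbp=[flow.sbp.split(0), flow.sbp.split(1)])

    # Weights: replicated (broadcast) along the data-parallel axis, sharded
    # along the contracting dim on the operator-parallel axis.
    w = flow.randn(1024, 4096, placement=P,
                   sbp=[flow.sbp.broadcast, flow.sbp.split(0)])

    # Contracting over a sharded dim yields partial sums on that mesh axis:
    # y has sbp [split(0), partial_sum]. OneFlow inserts the reduction
    # ("boxing" communication) automatically when a consumer requires a
    # different SBP signature, e.g. [split(0), broadcast].
    y = flow.matmul(x, w)

Second, the scheme search: the abstract does not give AutoDDL's performance model or search procedure in detail, so the sketch below shows only the generic shape of a coordinate descent over per-layer parallelization choices under a placeholder cost function. The names candidates and cost are hypothetical stand-ins for AutoDDL's internal model.

    def coordinate_descent(num_layers, candidates, cost, max_sweeps=10):
        """Greedy coordinate descent over per-layer parallelization choices.

        candidates[i] lists the feasible SBP signatures for layer i, and
        cost(scheme) is an analytical estimate of step time for a complete
        scheme (communication volume / bandwidth + compute time). Both are
        hypothetical stand-ins for AutoDDL's internals.
        """
        scheme = [c[0] for c in candidates]        # arbitrary initial scheme
        best = cost(scheme)
        for _ in range(max_sweeps):
            improved = False
            for i in range(num_layers):            # one coordinate = one layer
                for cand in candidates[i]:
                    trial = scheme[:i] + [cand] + scheme[i + 1:]
                    c = cost(trial)
                    if c < best:                    # keep the best choice for layer i
                        best, scheme, improved = c, trial, True
            if not improved:                        # no single-layer change helps
                break
        return scheme, best

Each sweep evaluates a number of schemes equal to the sum of the per-layer candidate counts rather than their product, which is how a coordinate descent search keeps overhead low relative to exhaustive enumeration.
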
Published in: IEEE Transactions on Parallel and Distributed Systems (Volume: 35, Issue: 8, August 2024)
Page(s): 1331 - 1344
Date of Publication: 07 May 2024
