
A practically constant-time MPI Broadcast Algorithm for large-scale InfiniBand Clusters with Multicast


3 Author(s)
Torsten Hoefler (Dept. of Computer Science, Chemnitz University of Technology, Strasse der Nationen 62, 09107 Chemnitz, Germany; Open Systems Laboratory, Indiana University, 501 N. Morton Street, Bloomington, IN 47404, USA); Christian Siebert; Wolfgang Rehm

An efficient implementation of the MPI_BCAST operation is crucial for many parallel scientific applications. The hardware multicast operation seems applicable to switch-based InfiniBand cluster systems. Several approaches have been implemented so far; however, no production-ready code is available yet, which makes optimal algorithms a subject of active research. Some problems still need to be solved in order to bridge the semantic gap between the unreliable multicast and MPI_BCAST. The biggest of these problems is ensuring reliable data transmission in a scalable way. Acknowledgement-based methods that scale logarithmically with the number of participating MPI processes exist, but they do not meet the high demands of high-performance computing. We propose a new algorithm that performs the MPI_BCAST operation in practically constant time, independent of the communicator size. This method is well suited for large communicators and (especially) small messages due to its good scaling and its ability to prevent parallel process skew. We implemented our algorithm as a collective component for the Open MPI framework using native InfiniBand multicast, and we show its scalability on a cluster with 116 compute nodes, where it reduces MPI_BCAST latency by up to 41% compared to the "TUNED" Open MPI collective.
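To see why acknowledgement-based reliability schemes fall short at scale, a minimal sketch of the round count of a tree-structured acknowledgement phase (illustrative only; the function name and the binary tree shape are assumptions, not the paper's implementation):

```python
import math

def ack_tree_rounds(num_procs: int) -> int:
    """Rounds needed by a binary-tree acknowledgement scheme.

    Grows as ceil(log2(P)) with the number of MPI processes P --
    the logarithmic scaling the abstract refers to. (Assumed binary
    tree; other ack trees differ only by a constant factor.)
    """
    return math.ceil(math.log2(num_procs)) if num_procs > 1 else 0

# A hardware-multicast broadcast with a constant-time correction
# phase, as the paper proposes, needs O(1) rounds regardless of P.
for p in (2, 116, 1024, 65536):
    print(p, ack_tree_rounds(p))
```

Even at the 116 nodes of the test cluster, the ack tree already adds seven dependent round trips per broadcast, which motivates the practically constant-time approach.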

Published in:

2007 IEEE International Parallel and Distributed Processing Symposium

Date of Conference:

26-30 March 2007