Efficient and scalable all-to-all personalized exchange for InfiniBand-based clusters

Authors: Sur, S.; Hyun-Wook Jin; Panda, D.K. (Dept. of Comput. & Inf. Sci., Ohio State Univ., Columbus, OH, USA)

The all-to-all personalized exchange is the densest collective communication function offered by the MPI specification. The operation involves every process sending a different message to all other participating processes. This collective operation is essential for many parallel scientific applications. With increasing system and message sizes, it becomes challenging to offer a fast, scalable and efficient implementation of this operation. InfiniBand is an emerging modern interconnect. It offers very low latency, high bandwidth and one-sided operations like RDMA write. Its advanced features like RDMA write gather allow us to design and implement all-to-all algorithms much more efficiently than in the past. Our aim in this work is to design efficient and scalable implementations of traditional personalized exchange algorithms. We present two novel approaches towards designing all-to-all algorithms for short and long messages, respectively. The hypercube RDMA write gather and direct eager schemes effectively leverage the RDMA and RDMA with write gather mechanisms offered by InfiniBand. Performance evaluation of our design and implementation reveals that it is able to reduce the all-to-all communication time by up to a factor of 3.07 for 32-byte messages on a 16-node InfiniBand cluster. Our analytical models suggest that the proposed designs perform 64% better on InfiniBand clusters with 1024 nodes for a 4 KB message size.
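The hypercube scheme named in the abstract routes each personalized message through log2(p) pairwise-exchange steps: in step k, every process exchanges with the partner whose rank differs in bit k, forwarding exactly those messages whose destination disagrees with the current holder in that bit. As a rough illustration of this routing pattern (a sequential simulation of the standard hypercube algorithm, not the paper's RDMA-based implementation), one might sketch:

```python
def hypercube_alltoall(p, payload):
    """Simulate hypercube routing of an all-to-all personalized exchange
    on p = 2^d processes.  payload(src, dst) is the distinct message
    that src sends to dst (hypothetical helper for illustration)."""
    assert p > 0 and p & (p - 1) == 0, "p must be a power of two"
    # buf[i] holds the (src, dst, data) triples currently resident at process i
    buf = [[(i, dst, payload(i, dst)) for dst in range(p)] for i in range(p)]
    dims = p.bit_length() - 1          # log2(p) exchange steps
    for k in range(dims):
        nxt = [[] for _ in range(p)]
        for i in range(p):
            for src, dst, data in buf[i]:
                # forward along dimension k iff bit k of the holder's rank
                # disagrees with bit k of the destination rank
                if ((dst ^ i) >> k) & 1:
                    nxt[i ^ (1 << k)].append((src, dst, data))
                else:
                    nxt[i].append((src, dst, data))
        buf = nxt
    return buf  # buf[j] now holds one message from every source, all addressed to j

# Example: 8 simulated processes, 3 exchange steps.
result = hypercube_alltoall(8, lambda s, d: (s, d))
```

Each message crosses at most log2(p) links, which is why the scheme suits short messages; the direct eager scheme trades more concurrent sends for fewer forwarding hops on long messages.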

Published in:

2004 International Conference on Parallel Processing (ICPP 2004)

Date of Conference:

15-18 Aug. 2004