Designing Non-blocking Broadcast with Collective Offload on InfiniBand Clusters: A Case Study with HPL

7 Author(s)
Kandalla, K.; Subramoni, H.; Vienne, J.; Raikar, S.P.; et al. — Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA

The upcoming MPI-3.0 standard is expected to include non-blocking collective operations. Non-blocking collectives offer a new MPI interface with which an application can decouple the initiation and completion of collective operations. However, to be effective, the MPI library must provide a high-performance, scalable implementation. One of the major challenges in designing an effective non-blocking collective operation is ensuring that the operation makes progress while processors are busy with application-level computation. The recently introduced Mellanox ConnectX-2 InfiniBand adapters offer a task-offload interface (CORE-Direct) that enables communication progress without requiring CPU cycles. In this paper, we present the design of a non-blocking broadcast operation (MPI_Ibcast) using the CORE-Direct offload interface. Our experimental evaluations show that our implementation delivers near-perfect overlap without penalizing the latency of the MPI_Ibcast operation. Since existing MPI implementations do not provide non-blocking collective communication, scientific applications have been modified to implement collectives on top of MPI point-to-point operations to achieve overlap. HPL is one such application use case for non-blocking collectives. We explore the benefits of our proposed network-offload-based MPI_Ibcast implementation with HPL and observe that HPL can achieve its peak throughput at significantly smaller problem sizes, which also improves its run-time by up to 78% with 512 processors. We also observe that our proposed designs can minimize the impact of system noise on applications.

Published in:

2011 IEEE 19th Annual Symposium on High Performance Interconnects (HOTI)

Date of Conference:

24-26 Aug. 2011