Reducing Connection Memory Requirements of MPI for InfiniBand Clusters: A Message Coalescing Approach

3 Author(s)

Koop, M. J. (Dept. of Computer Science and Engineering, Ohio State University, Columbus, OH); Jones, T.; Panda, D. K.

Abstract:

Clusters in the area of high-performance computing have been growing in size at a considerable rate. In these clusters, the dominant programming model is the Message Passing Interface (MPI), so the MPI library plays a key role in resource usage and performance. To obtain maximal performance, many clusters deploy a high-speed interconnect between compute nodes. One such interconnect, InfiniBand, has been gaining popularity due to features such as Remote Direct Memory Access (RDMA) and high performance. As a result, it is being deployed in a significant number of clusters and has been chosen as the standard interconnect for capacity clusters within the DOE Tri-Labs. As these clusters grow in size, care must be taken to ensure that resource usage does not increase too sharply with scale. In particular, the MPI library's resource usage should not grow at a rate that will exhaust node memory or starve user applications. In this paper we present our findings on current memory usage when all connections are created, and we design a message coalescing method that decreases memory usage significantly. Our models show that the default configuration of MVAPICH can grow to 1 GB per process for 8K processes, while our enhancements reduce usage by an order of magnitude to around 120 MB per process while maintaining near-equal performance. We have validated our design on a 575-node cluster and shown no performance degradation for a variety of applications. We also increase the attainable message rate by over 150%.
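For scale, the abstract's figures imply a per-connection footprint of roughly 1 GB / 8192 ≈ 128 KB in the default configuration versus roughly 120 MB / 8192 ≈ 15 KB with the enhancements, assuming a fully connected job in which each process holds one connection per peer.

The abstract describes message coalescing only at this high level, so the following is a minimal C sketch of the general idea, not the authors' implementation: small messages bound for the same peer are packed into one staging buffer and shipped as a single send, so fewer and smaller per-connection buffers are needed. The buffer size, the 4-byte length framing, and all identifiers (coalesce_buf, coalesce_send, flush_peer, post_send) are hypothetical.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define COALESCE_CAPACITY 8192   /* bytes staged per peer before a flush */

struct coalesce_buf {
    uint8_t data[COALESCE_CAPACITY];
    size_t  used;                 /* bytes currently staged */
    int     nmsgs;                /* small messages packed so far */
};

/* Stand-in for posting a single InfiniBand send of the packed buffer. */
static void post_send(int peer, const uint8_t *buf, size_t len, int nmsgs)
{
    (void)buf;
    printf("peer %d: 1 send, %zu bytes, %d coalesced messages\n",
           peer, len, nmsgs);
}

/* Flush everything staged for a peer as one network operation. */
static void flush_peer(int peer, struct coalesce_buf *cb)
{
    if (cb->used == 0)
        return;
    post_send(peer, cb->data, cb->used, cb->nmsgs);
    cb->used  = 0;
    cb->nmsgs = 0;
}

/*
 * Stage one small message, framed with a 4-byte length header so the
 * receiver can unpack the coalesced stream. Assumes a single framed
 * message fits in COALESCE_CAPACITY.
 */
static void coalesce_send(int peer, struct coalesce_buf *cb,
                          const void *msg, uint32_t len)
{
    if (cb->used + sizeof len + len > COALESCE_CAPACITY)
        flush_peer(peer, cb);             /* no room: ship what we have */
    memcpy(cb->data + cb->used, &len, sizeof len);
    memcpy(cb->data + cb->used + sizeof len, msg, len);
    cb->used += sizeof len + len;
    cb->nmsgs++;
}

int main(void)
{
    struct coalesce_buf cb = {0};
    for (int i = 0; i < 2000; i++)        /* many tiny messages... */
        coalesce_send(1, &cb, "ping", 4);
    flush_peer(1, &cb);                   /* ...become a handful of sends */
    return 0;
}

A real MPI library would route messages larger than the staging buffer through a separate non-coalesced path and would also flush on progress-engine activity rather than only when the buffer fills; the sketch omits both for brevity.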

Published in:

Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2007)

Date of Conference:

14-17 May 2007