Multiple Virtual Lanes-aware MPI collective communication in multi-core clusters

Authors:

Bo Li; Zhigang Huo; Panyong Zhang; Dan Meng
National Research Center for Intelligent Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Abstract:

The widespread adoption of multi-core processors in the supercomputing arena means that multiple processes on one node compete for the limited resources of the network interface. This is especially true for collective communication in MPI. InfiniBand, a prevailing high-speed network, provides fine-grained Quality of Service (QoS) through its Virtual Lanes (VLs) mechanism. In this paper, we study the possibility of enhancing the performance of MPI collective communication by using multiple Virtual Lanes. Using multiple VLs can equalize the priorities of simultaneous send requests, accelerate the transmission of small messages, and increase the utilization of network and memory bandwidth, all of which speed up MPI collective communication. Factors that affect the utilization of multiple VLs are discussed as well. Evaluations show that the Alltoall, Reduce, Allreduce, and Reduce_scatter operations benefit from our multiple-Virtual-Lanes-aware design, with performance improvements of about 10%-20%. Application evaluations show that our design increases Fast Fourier Transform performance by 11% on a 1024-core cluster.
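To make the core idea concrete: with several node-local processes sharing one HCA, a VL-aware design must decide which lane each process's traffic uses. The paper does not publish its mapping policy in this abstract, so the sketch below is purely illustrative (the function name and round-robin policy are assumptions, not the authors' method); in a real InfiniBand deployment the assignment would be expressed through Service Levels and the subnet manager's SL-to-VL mapping, not application code.

```python
def assign_virtual_lane(local_rank: int, num_vls: int) -> int:
    """Map a node-local MPI rank to an InfiniBand Virtual Lane index.

    Hypothetical round-robin policy: spread concurrent senders evenly
    across the available data VLs so their send requests are serviced
    with equal priority instead of queueing behind one another on a
    single lane.
    """
    if num_vls < 1:
        raise ValueError("need at least one virtual lane")
    return local_rank % num_vls

# Eight ranks sharing an HCA that exposes 4 data VLs: ranks 0 and 4
# share VL 0, ranks 1 and 5 share VL 1, and so on.
lanes = [assign_virtual_lane(rank, 4) for rank in range(8)]
print(lanes)  # → [0, 1, 2, 3, 0, 1, 2, 3]
```

Under such a policy, a node with as many data VLs as local ranks gives every process its own lane, which matches the abstract's claim that multiple VLs equalize the priorities of simultaneous send requests.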

Published in:

2009 International Conference on High Performance Computing (HiPC)

Date of Conference:

16-19 Dec. 2009