Evaluating the Effect of Inter Process Communication Efficiency on High Performance Distributed Scientific Computing

3 Author(s)

Scientific applications like weather forecasting require high performance and fast response times. This ideal has always been constrained by the peculiarities of the underlying platforms, especially distributed platforms. One such constraint is the efficiency of communication between geographically dispersed and physically distributed processes running these applications, that is, the efficiency of inter-process communication (IPC) mechanisms. This paper provides hard evidence that an operating-system kernel-level implementation of IPC on multi-computers reduces the execution time of a weather forecasting model by nearly half on average, compared to an IPC mechanism implemented at the library level. A well-known non-hydrostatic version of the Penn State/NCAR mesoscale model, called MM5, is executed on a networked cluster. The performance of MM5 is measured with two distributed implementations of IPC: a kernel-level implementation called DIPC2006 and a well-known library-level implementation, MPI. It is shown how, and argued why, the performance of MM5 on a DIPC2006-configured cluster is far better than its performance on a similarly configured MPI cluster. Even setting aside the favorable properties of kernel-level implementations, such as safety, privilege, reliability, and primitiveness, the insight is twofold: scientists may look for more efficient distributed implementations of IPC to run their simulations faster, and computer engineers may try harder to develop more efficient distributed implementations of IPC for scientists.
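To make the library-level side of the comparison concrete, the following is a minimal sketch (not taken from the paper) of the kind of user-space message passing MPI provides: one process sends a small boundary buffer to another, standing in for the far larger and more frequent halo exchanges a decomposed MM5 run performs. The buffer size and values are illustrative only.

/* Minimal MPI sketch of library-level IPC (illustrative, not from the paper).
 * Build: mpicc ipc_sketch.c -o ipc_sketch
 * Run:   mpirun -np 2 ./ipc_sketch
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    double halo[64] = {0.0};   /* stand-in for one boundary strip of the model grid */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        halo[0] = 42.0;        /* pretend this value was just computed */
        MPI_Send(halo, 64, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(halo, 64, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received halo[0] = %f\n", halo[0]);
    }

    MPI_Finalize();
    return 0;
}

All of this messaging logic lives in a user-space library; the paper's point is that moving the equivalent IPC support into the operating-system kernel (as DIPC2006 does) changes where that communication cost is paid.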

Published in:

2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC '08), Volume 1

Date of Conference:

17-20 Dec. 2008