
Scientific Application Performance on HPC, Private and Public Cloud Resources: A Case Study Using Climate, Cardiac Model Codes and the NPB Benchmark Suite

Authors (4): P. E. Strazdins (Research School of Computer Science, Australian National University, Canberra, ACT, Australia); Jie Cai; Muhammad Atif; J. Antony

Abstract:

The ubiquity of on-demand cloud computing resources enables scientific researchers to dynamically provision and consume compute and storage resources in response to science needs, whereas traditional HPC compute resources are often centrally managed with a priori CPU-time allocations and use policies. A long-term goal of our work is to assess the efficacy of preserving the user environment (compilers, support libraries, runtimes and application codes) available at a traditional HPC facility for deployment into a VM environment, which can subsequently be used in both private and public scientific clouds. This would afford users greater flexibility in choosing hardware resources that better suit their science needs, as well as aiding them in transitioning onto private/public cloud resources. In this paper we present work-in-progress performance results for a set of benchmark kernels and scientific applications running in a traditional HPC environment, a private VM cluster and an Amazon HPC EC2 cluster. These are, respectively, the OSU MPI micro-benchmarks, the NAS Parallel macro-benchmarks, and two large scientific application codes (the UK Met Office's MetUM global climate model and the Chaste multi-scale computational biology code). We discuss parallel scalability and runtime information obtained using the IPM performance monitoring framework for MPI applications. We were also able to successfully build application codes in a traditional HPC environment and package these into VMs which ran on both private and public cloud resources.
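For readers unfamiliar with the first of these benchmarks: the OSU MPI micro-benchmarks measure point-to-point latency and bandwidth between pairs of MPI ranks, the layer most directly affected by virtualised networking. The following is a minimal ping-pong latency sketch in the same spirit, not the OSU code itself; the message size and iteration count are illustrative assumptions.

/* Minimal MPI ping-pong latency sketch (illustrative, not the OSU benchmark):
 * two ranks exchange a fixed-size message repeatedly and rank 0 reports
 * the average one-way latency. Build with: mpicc -O2 -o pingpong pingpong.c */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;      /* assumed iteration count */
    const int msg_bytes = 8;     /* assumed message size in bytes */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = malloc(msg_bytes);
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* send to rank 1 and wait for the echo */
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* echo the message back to rank 0 */
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg one-way latency: %.3f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);   /* round trip = 2 messages */

    free(buf);
    MPI_Finalize();
    return 0;
}

Running such a program with two ranks (e.g. mpirun -np 2 ./pingpong) on an HPC node pair, a private VM pair and an EC2 instance pair would give a first-order comparison of interconnect latency across the three platforms considered in the paper.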

Published in:

2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW)

Date of Conference:

21-25 May 2012