
Runtime Support for Memory Adaptation in Scientific Applications via Local Disk and Remote Memory


4 Author(s):
Chuan Yue (Dept. of Comput. Sci., Coll. of William & Mary, Williamsburg, VA); R. T. Mills; A. Stathopoulos; D. Nikolopoulos

The ever-increasing memory demands of many scientific applications and the complexity of today's shared computational resources still require the occasional use of virtual memory, network memory, or even out-of-core implementations, with well-known drawbacks in performance and usability. In this paper, we present a general framework, based on our earlier MMLIB prototype, that enables fully customizable memory malleability in a wide variety of scientific applications. We provide several necessary enhancements to the environment-sensing capabilities of MMLIB and introduce a remote memory capability, based on MPI communication of cached memory blocks between compute nodes and designated memory servers. We show experimental results from three important scientific applications that require the general MMLIB framework. Under constant memory pressure, we observe execution time improvements of factors between three and five over relying solely on the virtual memory system. With remote memory employed, these factors are even larger and significantly better than those of other, system-level remote memory implementations.
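The core idea the abstract describes — an application keeping a bounded set of memory blocks resident and evicting the rest to a slower tier (local disk or a remote memory server) under memory pressure — can be sketched in miniature. The following is an illustrative toy, not the authors' MMLIB implementation: the class name `BlockCache`, the LRU eviction policy, and the use of disk files as a stand-in for an MPI-reachable memory server are all assumptions made for the example.

```python
import os
import tempfile
from collections import OrderedDict

class BlockCache:
    """Illustrative sketch: keep at most `capacity` data blocks in RAM,
    evicting the least recently used block to a disk file (a stand-in
    for a remote memory server) when the resident set grows too large."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()        # block_id -> bytes, LRU order
        self.spill_dir = tempfile.mkdtemp()  # slower tier (disk / "remote")

    def _spill_path(self, block_id):
        return os.path.join(self.spill_dir, "block_%d" % block_id)

    def put(self, block_id, data):
        self.resident[block_id] = data
        self.resident.move_to_end(block_id)  # mark as most recently used
        while len(self.resident) > self.capacity:
            # Evict the coldest block to the slower tier.
            victim, payload = self.resident.popitem(last=False)
            with open(self._spill_path(victim), "wb") as f:
                f.write(payload)

    def get(self, block_id):
        if block_id in self.resident:
            self.resident.move_to_end(block_id)
            return self.resident[block_id]
        # Fault the evicted block back in, possibly evicting another.
        with open(self._spill_path(block_id), "rb") as f:
            data = f.read()
        self.put(block_id, data)
        return data
```

In the paper's setting the eviction target would be a designated memory server reached via MPI messages rather than a local file, and the sensing of memory pressure would be dynamic rather than a fixed capacity, but the resident-set management pattern is the same.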

Published in:

2006 15th IEEE International Conference on High Performance Distributed Computing
