Enabling GPU and Many-Core Systems in Heterogeneous HPC Environments Using Memory Considerations

4 Author(s)
Guim, F.; Rodero, I.; Corbalan, J.; Parashar, M.
Computer Architecture Dept., Technical University of Catalonia (UPC), Barcelona, Spain

Increasing the utilization of many-core systems has been a forefront research topic in recent years. Although many-core architectures were merely theoretical models a few years ago, they have become an important part of the high-performance computing market. The semiconductor industry has developed Graphics Processing Unit (GPU) systems that provide access to many cores (e.g., Larrabee, Fermi, or Tesla) and can be used for General-Purpose (GP) computing. In this paper, we propose and evaluate a scheduling strategy for GPU and many-core architectures in HPC environments. Specifically, our strategy is a variant of the backfilling scheduling policy with resource-sharing considerations. It takes into account the differences between GP processors and GPU computing elements in terms of computational capacity and memory bandwidth. To do this, our approach uses a resource model that predicts how shared resources are used by both GP and GPU/many-core elements, and that also captures the performance differences between these elements. First, the model characterizes their differences in computational power and how they share access to the node's memory bandwidth. Second, it characterizes how processes are allocated to the GPU. Using this resource model, we design the Power Aware resource selection policy, which we combine with the LessConsume scheduling policy. Our strategy allocates jobs with the aim of reducing both memory contention and energy consumption. Results show that the scheduling strategies proposed in this work save over 40% of energy and improve system performance by up to 30% with respect to traditional backfilling strategies.
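To make the contention-aware resource selection idea in the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual implementation. In the spirit of the LessConsume policy, it places each job on the node where the job's estimated memory-bandwidth demand causes the least oversubscription. The node and job fields, the `contention_after` formula, and the tie-breaking rule are all assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A compute node with a shared memory-bandwidth budget (assumed model)."""
    name: str
    bandwidth_capacity: float      # GB/s available on the node
    bandwidth_used: float = 0.0    # GB/s already claimed by running jobs

    def contention_after(self, demand: float) -> float:
        """Fraction of capacity oversubscribed if a job with the given
        bandwidth demand were placed here (0.0 means no contention)."""
        total = self.bandwidth_used + demand
        return max(0.0, (total - self.bandwidth_capacity) / self.bandwidth_capacity)

def select_node(nodes: list[Node], demand: float) -> Node:
    """Pick the node with the lowest post-placement memory contention;
    ties are broken by the lowest resulting bandwidth usage."""
    best = min(nodes, key=lambda n: (n.contention_after(demand),
                                     n.bandwidth_used + demand))
    best.bandwidth_used += demand
    return best

# Two nodes, one already heavily loaded: the job lands where it
# causes no oversubscription.
nodes = [Node("gpu-0", 100.0, 70.0), Node("gpu-1", 100.0, 20.0)]
chosen = select_node(nodes, demand=40.0)
print(chosen.name)  # gpu-1
```

A real scheduler would combine a selection step like this with backfilling and with per-architecture performance predictions, as the abstract describes; this sketch shows only the contention-minimizing placement decision.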

Published in:

2010 12th IEEE International Conference on High Performance Computing and Communications (HPCC)

Date of Conference:

1-3 Sept. 2010
