Investigating a Dynamic Loop Scheduling with Reinforcement Learning Approach to Load Balancing in Scientific Applications

3 Author(s)
Rashid, M.; Banicescu, I.; Cariño, R.L. (Dept. of Comput. Sci. & Eng., Mississippi State Univ., Starkville, MS, USA)

The advantages of integrating reinforcement learning (RL) techniques into scientific parallel time-stepping applications have been revealed in research work over the past few years. The objective of the integration is to automatically select the most appropriate dynamic loop scheduling (DLS) algorithm from a set of available algorithms, with the purpose of improving application performance via load balancing during execution. This paper investigates the performance of such a dynamic loop scheduling with reinforcement learning (DLS-with-RL) approach to load balancing. DLS-with-RL is most suitable for time-stepping scientific applications with a large number of steps. The RL agent's behavior depends on two parameters: a learning rate and a discount factor. To investigate the influence of these parameters, an application simulating wavepacket dynamics that incorporates the DLS-with-RL approach was executed on a cluster of workstations. The RL agent implemented two RL algorithms: QLEARN and SARSA. Preliminary results indicate that on a fixed number of processors, the simulation completion time is not sensitive to the values of the learning parameters used in the experiments. The results also indicate that, for this application, there is no advantage to choosing one RL technique over the other, even though the techniques differed significantly in how often they selected the various DLS algorithms.
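For readers unfamiliar with the two update rules, the following sketch illustrates how a tabular agent of the kind the abstract describes might pick a DLS algorithm each time step and update its value estimates with either QLEARN or SARSA. Everything beyond the two named update rules and the learning rate/discount factor parameters is an assumption: the set of DLS algorithms, the single-state formulation, the epsilon-greedy policy, the negative-step-time reward, and the run_time_step placeholder are illustrative choices, not the paper's actual design.

    import random

    # Hypothetical set of DLS techniques the agent chooses from; the
    # abstract does not list the algorithms used in the paper.
    DLS_ALGORITHMS = ["STATIC", "FSC", "GSS", "FAC", "AWF", "AF"]

    class DLSAgent:
        """Tabular RL agent that selects a DLS algorithm each time step.

        Single-state formulation for illustration only; the paper's
        state and reward design are not described in the abstract.
        """

        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1, method="qlearn"):
            self.alpha = alpha      # learning rate parameter studied in the paper
            self.gamma = gamma      # discount factor parameter studied in the paper
            self.epsilon = epsilon  # exploration rate (illustrative choice)
            self.method = method    # "qlearn" or "sarsa"
            self.q = {a: 0.0 for a in DLS_ALGORITHMS}

        def select(self):
            # Epsilon-greedy selection over the available DLS algorithms.
            if random.random() < self.epsilon:
                return random.choice(DLS_ALGORITHMS)
            return max(self.q, key=self.q.get)

        def update(self, action, reward, next_action):
            # QLEARN bootstraps on the greedy next action (off-policy);
            # SARSA bootstraps on the action actually taken (on-policy).
            if self.method == "qlearn":
                target = reward + self.gamma * max(self.q.values())
            else:
                target = reward + self.gamma * self.q[next_action]
            self.q[action] += self.alpha * (target - self.q[action])

    def run_time_step(algorithm):
        # Placeholder for executing one simulation time step under the
        # given DLS technique; returns a fake elapsed wall-clock time.
        base = {"STATIC": 1.3, "FSC": 1.2, "GSS": 1.1,
                "FAC": 1.0, "AWF": 0.95, "AF": 0.9}
        return base[algorithm] * random.uniform(0.9, 1.1)

    # Illustrative driver: reward is the negative step duration, so
    # faster (better load-balanced) steps look better to the agent.
    agent = DLSAgent(alpha=0.1, gamma=0.9, method="sarsa")
    action = agent.select()
    for step in range(1000):
        elapsed = run_time_step(action)
        next_action = agent.select()
        agent.update(action, -elapsed, next_action)
        action = next_action

Note that the only structural difference between the two methods is the bootstrap target in update(), which is consistent with the abstract's finding that the techniques can differ in which DLS algorithms they select while yielding similar completion times.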

Published in:

2008 International Symposium on Parallel and Distributed Computing (ISPDC '08)

Date of Conference:

1-5 July 2008