Generalized multiprocessor scheduling for directed acyclic graphs

Authors:

G.N. Srinivasa Prasanna, AT&T Bell Labs., Murray Hill, NJ, USA; B.R. Musicus

In the 3rd Annual ACM Symposium on Parallel Algorithms and Architectures, pp. 216-228 (July 1991), we presented several new results in the theory of homogeneous multiprocessor scheduling. A directed acyclic graph (DAG) of tasks was to be scheduled. Tasks were assumed to be parallelizable: as more processors are applied to a task, the time taken to compute it decreases, yielding some speedup. Because of communication, synchronization, and task scheduling overheads, this speedup increases less than linearly with the number of processors applied. The optimal scheduling problem is to determine the number of processors assigned to each task, and the task sequencing, so as to minimize the finishing time. Using optimal control theory, in the special case where the speedup function of each task is p^α (where p is the amount of processing power applied to the task), a closed-form solution for task graphs formed from parallel and series connections was derived. This paper considerably extends these techniques to arbitrary DAGs and applies them to matrix arithmetic compilation. The optimality conditions impose nonlinear constraints on the flow of processing power from predecessors to successors, and on the finishing times of siblings. This paper presents a fast algorithm for determining and solving these nonlinear equations. The algorithm exploits the structure of the finishing time equations to run a conjugate gradient minimization efficiently, leading to the optimal solution. The algorithm has been tested on a variety of DAGs. The results presented show that it is superior to alternative heuristic approaches.
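The closed-form series/parallel result summarized above is easy to illustrate. Below is a minimal sketch (not the authors' code; the function names and the example exponent are illustrative) of composing equivalent task works under the p^α speedup model: a task of work w run at power p finishes in time w/p^α, works in series add, and parallel siblings, which must finish simultaneously under the optimality conditions, combine as (Σ_i w_i^(1/α))^α.

```python
# Series/parallel composition of task works under the p^alpha speedup
# model (a sketch of the closed form, not the authors' implementation).
# A task of work w given constant processing power p runs for w / p**alpha.

ALPHA = 0.5  # illustrative speedup exponent, 0 < alpha <= 1


def series(*works: float) -> float:
    """Equivalent work of tasks executed one after another: works add."""
    return sum(works)


def parallel(*works: float, alpha: float = ALPHA) -> float:
    """Equivalent work of siblings that must finish simultaneously.

    Equal finishing times w_i / p_i**alpha with powers summing to P
    give the combined work (sum_i w_i**(1/alpha))**alpha.
    """
    return sum(w ** (1.0 / alpha) for w in works) ** alpha


def finish_time(work: float, power: float, alpha: float = ALPHA) -> float:
    """Finishing time of the equivalent task given total power."""
    return work / power ** alpha


# Example: task A followed by B and C in parallel, works 4, 1, 9,
# scheduled on total processing power P = 16.
w_eq = series(4.0, parallel(1.0, 9.0))
print(finish_time(w_eq, 16.0))  # about 3.26 time units
```

For arbitrary DAGs, which this paper addresses, no such closed form exists; the analogous finishing-time and power-flow conditions become the nonlinear equations that the paper's conjugate gradient algorithm solves numerically.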

Published in:

Proceedings of Supercomputing '94

Date of Conference:

14-18 Nov 1994