Performance analysis and prediction of processor scheduling strategies in multiprogrammed shared-memory multiprocessors

2 Author(s)
Yue, K.K.; Lilja, D.J. (Dept. of Comput. Sci., Minnesota Univ., Minneapolis, MN, USA)

Small-scale shared-memory multiprocessors are commonly used in workgroup environments where multiple applications, both parallel and sequential, execute concurrently while sharing the processors and other system resources. Using the processors efficiently requires an effective scheduling strategy. We use performance data obtained from an SGI multiprocessor to evaluate several processor scheduling strategies. We examine gang scheduling (coscheduling), static space sharing (space partitioning), and a dynamic allocation scheme called loop-level process control (LLPC) with three new dynamic allocation heuristics. We use regression analysis to quantify the measured data and thereby explore the relationship between the degree of parallelism of the application, the size of the system, the processor allocation strategy, and the resulting performance. We also attempt to predict the performance of an application in a multiprogrammed environment. While the execution time predictions are relatively coarse, the models produce a reasonable rank-ordering of the scheduling strategies for each application. This study also shows that dynamically partitioning the system using LLPC or similar heuristics provides better performance for applications with a high degree of parallelism than either gang scheduling or static space sharing.
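The contrast the abstract draws between dynamic partitioning (LLPC-style) and static space sharing can be illustrated with a small sketch. The function names and the specific policies below are hypothetical, assumed for illustration only; they are not the paper's actual heuristics:

```python
def dynamic_allocate(requested: int, total_procs: int, busy_procs: int) -> int:
    """Hypothetical LLPC-style heuristic: grant a parallel loop at most the
    number of currently idle processors (never fewer than one), so jobs
    adapt to instantaneous load instead of oversubscribing the machine."""
    idle = max(total_procs - busy_procs, 1)
    return min(requested, idle)


def static_space_share(requested: int, total_procs: int, num_jobs: int) -> int:
    """Hypothetical static space sharing: each of num_jobs concurrent jobs
    receives a fixed, equal partition regardless of current load."""
    partition = max(total_procs // num_jobs, 1)
    return min(requested, partition)
```

Under light load, the dynamic scheme can grant a highly parallel loop the whole machine, whereas a static partition caps it at its fixed share; this mirrors the abstract's finding that dynamic partitioning favors applications with a high degree of parallelism.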

Published in:

Proceedings of the 1996 International Conference on Parallel Processing (Volume 3: Software)

Date of Conference:

12-16 Aug 1996