Scheduling processes onto the processors of a parallel machine has long been an important and challenging research area. The issue becomes even more crucial and difficult as we progress toward building cost-effective clusters for demanding applications from off-the-shelf workstations, operating systems, and high-bandwidth networks. Clusters are gaining acceptance not just in scientific applications that need supercomputing power, but also in domains such as databases, web service, and multimedia, which place diverse Quality-of-Service (QoS) demands on the underlying system. Further, these applications have diverse computation, communication, and I/O requirements, making conventional parallel scheduling solutions, such as space sharing or gang scheduling, unattractive. At the same time, leaving each node's native operating system to make scheduling decisions independently can lead to ineffective use of system resources whenever processes communicate. Instead, an emerging class of dynamic coscheduling mechanisms, which take remedial actions to guide the system toward coscheduled execution without requiring explicit synchronization, holds considerable promise for cluster scheduling. Using a detailed simulator, this paper evaluates the pros and cons of different dynamic coscheduling alternatives, comparing them against traditional gang scheduling and against performing no coordinated scheduling at all. The impact of dynamic job arrivals, job characteristics, and different system parameters on these alternatives is evaluated in terms of several performance criteria. In addition, heuristics to enhance one of the alternatives even further are identified, classified, and evaluated. These heuristics are shown to significantly outperform the other alternatives over a spectrum of workload and system parameters, making them a much better option for clusters than conventional gang scheduling.
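To make the idea of a dynamic coscheduling "remedial action" concrete, the following is a minimal illustrative sketch, not the paper's simulator or any specific mechanism it evaluates. It models one commonly cited policy: when a message arrives for a process that is not currently running, the local scheduler boosts that process's priority, so communicating peers on different nodes tend to end up scheduled at the same time without any explicit global synchronization. All class and parameter names here (`Process`, `Node`, `boost`) are invented for illustration.

```python
# Toy model of a dynamic coscheduling policy: priority boost on
# message arrival. Each node schedules purely locally; the message
# arrival acts as the implicit coordination signal.

from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    priority: int = 0          # higher priority runs first
    pending_msgs: int = 0      # messages waiting to be received

@dataclass
class Node:
    processes: list = field(default_factory=list)

    def running(self):
        # Local scheduler: the highest-priority process holds the CPU.
        return max(self.processes, key=lambda p: p.priority)

    def deliver(self, proc_name, boost=10):
        # Dynamic coscheduling hook: a message arriving for a process
        # that is not running triggers a remedial action (a priority
        # boost), nudging this node toward running the receiver while
        # its sender is presumably running on the remote node.
        for p in self.processes:
            if p.name == proc_name:
                p.pending_msgs += 1
                if p is not self.running():
                    p.priority += boost
                return p
        raise KeyError(proc_name)

node = Node([Process("compute_job", priority=5),
             Process("parallel_task", priority=1)])
assert node.running().name == "compute_job"      # local choice before message
node.deliver("parallel_task")                    # message arrives for the peer
assert node.running().name == "parallel_task"    # boosted, now coscheduled
```

The key design point the sketch captures is that no node ever consults another node's scheduler, unlike gang scheduling, which requires globally synchronized time slices; coordination emerges indirectly from communication events.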