Bursty workloads are common in a variety of systems, including grid services, multi-tier architectures, and large storage systems. Studies have shown that burstiness can dramatically degrade system performance through overloading, increased response times, and service unavailability. Computing grids, which often rely on distributed, autonomous resource management, are particularly susceptible to load imbalances caused by bursty workloads. In this paper, we use a simulation environment to investigate the performance of decentralized schedulers under varying intensities of burstiness. We first demonstrate significant performance degradation under strongly and moderately bursty workloads. We then describe two new hybrid schedulers based on duplication with invalidation and assess their effectiveness under different burstiness intensities. Our simulation results show that, compared to conventional decentralized methods, the proposed schedulers achieve a 40% performance improvement under bursty conditions while matching conventional performance under non-bursty conditions.
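To make the duplication-invalidation idea concrete, the sketch below illustrates the general pattern such hybrid schedulers follow: a job is replicated onto several of the least-loaded sites, and once one replica begins executing, the remaining copies are invalidated so they consume no further resources. This is a minimal illustration of the generic technique, not the paper's actual scheduler; the `Site` class, the choice of queue length as the load metric, and the replication factor `k` are all assumptions made for this sketch.

```python
class Site:
    """A grid site holding a FIFO queue of pending job replicas (illustrative)."""

    def __init__(self, name):
        self.name = name
        self.queue = []  # pending (job_id, replica_tag) entries

    def enqueue(self, replica):
        self.queue.append(replica)

    def invalidate(self, job_id):
        # Drop any still-queued replicas of a job that started elsewhere.
        self.queue = [r for r in self.queue if r[0] != job_id]


def submit_with_duplication(job_id, sites, k):
    """Duplicate a job onto the k sites with the shortest queues.

    Queue length is used as a simple load proxy (an assumption of
    this sketch); a real scheduler could use any load estimate.
    """
    targets = sorted(sites, key=lambda s: len(s.queue))[:k]
    for s in targets:
        s.enqueue((job_id, s.name))
    return targets


def start_replica(job_id, winner, sites):
    """Once one replica starts executing, invalidate all the others."""
    for s in sites:
        if s is not winner:
            s.invalidate(job_id)
```

Under burstiness, duplication hedges against a replica landing on a suddenly overloaded site, while invalidation prevents the duplicates from themselves amplifying the load, which is the trade-off the hybrid approach targets.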