A Scheduling Method for Avoiding Kernel Lock Thrashing on Multi-cores

Authors: Yan Cui (Dept. of Computer Science & Technology, Tsinghua University, Beijing, China); Weida Zhang; Yu Chen; Yuanchun Shi

Abstract:

Multi-core architectures have been adopted in various computing environments. Predictions based on Moore's Law suggest that thousands of cores could be integrated on a single chip within 10 years. To achieve better performance and scalability on multi-cores, applications should be multi-threaded, so that threads assigned to different cores can execute concurrently. However, lock contention in kernels can affect scalability so severely that speedup decreases as the number of cores increases (thrashing). Existing efforts to address this problem mainly focus on deferring lock thrashing and therefore cannot fundamentally prevent it. In this paper, we propose lock-aware scheduling to avoid thrashing. Our method detects thrashing on a per-thread basis and migrates contended threads to a smaller set of cores. The optimal number of cores is determined by maximizing the proposed normalized throughput model of the migrated threads. The method is implemented in Linux 2.6.29.4 and evaluated on a 32-core system. Experimental results on a series of lock-intensive micro- and macro-benchmarks demonstrate its effectiveness: for 3 of the 5 workloads exhibiting thrashing behaviour, lock-aware scheduling detects the speedup decrease accurately and sustains the maximal speedup; for the remaining 2 workloads, performance improves greatly although the maximal speedup is not sustained; for 1 workload that does not suffer thrashing, the method introduces negligible runtime overhead.
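
The authors' mechanism lives inside the Linux 2.6.29.4 scheduler; the following user-space C sketch is only an illustration of the two steps described in the abstract, not their code. The per-thread contention estimate and the throughput model T(n) below are hypothetical placeholders: the sketch picks the core count that maximizes the modeled normalized throughput, then confines contended threads to that smaller core set via CPU affinity.

    /*
     * Illustrative sketch of lock-aware scheduling (NOT the paper's
     * in-kernel implementation).  Build with: gcc -O2 sketch.c -pthread
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical throughput model: total throughput of n cores when
     * kernel lock contention grows super-linearly with n, so T(n) peaks
     * and then falls (thrashing).  A real system would derive this from
     * measured per-thread lock wait times. */
    static double modeled_throughput(int n, double contention)
    {
        double per_core = 1.0 / (1.0 + contention * (n - 1) * (n - 1));
        return n * per_core;
    }

    /* Step 1: choose the core count that maximizes modeled throughput. */
    static int optimal_core_count(int max_cores, double contention)
    {
        int best_n = 1;
        double best_t = modeled_throughput(1, contention);
        for (int n = 2; n <= max_cores; n++) {
            double t = modeled_throughput(n, contention);
            if (t > best_t) { best_t = t; best_n = n; }
        }
        return best_n;
    }

    /* Step 2: confine a contended thread to CPUs 0 .. ncores-1. */
    static int migrate_to_core_set(pthread_t tid, int ncores)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        for (int c = 0; c < ncores; c++)
            CPU_SET(c, &set);
        return pthread_setaffinity_np(tid, sizeof(set), &set);
    }

    int main(void)
    {
        int online = (int)sysconf(_SC_NPROCESSORS_ONLN);
        double contention = 0.15;   /* would be measured per thread */
        int n = optimal_core_count(online, contention);

        printf("confining contended threads to %d of %d cores\n", n, online);
        migrate_to_core_set(pthread_self(), n);  /* demo: confine this thread */
        return 0;
    }

In this sketch the detection side is reduced to a single constant; the paper's contribution is doing the detection per thread inside the kernel and driving the migration from the normalized throughput model rather than from a fixed heuristic.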

Published in:

2010 IEEE 16th International Conference on Parallel and Distributed Systems (ICPADS)

Date of Conference:

8-10 Dec. 2010