
Dynamic Cache Reservation to Maximize Efficiency in Shared Cache Multicores

4 Author(s)
Qing Wang (Dept. of Comput. Sci. & Technol., Harbin Inst. of Technol., Harbin, China); Zhenzhou Ji; Tao Liu; Suxia Zhu

Extracting performance from modern multicore architectures requires that parallel sections be divided into many threads of execution. To use these threads effectively, load balancing has become one of the most important factors affecting the performance of applications on multicores. In this paper, we show that the threads belonging to a single multithreaded application can exhibit poorly balanced performance. We propose a dynamic cache reservation scheme that redistributes the reserved cache space to the critical thread, speeding it up while the application is running. In our implementation, we balance the performance of threads belonging to the same application based on runtime information. Our experimental evaluation indicates that the proposed dynamic cache reservation yields benefits of up to 21% over a shared cache without cache reservation, and up to 6% over a statically partitioned cache scheme.
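The abstract does not detail the reallocation policy, so the following is only a minimal sketch of the general idea: per-thread progress counters gathered at runtime identify the critical (slowest) thread, which then receives an additional reserved cache way taken from the thread that is furthest ahead. Thread count, way counts, and progress values are hypothetical, not taken from the paper.

```c
/*
 * Illustrative sketch (not the authors' implementation) of dynamic cache
 * reservation: at each adjustment interval, move one reserved way from the
 * fastest thread to the critical (slowest) thread, based on runtime progress.
 */
#include <stdio.h>

#define NUM_THREADS 4
#define TOTAL_WAYS  16   /* ways of the shared last-level cache (assumed) */

static int reserved_ways[NUM_THREADS] = {4, 4, 4, 4};  /* static starting partition */

/* Transfer one way from the fastest thread to the slowest one,
 * using per-thread progress counters collected at runtime. */
static void rebalance(const long progress[NUM_THREADS])
{
    int slow = 0, fast = 0;
    for (int t = 1; t < NUM_THREADS; t++) {
        if (progress[t] < progress[slow]) slow = t;
        if (progress[t] > progress[fast]) fast = t;
    }
    /* Move a way only if the donor keeps at least one reserved way. */
    if (slow != fast && reserved_ways[fast] > 1) {
        reserved_ways[fast]--;
        reserved_ways[slow]++;
    }
}

int main(void)
{
    /* Hypothetical progress counters (e.g., retired loop iterations). */
    long progress[NUM_THREADS] = {900, 1200, 1150, 1180};

    rebalance(progress);

    for (int t = 0; t < NUM_THREADS; t++)
        printf("thread %d: %d reserved ways\n", t, reserved_ways[t]);
    return 0;
}
```

In this sketch, thread 0 lags behind and would gain one reserved way from thread 1; repeating the adjustment each interval approximates the paper's goal of balancing thread performance within one application.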

Published in:

2011 First International Conference on Instrumentation, Measurement, Computer, Communication and Control

Date of Conference:

21-23 Oct. 2011