SHARP control: Controlled shared cache management in chip multiprocessors


Authors (3): S. Srikantaiah (Dept. of CSE, Pennsylvania State Univ., University Park, PA, USA), M. Kandemir, and Qian Wang

Shared resources in chip multiprocessors (CMPs) pose unique challenges to the seamless adoption of CMPs in virtualization environments and high performance computing systems. While sharing resources such as the on-chip last-level cache is generally beneficial due to increased resource utilization, lack of control over the management of these resources can lead to loss of determinism, degraded performance isolation, and an overall lack of quality of service (QoS) for individual applications. This has direct ramifications on adhering to service level agreements in environments involving consolidation of multiple heterogeneous workloads. Although providing QoS in the presence of shared resources has been addressed in the literature, it has been commonly observed that reserving resources for QoS leads to their under-utilization. This paper proposes the use of formal control theory for dynamically partitioning the shared last-level cache in CMPs, optimizing cache space utilization among multiple concurrently executing applications with well-defined service level objectives. The advantage of using formal feedback control lies in the theoretical guarantees it provides on maximizing the utilization of the cache space in a fair manner. Using feedback control, we demonstrate that our fair speedup improvement scheme regulates cache allocation to applications dynamically such that we achieve a high fair speedup (a global performance fairness metric). We also propose an adaptive, feedback control based cache partitioning scheme that achieves service differentiation among applications with minimal impact on the fair speedup. Extensive simulations using a full system simulator with accurate timing models and a set of diverse multiprogrammed workloads show that our fair speedup improvement scheme achieves a 21.9% improvement on the fair speedup metric across various benchmarks, and that our service differentiation scheme achieves well-regulated service differentiation.
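To make the feedback-control idea concrete, here is a minimal illustrative sketch, not the paper's actual controller. It assumes the commonly used definition of fair speedup as the harmonic mean of per-application speedups, FS = N / sum_i(IPC_alone_i / IPC_shared_i), and reduces the control law to a hypothetical proportional step that, each control interval, shifts one last-level-cache way from the least-slowed application to the most-slowed one. The function names, numbers, and policy below are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's SHARP controller): a simple feedback loop
# that repartitions last-level-cache ways among co-running applications so that
# their slowdowns converge, which tends to raise the fair speedup metric.

from typing import List

def fair_speedup(ipc_alone: List[float], ipc_shared: List[float]) -> float:
    """Harmonic-mean fair speedup across co-scheduled applications:
    FS = N / sum(IPC_alone_i / IPC_shared_i)."""
    n = len(ipc_alone)
    return n / sum(a / s for a, s in zip(ipc_alone, ipc_shared))

def rebalance_ways(ways: List[int], ipc_alone: List[float],
                   ipc_shared: List[float], total_ways: int) -> List[int]:
    """One control step (hypothetical policy): move a single cache way from the
    application with the highest speedup under sharing to the one with the
    lowest, keeping the total allocation constant."""
    speedups = [s / a for a, s in zip(ipc_alone, ipc_shared)]
    donor = max(range(len(ways)), key=lambda i: speedups[i])
    receiver = min(range(len(ways)), key=lambda i: speedups[i])
    if donor != receiver and ways[donor] > 1:
        ways[donor] -= 1
        ways[receiver] += 1
    assert sum(ways) == total_ways
    return ways

# Example: three applications sharing a 16-way LLC (made-up measurements).
ways = [6, 5, 5]
ipc_alone = [1.8, 1.2, 0.9]    # IPC when running alone with the whole cache
ipc_shared = [1.1, 1.0, 0.5]   # IPC measured during the last control interval
print("fair speedup:", round(fair_speedup(ipc_alone, ipc_shared), 3))
print("next allocation:", rebalance_ways(ways, ipc_alone, ipc_shared, 16))
```

In a real system the shared-mode IPCs would be sampled from per-application performance counters every control interval, and the resulting allocation would be enforced by way-partitioning hardware; the paper's scheme derives the allocation adjustments from a formal feedback controller rather than the one-way-per-interval heuristic used here.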

Published in:

2009 42nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-42)

Date of Conference:

12-16 Dec. 2009