Latency-aware Utility-based NUCA Cache Partitioning in 3D-stacked multi-processor systems

Authors (3):
Jongpil Jung (Dept. of Electr. Eng. & Comput. Sci., Korea Adv. Inst. of Sci. & Technol., Daejeon, South Korea); Seonpil Kim; Chong-Min Kyung

The increasing number of processor cores on a chip is a driving force behind the move to three-dimensional integration. As the number of processor cores increases, non-uniform cache architecture (NUCA) is also receiving growing attention. Reducing effective memory access time, including cache hit time and miss penalty, is crucial in such multi-processor systems. In this paper, we propose a Latency-aware Utility-based Cache Partitioning (LUCP) method that reduces memory access time in a 3D-stacked NUCA. To reduce memory access time, the proposed method partitions the shared NUCA cache among the processor cores according to latency variation (which depends on the physical distance from a processor core to a cache bank) and the cache access characteristics of the application programs. Experimental results show that the proposed method reduces memory access time by up to 32.6%, with an average of 14.9%, compared to the conventional method.
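The abstract describes an allocation that trades off per-core cache utility against distance-dependent bank latency. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's actual LUCP algorithm: it greedily hands each cache bank to the core whose estimated memory access time improves the most, using hypothetical per-core miss-rate curves, per-core bank latencies, and a fixed miss penalty. All names and data structures are illustrative.

```python
# Hypothetical sketch of latency-aware utility-based cache partitioning.
# The greedy heuristic, miss-rate curves, and latency tables are assumptions
# for illustration, not the authors' LUCP implementation.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Core:
    cid: int
    accesses: int            # cache accesses observed in the profiling interval
    miss_curve: List[float]  # miss_curve[n] = miss rate when n banks are allocated


def partition(cores: List[Core],
              bank_latency: Dict[int, Dict[int, float]],  # bank_latency[bank][core] = hit latency (cycles)
              miss_penalty: float) -> Dict[int, int]:
    """Greedily assign each cache bank to the core whose estimated memory
    access time drops the most, weighting hits by core-to-bank latency."""
    allocation: Dict[int, List[int]] = {c.cid: [] for c in cores}  # banks owned per core
    owner: Dict[int, int] = {}                                     # bank -> owning core

    def access_time(core: Core, banks: List[int]) -> float:
        # Estimated memory access time = hits * average hit latency + misses * penalty.
        n = min(len(banks), len(core.miss_curve) - 1)
        miss = core.miss_curve[n]
        hit_lat = (sum(bank_latency[b][core.cid] for b in banks) / len(banks)) if banks else 0.0
        return core.accesses * ((1.0 - miss) * hit_lat + miss * miss_penalty)

    for bank in sorted(bank_latency):
        best_core, best_gain = None, 0.0
        for core in cores:
            cur = access_time(core, allocation[core.cid])
            new = access_time(core, allocation[core.cid] + [bank])
            gain = cur - new
            if best_core is None or gain > best_gain:
                best_core, best_gain = core, gain
        allocation[best_core.cid].append(bank)
        owner[bank] = best_core.cid
    return owner
```

The intended contrast with plain utility-based partitioning is that the benefit of an extra bank is weighted by that bank's distance-dependent hit latency, so a core may prefer a smaller partition of nearby banks over a larger but more distant one.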

Published in:

2010 18th IEEE/IFIP VLSI System on Chip Conference (VLSI-SoC)

Date of Conference:

27-29 Sept. 2010