Fast Hierarchical Cache Directory: A Scalable Cache Organization for Large-Scale CMP

5 Author(s)
Chongmin Li (Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China); Haixia Wang; Yibo Xue; Xi Zhang

As more processing cores are integrated onto a single chip and the feature size continues to shrink, increasing on-chip access latency complicates the design of the on-chip last-level cache for chip multiprocessors. At the same time, the overhead of maintaining the on-chip directory cannot be ignored as the number of processing cores grows, so a scalable organization of the on-chip last-level cache is urgently needed. In this work, we propose a fast hierarchical cache directory for tiled CMPs, which divides the CMP tiles into multiple regions hierarchically and combines this organization with data replication. A multi-level directory records the sharing information within each region and helps the regional home node complete operations efficiently, while a fast directory lowers L2 slice access latency. Most requests to the last-level cache can be handled within the local level-1 region. Evaluation indicates that this architecture is highly scalable. Simulation results show that, for a 16-core CMP, the hierarchical cache directory reduces average last-level cache access latency by 46.35% and average on-chip network traffic by 19.25%, while improving system performance by 20.82%.
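To make the hierarchical lookup concrete, the following C++ fragment is a minimal illustrative sketch, not the paper's implementation: it assumes a 16-tile CMP split into four 4-tile level-1 regions, resolves a request in the local region directory when possible, and escalates to a global (level-2) directory otherwise. All names (Chip, DirectoryEntry, lookup) and the replication policy shown are hypothetical.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

// Hypothetical two-level directory model: 16 tiles grouped into
// four level-1 regions of four tiles each (illustrative only).
constexpr int kTilesPerRegion = 4;
constexpr int kNumRegions = 4;

struct DirectoryEntry {
    uint32_t sharers = 0;  // bit vector of sharers (tiles or regions)
    int owner = -1;        // tile currently owning the block
};

// Per-region (level-1) directory: tracks which tiles inside the region share a block.
using RegionDirectory = std::unordered_map<uint64_t, DirectoryEntry>;

struct Chip {
    std::vector<RegionDirectory> level1{kNumRegions};  // one directory per region
    RegionDirectory level2;                            // global directory over regions

    // Resolve a read request from `tile` for block `addr`.
    // Returns true if the request was satisfied inside the local level-1 region.
    bool lookup(int tile, uint64_t addr) {
        int region = tile / kTilesPerRegion;
        auto &l1 = level1[region];
        auto it = l1.find(addr);
        if (it != l1.end()) {
            // Hit in the regional directory: add the requester to the sharer
            // vector and service the request entirely within the region.
            it->second.sharers |= 1u << (tile % kTilesPerRegion);
            return true;
        }
        // Miss in the region: escalate to the level-2 (global) directory,
        // which tracks sharing at region granularity.
        auto &g = level2[addr];
        g.sharers |= 1u << region;
        // Install an entry in the local region so later requests from this
        // region stay local (data replication, in the spirit of the paper).
        l1[addr] = DirectoryEntry{1u << (tile % kTilesPerRegion), tile};
        return false;
    }
};

int main() {
    Chip chip;
    std::cout << "tile 5 served locally: " << chip.lookup(5, 0x1000) << "\n";  // 0: escalates
    std::cout << "tile 6 served locally: " << chip.lookup(6, 0x1000) << "\n";  // 1: region hit
}
```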

Published in:

2010 IEEE Fifth International Conference on Networking, Architecture and Storage (NAS)

Date of Conference:

15-17 July 2010
