
Towards Smaller-Sized Cache for Mobile Processors Using Shared Set-Associativity

2 Author(s)
Davanam, N.; Byeong Kil Lee (Dept. of Electr. & Comput. Eng., Univ. of Texas at San Antonio, San Antonio, TX, USA)

As multi-core designs become dominant, cache structures grow more complex and larger shared level-2 caches are in demand; multi-core design is also being adopted in mobile processors. To achieve higher cache performance, lower power consumption, and smaller chip area in multi-core mobile processors, cache configurations should be reorganized and reanalyzed. MIDs (Mobile Internet Devices), which embed mobile processors, are becoming a major platform and are expected to run more general-purpose workloads on new platforms (e.g., Netbooks). In this paper, we propose a novel cache mechanism that improves performance without increasing cache memory size. Most applications (workloads) exhibit spatial locality in their cache behavior, meaning that a small range of cache locations tends to be used within a given window of time. Applying this notion of locality in reverse, logically farthest sets have relatively low correlation in terms of locality: the probability that two such sets are used in the same basic block is very low. Based on this observation, we investigate the feasibility of sharing two sets of cache blocks for data fill and replacement within a cache. By sharing sets, a meaningful performance improvement can be achieved without increasing cache size. In our simulations with sampled SPEC CPU2000 workloads, the proposed cache mechanism reduces the average cache miss rate by up to 8.5% (depending on cache size and baseline set associativity) compared to the baseline cache.
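To make the set-sharing idea concrete, the following is a minimal sketch of how a cache simulator might pair each set with a logically farthest partner set and borrow a way from it on a conflict. The partner mapping (flipping the top index bit, i.e. index XOR num_sets/2) and the fill policy shown here are assumptions for illustration, not the paper's actual specification.

```python
from collections import OrderedDict

class SharedSetCache:
    """Toy set-associative cache where a set may borrow a free way from
    its logically farthest partner set (hypothetical mapping: flip the
    top index bit), instead of evicting on a conflict miss."""

    def __init__(self, num_sets=8, ways=2):
        assert num_sets % 2 == 0
        self.num_sets = num_sets
        self.ways = ways
        # Each set is an LRU-ordered map keyed by full block address.
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def _partner(self, idx):
        # Assumed "logically farthest" set: halfway across the index space.
        return idx ^ (self.num_sets // 2)

    def access(self, block_addr):
        """Return True on a hit. On a miss, fill the home set; if it is
        full, borrow a free way from the partner set before evicting."""
        idx = block_addr % self.num_sets
        partner = self._partner(idx)
        for s in (idx, partner):
            if block_addr in self.sets[s]:
                self.sets[s].move_to_end(block_addr)  # refresh LRU position
                return True
        home = self.sets[idx]
        if len(home) < self.ways:
            home[block_addr] = True            # free way in home set
        elif len(self.sets[partner]) < self.ways:
            self.sets[partner][block_addr] = True  # borrow partner way
        else:
            home.popitem(last=False)           # evict LRU from home set
            home[block_addr] = True
        return False

# Three blocks that all map to set 0 of a 2-way cache would normally
# thrash; here the third block lands in the partner set and all hit later.
cache = SharedSetCache(num_sets=8, ways=2)
for addr in (0, 8, 16):
    cache.access(addr)        # compulsory misses
hits = [cache.access(a) for a in (0, 8, 16)]
print(hits)  # [True, True, True]
```

In a conventional 2-way cache this access pattern would produce conflict misses on every pass; the borrowed way absorbs the third block without adding any storage, which is the effect the paper measures.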

Published in:

2010 Seventh International Conference on Information Technology: New Generations (ITNG)

Date of Conference:

12-14 April 2010
