Design of a predictive filter cache for energy savings in high performance processor architectures

Authors: Weiyu Tang; R. Gupta; A. Nicolau (Dept. of Inf. & Comput. Sci., Univ. of California, Irvine, CA, USA)

The filter cache has been proposed as an energy-saving architectural feature. A filter cache is placed between the CPU and the instruction cache (I-cache) to provide the instruction stream; energy savings result from servicing accesses out of a small cache. There is, however, a loss of performance when instructions are not found in the filter cache. The majority of the filter cache's energy savings come from the temporal reuse of instructions in small loops. We examine subsequent fetch addresses to dynamically predict whether the next fetch address will hit in the filter cache. When a miss is predicted, we reduce the miss penalty by accessing the I-cache directly. Experimental results show that our next-fetch prediction reduces the performance penalty by more than 91% and is more energy efficient than a conventional filter cache. Our filter cache design achieves average I-cache energy savings of 31% with around 1% performance degradation.
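The mechanism the abstract describes can be illustrated with a small simulation sketch. This is an assumption-laden model, not the paper's actual design: the cache geometry (direct-mapped, 32 lines of 16 bytes) and the predictor (peek at the next fetch address and check whether its tag is already resident) are illustrative choices. The key behavior it shows is that a predicted miss bypasses the filter cache and goes straight to the I-cache, avoiding the extra-cycle penalty of probing the filter cache first.

```python
# Illustrative sketch of a predictive filter cache (parameters assumed,
# not taken from the paper).

LINE_SIZE = 16   # bytes per line (assumed)
NUM_LINES = 32   # filter-cache lines (assumed; ~512 B filter cache)

class PredictiveFilterCache:
    def __init__(self):
        # Direct-mapped tag array; None means the line is invalid.
        self.tags = [None] * NUM_LINES

    def _index_tag(self, addr):
        line = addr // LINE_SIZE
        return line % NUM_LINES, line // NUM_LINES

    def predict_hit(self, next_addr):
        # Examine the next fetch address: predict a filter-cache hit
        # only if the tag for that address is already resident.
        idx, tag = self._index_tag(next_addr)
        return self.tags[idx] == tag

    def fetch(self, addr):
        # Returns which structure supplied the instruction:
        # 'filter' on a predicted (and actual) filter-cache hit,
        # 'icache' when the predictor steers the fetch to the I-cache.
        idx, tag = self._index_tag(addr)
        if self.predict_hit(addr):
            return 'filter'
        # Predicted miss: fetch from the I-cache directly and fill the
        # filter-cache line for future temporal reuse.
        self.tags[idx] = tag
        return 'icache'
```

Running a tight loop's fetch trace through this model shows where the savings come from: the first loop iteration fills the filter cache from the I-cache, and subsequent iterations hit in the small filter cache, which is where the energy is saved.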

Published in:

Proceedings of the 2001 International Conference on Computer Design (ICCD 2001)

Date of Conference: