Selective prefetching: prefetching when only required

2 Author(s)
R. Pendse ; Dept. of Electr. Eng., Wichita State Univ., KS, USA ; H. Katta

Cache memories are commonly used to reduce the number of accesses to slower, lower-level memory, thereby improving memory hierarchy performance. However, a high cache miss ratio can severely degrade system performance, so it is necessary to anticipate cache misses and reduce their frequency. Prefetching is one such technique: it allows the memory system to import data into the cache before the processor needs it. Aggressive prefetching can significantly reduce cache misses, but it may also lead to cache pollution and increased memory traffic. An innovative hardware-based prefetching technique is proposed that overcomes these problems while improving the cache hit rate. Our trace-driven simulations show an improvement of around 60% in the miss rate of an instruction cache employing the LRU-FP block replacement algorithm with the proposed prefetch scheme, compared with the miss rates obtained with a sequential prefetch mechanism.
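The proposed technique itself is hardware-based and not detailed in this abstract. As a rough illustration of the baseline it is measured against, a minimal software model of an LRU cache with sequential (next-block) prefetching on a miss might look like the sketch below; the class, the synthetic trace, and the cache parameters are our own assumptions, not the authors' simulation setup.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache model that counts hits/misses on block addresses."""

    def __init__(self, num_blocks, prefetch=False):
        self.capacity = num_blocks
        self.prefetch = prefetch      # enable sequential (next-block) prefetch
        self.blocks = OrderedDict()   # block -> None, ordered by recency
        self.hits = 0
        self.misses = 0

    def _insert(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)
            return
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        self.blocks[block] = None

    def access(self, block):
        if block in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block)
        else:
            self.misses += 1
            self._insert(block)
            if self.prefetch:
                self._insert(block + 1)       # sequentially prefetch next block

# A hypothetical instruction trace: sequential runs with occasional jumps,
# the pattern where sequential prefetching helps most.
trace = [b for start in (0, 100, 0, 100) for b in range(start, start + 16)]

base = LRUCache(8)
pf = LRUCache(8, prefetch=True)
for blk in trace:
    base.access(blk)
    pf.access(blk)

print(base.misses, pf.misses)  # prefetching halves misses on this trace
```

On this toy trace the sequential prefetcher cuts misses from 64 to 32; the paper's selective scheme aims to retain such gains while avoiding the cache pollution and extra memory traffic that indiscriminate prefetching causes.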

Published in:

Circuits and Systems, 1999. 42nd Midwest Symposium on (Volume 2)

Date of Conference: