Tolerating medium latencies on data caches with hardware-based prefetching

3 Author(s):
E. D. Moreno, S. T. Kofuji, C. A. P. S. Martins (Dept. of Electron. Eng., Sao Paulo Univ., Brazil)

Cache prefetching has been proposed as an important technique for hiding and tolerating the average latency of memory accesses by overlapping processor computation with data accesses. In this paper, we analyze a single-bus multiprocessor using a Stochastic Timed Petri Net (STPN) model to study the effects of various parameters, such as latency (memory and network) and degree of prefetching, on system speed-up and network contention. Our results indicate that fixed sequential prefetching with a prefetch degree of four improves speed-up for medium latencies (64 processor cycles on an 80 MHz processor) whenever the probability that prefetched data in the buffers is useful is high, i.e., greater than 0.5.
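To make the scheme concrete, the following is a minimal sketch of fixed sequential prefetching with degree four, as studied in the paper. The class name, buffer structure, and access pattern are illustrative assumptions, not the authors' model: on each cache miss, the next four consecutive blocks are speculatively placed in a prefetch buffer, and a later reference to a buffered block counts as a useful prefetch.

```python
# Hypothetical sketch of fixed sequential prefetching (degree k); all names
# and structures here are illustrative assumptions, not the paper's STPN model.

class SequentialPrefetchCache:
    def __init__(self, degree=4):
        self.degree = degree          # number of consecutive blocks prefetched per miss
        self.cache = set()            # blocks fetched on demand or promoted
        self.prefetch_buffer = set()  # blocks brought in speculatively
        self.hits = 0
        self.misses = 0
        self.useful_prefetches = 0    # prefetched blocks that were later referenced

    def access(self, block):
        if block in self.cache:
            self.hits += 1
        elif block in self.prefetch_buffer:
            # Useful prefetch: promote the block to the cache and count a hit.
            self.prefetch_buffer.discard(block)
            self.cache.add(block)
            self.useful_prefetches += 1
            self.hits += 1
        else:
            # Miss: fetch the block and prefetch the next `degree` blocks.
            self.misses += 1
            self.cache.add(block)
            for i in range(1, self.degree + 1):
                nxt = block + i
                if nxt not in self.cache:
                    self.prefetch_buffer.add(nxt)

# A purely sequential access stream: every miss prefetches the next 4 blocks,
# so only 1 in 5 accesses misses.
c = SequentialPrefetchCache(degree=4)
for b in range(20):
    c.access(b)
print(c.hits, c.misses, c.useful_prefetches)  # → 16 4 16
```

With a perfectly sequential stream every prefetch is useful (probability 1.0); the paper's finding concerns the regime where this probability drops, with benefit persisting as long as it stays above 0.5.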

Published in:

Electrical and Computer Engineering, 1997. Engineering Innovation: Voyage of Discovery. IEEE 1997 Canadian Conference on (Volume 2)

Date of Conference:

25-28 May 1997