Improving performance by cache driven memory management

Authors: K. Westerholz, S. Honal, J. Plankl, C. Hafer (Corp. Res. & Dev., Siemens AG, Munich, Germany)

Abstract:

The efficient utilization of caches is crucial for a competitive memory hierarchy, as the access times demanded by modern processors continue to decrease. Direct-mapped caches provide the shortest access time; using them reduces hardware cost and speeds memory access, but it introduces additional cache misses and thus performance degradation. Another source of conflicts is the addressing scheme: if caches are physically addressed, memory management affects cache utilization. The enhancements to virtual memory management presented in this paper reduce cache misses by as much as 80% for real-indexed caches. We developed three algorithms that use runtime information; all of them are suitable for direct-mapped and set-associative caches. Applied to the SPECint92 benchmark suite, they yielded a measured performance improvement of 6.9% in a multiprogramming environment on an R4000-based UNIX workstation. This figure includes the overhead caused by the more complex memory management.
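The core idea, namely that the physical page frame chosen by the operating system determines which cache lines a page occupies in a physically indexed cache, can be illustrated with a short page-coloring sketch in C. Every parameter below (a 1 MiB direct-mapped cache, 4 KiB pages, the free_frames list, and the helper names page_color and alloc_colored_frame) is an illustrative assumption; this is not the paper's three runtime-information algorithms, only a minimal static sketch of why frame selection affects cache conflicts.

/*
 * Page-coloring sketch (illustrative assumptions, not the paper's method):
 * a 1 MiB direct-mapped, physically indexed cache with 4 KiB pages covers
 * 1 MiB / 4 KiB = 256 page-sized "colors". Two pages whose frames share a
 * color compete for the same cache lines.
 */
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT   12u                          /* 4 KiB pages (assumed)  */
#define CACHE_BYTES  (1u << 20)                   /* 1 MiB cache (assumed)  */
#define NUM_COLORS   (CACHE_BYTES >> PAGE_SHIFT)  /* 256 colors             */

/* Color of a page: the cache-index bits above the page offset. */
static inline unsigned page_color(uintptr_t page_number)
{
    return (unsigned)(page_number & (NUM_COLORS - 1));
}

/*
 * Hypothetical frame allocator: rather than handing out any free frame,
 * prefer one whose color matches the color of the faulting virtual page,
 * so consecutive virtual pages do not collide in the cache.
 * free_frames[] and num_free stand in for the kernel's free list.
 */
uintptr_t alloc_colored_frame(uintptr_t virtual_page,
                              const uintptr_t *free_frames, size_t num_free)
{
    unsigned want = page_color(virtual_page);

    for (size_t i = 0; i < num_free; i++) {
        if (page_color(free_frames[i]) == want)
            return free_frames[i];                /* conflict-avoiding pick */
    }
    /* Fall back to any frame if no matching color is free. */
    return num_free ? free_frames[0] : 0;
}

A static policy like this only aligns virtual and physical colors; the runtime-information algorithms evaluated in the paper go further by deciding placement from observed behavior, which is what drives the reported miss reductions.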

Published in:

Proceedings of the First IEEE Symposium on High-Performance Computer Architecture (HPCA), 1995

Date of Conference:

1995