Speeding up the memory hierarchy in Flat COMA multiprocessors


Authors: Yang, L.; Torrellas, J. (Center for Supercomputing Research & Development, University of Illinois, Urbana, IL, USA)

Scalable Flat Cache-Only Memory Architectures (Flat COMA) are designed to reduce memory access latencies while minimizing programmer and operating system involvement: to keep latencies low, the programmer need not perform clever data placement, nor does the operating system need to perform page migration. Instead, the hardware automatically replicates data and migrates it to the attraction memories of the nodes that use it. Unfortunately, part of the memory access latency is superfluous. In particular, reads often perform unnecessary attraction memory accesses, require too many network hops, or perform necessary attraction memory accesses inefficiently. In this paper, we propose relatively inexpensive schemes that address these three problems. To eliminate unnecessary attraction memory accesses, we propose a small direct-mapped cache called the Invalidation Cache (IVC). To reduce the number of network hops, the IVC is augmented with hint pointers to processors; these hint pointers are faster and more widely applicable than those in older hint schemes. Finally, to speed up necessary accesses to set-associative attraction memories, we optimize the locality of windows in page-mode DRAMs. We evaluate these optimizations with 32-processor simulations of 8 Splash and Perfect Suite applications and show that they speed up the applications by an average of 20% at a modest cost.

Published in:

Third International Symposium on High-Performance Computer Architecture (HPCA), 1997

Date of Conference:

1-5 Feb 1997