Integrating Memory Compression and Decompression with Coherence Protocols in Distributed Shared Memory Multiprocessors

Authors: Lakshmana Rao Vittanala (Intel Technology India Pvt. Ltd., India); Mainak Chaudhuri

The ever-increasing memory footprint of applications and the growing mainstream popularity of shared memory parallel computing motivate us to explore the potential of memory compression in distributed shared memory (DSM) multiprocessors. This paper, for the first time, integrates on-the-fly cache block compression/decompression algorithms into the cache coherence protocols by leveraging the directory structure already present in these scalable machines. Our proposal is unique in that, instead of employing custom compression/decompression hardware, we use a simple on-die protocol processing core in dual-core nodes to run our directory-based coherence protocol, suitably extended with compression/decompression algorithms. We design a low-overhead compression scheme based on the frequent patterns and zero runs present in evicted dirty L2 cache blocks. Our compression algorithm examines the first eight bytes of an evicted dirty L2 block arriving at the home memory controller and speculates which compression scheme to invoke for the rest of the block. Our customized algorithm for handling completely zero cache blocks hides a significant amount of memory access latency. Our simulation-based experiments on a 16-node DSM multiprocessor with seven scientific computing applications show that our best design achieves, on average, 16% to 73% storage savings per evicted dirty L2 cache block for four of the seven applications, at the cost of at most a 15% increase in parallel execution time.
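The speculation step described in the abstract can be illustrated with a small sketch: inspect the first eight bytes of a 64-byte block and pick a scheme for the remainder. The scheme names, pattern classes, and the small-value threshold below are illustrative assumptions for exposition, not the paper's actual encodings.

```python
# Illustrative sketch of first-eight-byte scheme speculation for a
# 64-byte evicted dirty L2 block. The scheme set and the heuristics
# are assumptions, not the authors' actual design.
ZERO_BLOCK, ZERO_RUN, FREQ_PATTERN, UNCOMPRESSED = range(4)

def choose_scheme(block: bytes) -> int:
    """Speculate a compression scheme from the first 8 bytes of a block."""
    assert len(block) == 64
    prefix = block[:8]
    if prefix == b"\x00" * 8:
        # Speculate the whole block is zero; a cheap full check confirms it.
        if block == b"\x00" * 64:
            return ZERO_BLOCK
        return ZERO_RUN  # a zero prefix suggests long zero runs follow
    # Count 4-byte words in the prefix that match a simple "frequent
    # pattern" class (here: sign-extended small values, an assumed class).
    hits = 0
    for i in (0, 4):
        word = int.from_bytes(block[i:i + 4], "little", signed=True)
        if -128 <= word < 128:
            hits += 1
    return FREQ_PATTERN if hits >= 1 else UNCOMPRESSED
```

A completely zero block would then be transmitted as just a tag, which is consistent with the abstract's claim that the zero-block path hides a significant amount of memory access latency.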

Published in:

2007 International Conference on Parallel Processing (ICPP 2007)

Date of Conference:

10-14 Sept. 2007