Memory energy minimization by data compression: algorithms, architectures and implementation

Author(s):
Benini, L. (Dipartimento di Elettronica, Informatica e Sistemistica, Univ. di Bologna, Italy); Bruni, D.; Macii, A.; Macii, E.

Storing data in compressed form is becoming common practice in high-performance systems, where memory bandwidth constitutes a serious bottleneck to program execution speed. In this paper, we propose hardware-assisted data compression as a tool for reducing the energy consumption of processor-based systems. We introduce a novel and efficient architecture for on-the-fly data compression and decompression that operates on the cache-to-memory path: uncompressed cache lines are compressed before they are written back to main memory and decompressed when cache refills take place. We explore two classes of table-based compression schemes. The first, based on offline data profiling, is particularly suitable for embedded systems, where the data set is usually more predictable than in general-purpose systems. The second is adaptive: it decides whether data words should be compressed according to the data statistics of the program being executed. We describe the architecture of the compression/decompression unit in detail and provide insight into its implementation as a hardware (HW) block. We present experimental results on memory traffic and energy consumption in the cache-to-memory path of a core-based system running standard benchmark programs. The measured energy savings range from 8% to 39% when profile-driven compression is adopted, and from 7% to 26% when the adaptive scheme is used. Performance improvements are also achieved as a by-product, showing the practical applicability of the proposed approach.
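To make the table-based idea concrete, below is a minimal software sketch of a profile-driven frequent-value scheme, not the authors' hardware design: a small table, filled offline with the most frequent data words, maps a frequent 32-bit word to a short index on a hit and passes the word through uncompressed on a miss. The table size, the 4-bit code width, and all function names are illustrative assumptions, not parameters from the paper.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical parameters: a 16-entry table of frequent 32-bit words,
   addressed by a 4-bit code. In the profile-driven scheme the table is
   filled offline; an adaptive scheme would update it at runtime from
   observed word frequencies. */
#define TABLE_SIZE 16

static uint32_t freq_table[TABLE_SIZE];

/* Compress one word: emit a table index if the word is frequent,
   otherwise signal a miss so the word is stored uncompressed. The
   return value models the tag bit that tells the decompressor which
   case applies. */
static bool compress_word(uint32_t word, uint8_t *code)
{
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        if (freq_table[i] == word) {
            *code = (uint8_t)i;   /* 4-bit code replaces a 32-bit word */
            return true;          /* hit: word is compressed */
        }
    }
    return false;                 /* miss: word passes through as-is */
}

/* Decompress: a single table lookup restores the original word. */
static uint32_t decompress_word(uint8_t code)
{
    return freq_table[code];
}

int main(void)
{
    freq_table[0] = 0x00000000u;  /* zero words are typically very frequent */

    uint8_t code;
    if (compress_word(0x00000000u, &code))
        return decompress_word(code) == 0u ? 0 : 1;
    return 1;
}
```

In hardware, the linear search above would be a parallel match (a small CAM), so compression and decompression each cost roughly one table access on the writeback and refill paths.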

Published in:

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (Volume: 12, Issue: 3)