Frequent value compression in packet-based NoC architectures

7 Author(s)
Ping Zhou (ECE Dept., Univ. of Pittsburgh, Pittsburgh, PA); Bo Zhao; Yu Du; Yi Xu; et al.

The proliferation of chip multiprocessors (CMPs) has led to the integration of large on-chip caches. For scalability, a large on-chip cache is often divided into smaller banks that are interconnected through a packet-based network-on-chip (NoC). As the number of cores and cache banks integrated on a single die grows, the on-chip network introduces significant communication latency and power consumption. In this paper, we propose a novel scheme that exploits frequent value compression to optimize the power and performance of the NoC. Our experimental results show that the proposed scheme reduces router power by up to 16.7%, with a CPI reduction of as much as 23.5% in our setting. Compared with the recent zero pattern compression scheme, the frequent value scheme saves up to 11.0% more router power and achieves up to 14.5% more CPI reduction. The hardware design of the frequent value (FV) table and its overhead are also presented.
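To illustrate the general idea behind frequent value compression (not the paper's actual hardware design), the following minimal C sketch shows how a small FV table might be used to encode words of a packet payload: a word that hits the table is replaced by a short index plus a flag bit, while any other word is transmitted unchanged. The table contents, table size, and word width below are illustrative assumptions, not values from the paper.

```c
#include <stdint.h>
#include <stdbool.h>

#define FV_TABLE_SIZE 8  /* hypothetical: 8 frequent values, indexable with 3 bits */

/* Hypothetical FV table of commonly occurring 32-bit words (e.g., 0, -1, small ints). */
static const uint32_t fv_table[FV_TABLE_SIZE] = {
    0x00000000, 0xFFFFFFFF, 0x00000001, 0x00000002,
    0x00000004, 0x00000008, 0x0000FFFF, 0xFFFF0000
};

/* Encoded word: a 1-bit flag plus either a table index or the raw value. */
typedef struct {
    bool     compressed;  /* true -> 'index' is valid, false -> 'raw' is valid */
    uint8_t  index;       /* index into fv_table when compressed */
    uint32_t raw;         /* original word when not compressed */
} fv_word_t;

/* Compress one word: emit a short index if it hits the FV table, else pass it through. */
fv_word_t fv_encode(uint32_t word) {
    fv_word_t out = { .compressed = false, .index = 0, .raw = word };
    for (uint8_t i = 0; i < FV_TABLE_SIZE; i++) {
        if (fv_table[i] == word) {
            out.compressed = true;
            out.index = i;
            break;
        }
    }
    return out;
}

/* Decompress at the receiving router/cache bank using the same table. */
uint32_t fv_decode(fv_word_t w) {
    return w.compressed ? fv_table[w.index] : w.raw;
}
```

In this sketch, a word that matches the table needs only 1 + 3 bits on the link instead of 1 + 32, which is where the reduction in NoC traffic (and hence router power and latency) would come from; sender and receiver must of course agree on the table contents.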

Published in: 2009 Asia and South Pacific Design Automation Conference (ASP-DAC 2009)

Date of Conference: 19-22 Jan. 2009