Dynamically Adaptive I-Cache Partitioning for Energy-Efficient Embedded Multitasking

2 Author(s)
Paul, M. ; Dept. of Electr. & Comput. Eng., Univ. of Maryland, College Park, MD, USA ; Petrov, P.

The ever-increasing importance of battery-powered devices, coupled with high performance requirements and shrinking process geometries, has further exacerbated the problem of energy efficiency in modern embedded systems. Cache memories are a major contributor to system power consumption and, as such, have been a primary target for energy reduction techniques. Recent advances in configurable cache architectures have enabled an entirely new set of approaches for application-driven, energy- and cost-efficient cache resource utilization. We propose a run-time, adaptive instruction cache partitioning methodology that leverages configurable cache architectures to achieve an energy- and performance-conscious adaptive mapping of instruction cache resources to tasks in dynamic multi-task workloads sharing a processor core through preemptive multitasking. Sizable leakage and dynamic power reductions are achieved with only a negligible and system-controlled performance impact. The methodology assumes no prior information regarding the dynamics and structure of the workload. As the proposed dynamic cache partitioning alleviates the adverse effects of cache interference, performance is maintained very close to the baseline case, while achieving 50%-80% reductions in dynamic and leakage power for the on-chip instruction cache memory.
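The abstract does not give the partitioning algorithm itself, so the following is only a minimal sketch of how a run-time controller for a way-configurable cache might work: at each scheduling epoch, ways are reassigned to tasks in proportion to their recently observed miss counts, every runnable task keeps at least one way, and any ways left unassigned remain power-gated to cut leakage. The function name `partition_ways` and the proportional-to-misses heuristic are assumptions for illustration, not the authors' method.

```c
#include <assert.h>

/* Illustrative sketch (hypothetical API): split `total_ways` cache ways
   among `n_tasks` tasks in proportion to each task's recent miss count.
   Every task gets at least one way; ways not handed out stay power-gated.
   Assumes n_tasks <= total_ways. Returns the number of power-gated ways. */
int partition_ways(int total_ways, int n_tasks,
                   const unsigned misses[], int alloc[])
{
    unsigned long long total_misses = 0;
    int used = 0;

    for (int i = 0; i < n_tasks; i++)
        total_misses += misses[i];

    for (int i = 0; i < n_tasks; i++) {
        int share = 1; /* guaranteed minimum allocation per task */
        if (total_misses > 0)
            share += (int)((unsigned long long)(total_ways - n_tasks)
                           * misses[i] / total_misses);
        alloc[i] = share;
        used += share;
    }
    return total_ways - used; /* remaining ways stay power-gated */
}
```

For example, with an 8-way cache and two tasks whose recent miss counts are 300 and 100, the heavier task receives 5 ways, the lighter one 2, and the last way stays gated; integer rounding in the proportional split is what leaves slack ways for power gating.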

Published in:

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (Volume: 19, Issue: 11)