The ever-increasing prevalence of battery-powered devices, coupled with high performance requirements and shrinking process geometries, has further exacerbated the problem of energy efficiency in modern embedded systems. Cache memories are a major contributor to system power consumption and, as such, have been a primary target for energy reduction techniques. Recent advances in configurable cache architectures have enabled an entirely new set of approaches for application-driven, energy- and cost-efficient cache resource utilization. We propose a run-time adaptive instruction cache partitioning methodology that leverages configurable cache architectures to achieve an energy- and performance-conscious adaptive mapping of instruction cache resources to tasks in dynamic multi-task workloads sharing a processor core through preemptive multitasking. Sizable leakage and dynamic power reductions are achieved with only a negligible, system-controlled performance impact. The methodology assumes no prior information regarding the dynamics or the structure of the workload. Because the proposed dynamic cache partitioning alleviates the adverse effects of cache interference, performance is maintained very close to the baseline case, while 50%-80% reductions in dynamic and leakage power are achieved for the on-chip instruction cache memory.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems (Volume 19, Issue 11)
Date of Publication: Nov. 2011