
A dynamic way cache locking scheme to improve the predictability of power-aware embedded systems

Authors: A. Asaduzzaman (Dept. of Electrical Engineering & Computer Science, Wichita State University, Wichita, KS, USA); F. N. Sibai; A. Abonamah

Cache memory challenges the power supply system by consuming a substantial share of a processor's power budget. Caches also increase execution-time unpredictability, making it difficult to support real-time applications. Recent studies indicate that way cache locking can be applied in embedded systems to improve predictability (and the performance/power ratio). In this work, we propose a simple but effective dynamic way cache locking scheme for both single-core and multicore embedded systems. The scheme is based on an analysis of applications' worst-case execution time (WCET), and it allows the locked cache size to change at runtime to achieve optimal predictability and performance/power ratio for the running application. Using the Heptane WCET analyzer, we study MPEG4, FFT, MI, and DFT codes and generate workloads, which provide miss information for the memory blocks (without cache locking). Using the VisualSim tool, we model and simulate a system with four cores and two levels of cache. Experimental results show that our cache locking scheme significantly improves predictability by decreasing total misses by more than 50%. We also observe that predictability can be improved even further by locking more than 25% of the cache size, at the expense of the performance/power ratio.
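The core idea behind the scheme can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the per-block miss dictionary, and the 25% default cap are illustrative assumptions, standing in for the WCET-derived miss information and the locked-fraction trade-off the abstract describes.

```python
def pick_blocks_to_lock(miss_counts, num_ways, max_locked_fraction=0.25):
    """Choose which memory blocks to pin into locked cache ways.

    miss_counts         -- dict: block ID -> misses observed without locking
                           (e.g., as reported by a WCET analyzer run)
    num_ways            -- associativity of the target cache
    max_locked_fraction -- cap on the fraction of ways that may be locked
                           (the abstract notes ~25% as a sweet spot before
                           the performance/power ratio starts to suffer)
    """
    max_locked_ways = int(num_ways * max_locked_fraction)
    # Lock the blocks that miss most often; each locked way holds one block.
    hottest = sorted(miss_counts, key=miss_counts.get, reverse=True)
    return hottest[:max_locked_ways]

def misses_avoided(miss_counts, locked_blocks):
    """Upper-bound estimate: misses eliminated if locked blocks always hit."""
    return sum(miss_counts[b] for b in locked_blocks)

# Hypothetical miss profile for four memory blocks, without locking:
misses = {"blk0": 120, "blk1": 400, "blk2": 75, "blk3": 310}
locked = pick_blocks_to_lock(misses, num_ways=8)  # locks 2 of 8 ways
saved = misses_avoided(misses, locked)
```

Because the selection is recomputed from the running application's miss profile, the locked set (and its size, up to the cap) can change at runtime, which is what distinguishes this dynamic scheme from locking a fixed region at load time.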

Published in:

2011 18th IEEE International Conference on Electronics, Circuits and Systems (ICECS)

Date of Conference:

11-14 Dec. 2011