Feedback-based dynamic voltage and frequency scaling for memory-bound real-time applications

3 Author(s): Poellabauer, C. (Comput. Sci. & Eng., Notre Dame Univ., IN, USA); Singleton, L.; Schwan, K.

Dynamic voltage and frequency scaling (DVFS) is increasingly used to reduce the energy requirements of embedded and real-time applications by exploiting idle CPU resources while still maintaining all applications' real-time characteristics. Accurate predictions of task run-times are key to computing the frequencies and voltages that ensure all tasks' real-time constraints are met. Past work has used feedback-based approaches, in which applications' past CPU utilizations are used to predict future CPU requirements. Mispredictions in these approaches can lead to missed deadlines, suboptimal energy savings, or large overheads due to frequent changes of the chosen frequency or voltage. One shortcoming of previous approaches is that they ignore other indicators of future CPU requirements, such as the frequency of I/O operations, memory accesses, or interrupts. This paper addresses the energy consumption of memory-bound real-time applications via a feedback-loop approach based on measured task run-times and cache miss rates. Using cache miss rates as an indicator of memory access rates yields a more reliable predictor of future task run-times: even in modern processor architectures, memory latencies can only be partially hidden, so cache misses can be used to improve run-time predictions by accounting for potential memory stalls. The results shown in this paper indicate improvements in both the number of deadlines met and the amount of energy saved.
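The feedback idea in the abstract can be illustrated with a minimal sketch: a predicted run-time is formed by blending past predictions with the latest measured run-time, a cache-miss term adds the memory-stall time that frequency scaling cannot remove, and the frequency is chosen to just meet the deadline. This is not the paper's algorithm; all names, constants, and the smoothing scheme here are illustrative assumptions.

```python
# Hypothetical feedback-based DVFS sketch. Constants below are assumed,
# not taken from the paper.
F_MAX = 1.0e9          # maximum CPU frequency in Hz (assumed)
MISS_LATENCY = 100e-9  # memory stall per cache miss in seconds (assumed)
ALPHA = 0.5            # feedback smoothing weight (assumed)

def predict_runtime(prev_prediction, measured_runtime, cache_misses):
    """Blend the previous prediction with the latest measurement
    (exponential smoothing), then add the stall time implied by the
    observed cache misses."""
    blended = ALPHA * measured_runtime + (1 - ALPHA) * prev_prediction
    return blended + cache_misses * MISS_LATENCY

def choose_frequency(predicted_runtime, deadline):
    """Scale the frequency so that work predicted at F_MAX finishes
    exactly at the deadline, capped at F_MAX."""
    required = F_MAX * predicted_runtime / deadline
    return min(F_MAX, required)

# Example: task measured at 2 ms with 5000 cache misses, 10 ms deadline.
pred = predict_runtime(prev_prediction=2.5e-3,
                       measured_runtime=2.0e-3,
                       cache_misses=5000)
freq = choose_frequency(pred, deadline=10e-3)
```

In this example the prediction is 2.75 ms (1.0 ms + 1.25 ms blended run-time plus 0.5 ms of miss stalls), so the controller can run at well below `F_MAX` and still meet the 10 ms deadline. In a real system the miss count would come from hardware performance counters sampled each period.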

Published in:

Real Time and Embedded Technology and Applications Symposium, 2005. RTAS 2005. 11th IEEE

Date of Conference:

7-10 March 2005