A queuing analysis of an energy-saving mechanism in data centers

3 Author(s):
Schwartz, C.; Pries, R.; Tran-Gia, P. (Inst. of Comput. Sci., Univ. of Würzburg, Würzburg, Germany)

The high energy cost of running a data center has prompted a rethinking towards energy-efficient operation. Although data centers are dimensioned for the expected peak traffic load, providers such as Amazon or Google now aim to dynamically adapt the number of active resources to the current traffic load. In this paper, we present a queueing-theoretic model to evaluate the trade-off between waiting time and power consumption when only a subset of servers is active at all times and the remaining servers are enabled on demand. We develop a queueing model with thresholds that turn on reserve servers when needed. Furthermore, we study the resulting system behavior under varying parameters and the requirements for Pareto optimality.
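The paper's threshold model for reserve servers is not reproduced here; as a simplified baseline, the classic M/M/c (Erlang C) formula already illustrates the waiting-time vs. power trade-off the abstract describes. The sketch below is illustrative: the arrival rate, service rate, and server counts in the usage example are arbitrary assumptions, and power consumption is approximated simply by the number of active servers.

```python
import math

def erlang_c(c, a):
    """Probability that an arriving job must wait in an M/M/c queue.

    c: number of active servers
    a: offered load in Erlangs (arrival rate / service rate); requires a < c
    """
    s = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / math.factorial(c) * (c / (c - a))
    return top / (s + top)

def tradeoff(lam, mu, max_servers):
    """For each stable server count c, pair a power proxy (c itself)
    with the mean waiting time W_q = C(c, a) / (c*mu - lam)."""
    a = lam / mu
    c0 = int(a) + 1  # smallest server count that keeps the queue stable
    points = []
    for c in range(c0, max_servers + 1):
        wq = erlang_c(c, a) / (c * mu - lam)
        points.append((c, wq))
    return points

# Illustrative parameters (assumed, not from the paper):
# 8 jobs/s arriving, each server handles 1 job/s, up to 14 servers.
for servers, wait in tradeoff(lam=8.0, mu=1.0, max_servers=14):
    print(f"{servers} servers -> mean wait {wait:.4f} s")
```

Sweeping the number of active servers traces out the trade-off curve: each extra server costs power but reduces the mean waiting time, which is the kind of Pareto frontier the paper analyses (with the added refinement that reserve servers are switched on by queue-length thresholds rather than statically provisioned).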

Published in:

2012 International Conference on Information Networking (ICOIN)

Date of Conference:

1-3 Feb. 2012