The high energy cost of running a data center has prompted a rethinking towards energy-efficient operation. While data centers are provisioned to support the expected peak traffic load, providers such as Amazon or Google now aim to dynamically adapt the number of offered resources to the current traffic load. In this paper, we present a queuing-theoretic model to evaluate the trade-off between waiting time and power consumption when only a subset of servers is active at all times and the remaining servers are enabled on demand. We develop a queuing model with thresholds that turn on reserve servers when needed. Furthermore, we study the resulting system behavior under varying parameters and the requirements for Pareto optimality.
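The threshold policy described above can be illustrated with a small discrete-event simulation. The sketch below is not the paper's model; it is a minimal assumed setup with Poisson arrivals, exponential service times, `base` always-on servers, and `reserve` servers that are switched on while the backlog exceeds `threshold` (busy reserve servers finish their current job before powering down). All parameter names and values are illustrative. It returns the two quantities in the trade-off: mean waiting time and the time-averaged number of active servers, a simple proxy for power consumption.

```python
import heapq
import random


def simulate(lam=8.0, mu=1.0, base=6, reserve=4, threshold=5,
             n_jobs=50000, seed=1):
    """Queue with `base` always-on servers; `reserve` extra servers run
    while the backlog exceeds `threshold`. Returns (mean waiting time,
    time-averaged number of active servers)."""
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    departures = []        # heap of completion times of busy servers
    queue = []             # arrival times of jobs waiting for a server
    waits = []
    arrived = 0
    active = base
    server_time = 0.0      # integral of active servers over time (power proxy)
    last_t = 0.0

    def start_jobs():
        nonlocal active
        # Reserve servers are on only while the backlog exceeds the threshold;
        # a busy server stays on until its job completes.
        target = base + (reserve if len(queue) > threshold else 0)
        active = max(target, len(departures))
        while queue and len(departures) < active:
            arr = queue.pop(0)
            waits.append(t - arr)
            heapq.heappush(departures, t + rng.expovariate(mu))

    while arrived < n_jobs or departures or queue:
        # Advance to the next event: a departure or the next arrival.
        if departures and (arrived >= n_jobs or departures[0] <= next_arrival):
            t = heapq.heappop(departures)
        else:
            t = next_arrival
            arrived += 1
            queue.append(t)
            next_arrival = (t + rng.expovariate(lam)
                            if arrived < n_jobs else float("inf"))
        server_time += active * (t - last_t)
        last_t = t
        start_jobs()

    return sum(waits) / len(waits), server_time / t
```

With these example parameters the base pool alone is overloaded (arrival rate 8 against service capacity 6), so the reserve servers are enabled frequently; lowering `threshold` trades higher average power for shorter waits, which is exactly the trade-off the model is meant to evaluate.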