Network services are often provided by a server cluster. From the viewpoints of both operational expenditure and the global environment, the power consumption of a server cluster must be reduced. Power consumption can be reduced by operating as few computers as possible while still offering sufficiently good performance under changing load. For this operation, it is necessary to decide exactly how many computers should be turned on or off based on the measured load metrics. The number of computers should be determined from multiple load metrics, because a single metric does not adequately represent the status of the various resources that may become bottlenecks. Moreover, the mapping from these metrics to the performance experienced by users is not simple. Thus, the number of computers must be calculated by considering the complicated relationships among the metrics. In addition, the decision rules should be updated appropriately whenever the service content or the computer specifications change. To satisfy these requirements, this study proposes a scheme based on a machine learning technique for deciding the number of server computers. This paper first presents how a machine learning technique is applied to deciding the number of computers. Its implementation is then presented, and the effectiveness of the scheme is confirmed through experiments.
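To make the idea concrete, the mapping from multiple load metrics to a server count can be learned from history. The following is a minimal sketch only, not the paper's actual method: it assumes hypothetical historical pairs of load metrics (CPU %, memory %, requests/s) and the server count that gave acceptable performance, and uses a simple nearest-neighbour lookup in place of the machine learning technique described in the paper.

```python
import math

# Hypothetical training data (illustrative values, not from the paper):
# each row pairs observed load metrics (CPU %, memory %, requests/s)
# with the number of servers that gave acceptable performance.
HISTORY = [
    ((20.0, 30.0, 100.0), 2),
    ((45.0, 50.0, 300.0), 4),
    ((70.0, 65.0, 600.0), 6),
    ((90.0, 80.0, 900.0), 8),
]

def decide_server_count(metrics, k=1):
    """Return a server count based on the k nearest historical load points."""
    ranked = sorted(HISTORY, key=lambda row: math.dist(row[0], metrics))
    neighbours = [count for _, count in ranked[:k]]
    # Round the neighbour average up: under-provisioning hurts users
    # more than running one extra machine.
    return math.ceil(sum(neighbours) / len(neighbours))

print(decide_server_count((60.0, 60.0, 500.0)))  # → 6 (nearest row is (70, 65, 600))
```

Because the rule is derived entirely from recorded (metrics, server count) pairs, retraining on fresh data is how such a scheme would adapt when the service content or computer specifications change.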