An abstracted cloud data center comprising four application types and seven physical machines. There are two main decision types: a job assignment decision for a centralized job dispatcher and a speed scaling decision for each physical machine.
Abstract:
As the usage of mission-critical mobile applications in Industry 4.0 increases, such as smart manufacturing and self-driving cars, the cloud computing paradigm and its supporting data centers have become more crucial. However, a common practice in the cloud data center industry is to supply a surfeit of computing resources, mainly to guarantee robust quality of service (QoS). In this paper, we propose a simple real-time algorithm that combines a power-aware job assignment policy for a centralized job dispatcher with a power- and QoS-aware dynamic speed scaling policy for each physical machine (PM). The job assignment policy, called “Join the Least Power Consuming (LPC) Server,” routes each incoming cloud job to the server consuming the least power at the moment of the request. The server-side adaptive speed scaling policy improves energy efficiency while satisfying a response-time QoS condition. We call this policy “Minimizing Earliness (ME)” since it adjusts the server speed so that jobs finish as close to their deadlines as possible, reducing the earliness of job completion. The design principle of the LPC-ME combination supports both the energy efficiency and the service quality required in cloud data centers. Numerical experiments compare the proposed algorithm’s power consumption and response time with those of existing popular policies and demonstrate better energy efficiency with negligible degradation of service quality.
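The abstract describes the two decision rules only verbally; the following Python sketch is a minimal, illustrative rendering of them, not the authors’ implementation. It assumes a simple cubic speed-power model, an earliest-deadline-first queue order, and hypothetical parameter names (idle_power, dynamic_coeff) introduced purely for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    size: float      # remaining work (normalized CPU cycles)
    deadline: float  # time until the job must finish

@dataclass
class Server:
    idle_power: float            # assumed static power draw (W)
    dynamic_coeff: float         # assumed coefficient of the speed-dependent term
    speed: float = 0.0           # current processing speed
    queue: List[Job] = field(default_factory=list)

    def power(self) -> float:
        # Illustrative power model: P = P_idle + c * s^3 (an assumption, not from the paper)
        return self.idle_power + self.dynamic_coeff * self.speed ** 3

    def me_speed(self, now: float = 0.0) -> float:
        """Minimizing Earliness (sketch): run just fast enough that the backlog,
        served in deadline order, finishes at each job's deadline, not earlier."""
        if not self.queue:
            return 0.0
        jobs = sorted(self.queue, key=lambda j: j.deadline)
        work, required = 0.0, 0.0
        for j in jobs:
            work += j.size
            slack = max(j.deadline - now, 1e-9)
            required = max(required, work / slack)
        return required

def lpc_dispatch(servers: List[Server], job: Job) -> Server:
    """Join the Least Power Consuming server (sketch): route the incoming job
    to the server currently drawing the least power, then rescale its speed."""
    target = min(servers, key=lambda s: s.power())
    target.queue.append(job)
    target.speed = target.me_speed()
    return target

if __name__ == "__main__":
    pms = [Server(idle_power=60, dynamic_coeff=0.5),
           Server(idle_power=60, dynamic_coeff=0.3)]
    for size, dl in [(2.0, 4.0), (1.0, 2.0), (3.0, 6.0)]:
        chosen = lpc_dispatch(pms, Job(size, dl))
        print(f"job(size={size}, deadline={dl}) -> server drawing "
              f"{chosen.power():.2f} W at speed {chosen.speed:.2f}")
```

In this toy run, each arrival goes to whichever PM draws less power at that instant, and the chosen PM then raises its speed only as far as needed to meet the tightest cumulative deadline, which is the earliness-minimizing behavior the abstract describes.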
Published in: IEEE Access (Volume: 10)