Energy-Efficient Hybrid Framework for Green Cloud Computing

The increasing demand for cloud computing services, driven by digital transformation and the high elasticity of the cloud, requires greater efforts to improve the electrical energy efficiency of cloud data centers. In this paper, an energy-efficient hybrid (EEH) framework for improving the efficiency of electrical energy consumption in data centers is proposed and evaluated. The proposed framework is based on both the request scheduling and server consolidation approaches, rather than depending on only one approach as in the existing related works. The EEH framework sorts the customers' requests (tasks) according to their time and power needs before performing the scheduling. It has a scheduling algorithm that considers power consumption when making its scheduling decisions. It also has a consolidation algorithm that determines the underloaded servers to be put to sleep or hibernated, the overloaded servers, the virtual machines to be migrated and the servers that will receive the migrated virtual machines. In addition, the EEH framework includes a migration algorithm for transferring migrated virtual machines to new servers. Results of simulation experiments indicate the superiority of the EEH framework over techniques that use only one approach to reduce power consumption, in terms of power usage effectiveness (PUE), data center energy productivity (DCEP), average execution time, throughput and cost saving.


I. INTRODUCTION
In general, most existing IT-based businesses employ cloud computing technology. Cloud computing is a growing technology, and cloud vendors, such as Google, Amazon, and Microsoft, continuously add more services to their cloud environments to remain competitive and meet the increasing requirements of customers. In addition, many different businesses are shifting their IT-based systems to cloud-based models [1], [2].
As estimated by Cisco [3], about 94% of computing will be performed through cloud computing systems by the year 2021. Moreover, the International Data Corporation (IDC) forecasts that the size of data created and manipulated will reach 175 zettabytes by 2025 [4]. This requires more facilities and services to be established by cloud vendors. These facilities and services cause more data centers and resources to be provisioned in the cloud, resulting in greater amounts of consumed electrical power [5].
Resources of cloud computing systems are made available to customers' services as virtual machines (VMs) that are deployed and run in data centers. The data centers comprise multiple physical servers, and each server has a set of resources. Thus, each cloud has a large number of resources that consume considerable amounts of electrical power, resulting in high levels of CO2 emissions [6].
In [7], N. Jones expected that information and communication technology activities will use 20.9% of the global demand for electricity by 2030. In addition, she stated that data centers consume 200 terawatt-hours of electrical power each year and contribute around 0.3% of overall CO2 emissions. Also, the information and communication industry was expected to generate about 12% of total CO2 emissions by 2020 [8].
With respect to the above observations, how to realize the desired green computing is still a great challenge and a primary concern in cloud computing environments [9], [10]. It represents an essential trend for providers, customers and the environment, with the objectives of reducing operational costs and CO2 emission levels [11]. The primary goal of green computing is to ensure better levels of electrical energy consumption in computing systems such as cloud and grid computing systems.
With this vision, the main contribution of this work is to provide an energy-efficient hybrid (EEH) framework for improving the efficiency of electrical energy consumption in data centers. The proposed framework depends on both the scheduling and consolidation approaches, and assumes that the amount of power consumed by the data center components varies with time. The contributions of this paper can be summarized in the following points:
• A hybrid framework for providing green computing services in cloud computing environments is proposed. The framework is based on scheduling and consolidation techniques. It has a scheduling algorithm that considers reducing both time and power consumption, and a consolidation algorithm based on adaptive values of upper and lower utilization thresholds to reduce power consumption.
• The performance of the proposed framework is evaluated and compared with those of other related methods proposed in the literature using the CloudSim toolkit.
The next sections are structured as follows. Section 2 reviews previous research works. Section 3 presents the details of the employed models. In Section 4, the proposed architecture is presented and the proposed techniques and algorithms are described in detail. In Section 5, simulation results are presented and examined. Section 6 provides the concluding remarks and open directions for future research.

II. RELATED WORK
In addition to reducing the operating costs and response times required by service customers in cloud computing environments, reducing power consumption appears as a core challenge that could minimize CO2 emissions and save power for other services. From this perspective, three main approaches have been used to reduce the amount of power consumed by services in cloud computing systems: hardware, software and consolidation [12]-[14].
In the hardware approach, processors of physical machines can adapt their voltage and frequency levels to control the levels of consumed power [15], [16]. To accomplish this task, processors of computing nodes must have a feature called dynamic voltage and frequency scaling (DVFS). The software and consolidation approaches are more popular than the hardware approach in green cloud computing, and our work considers them.

A. POWER EFFICIENCY BASED ON SOFTWARE
In the software approach, the consumed power can be reduced through the applied scheduling method. The scheduling method assigns customers' requests of services to VMs that can serve them within boundaries of the service level agreement (SLA). To achieve the target of reducing the consumption level of power, the scheduling method selects VMs, implemented on the physical machines (servers), which could consume less power. Many proposed methods depend on the scheduling or software approach [14].
In [17], a rack-aware scheduling algorithm was proposed with the objectives of reducing the service time, the power consumed by computing resources and the power consumed by non-computing resources. The algorithm is based on the genetic approach. The authors of [18] have proposed a flexible scheduling algorithm to resolve the trade-off between the time requirements of the user and the energy consumption requirements of the provider. To accomplish this task, a tuning parameter is adjusted to focus either on reducing the time or on reducing the energy consumption. Kumar and Sharma [19] have presented a scheduling algorithm based on particle swarm optimization to reduce the energy consumed by the servers of the data centers and to optimize both time and monetary costs. They have designed a mathematical model for resource allocation with a defined fitness function. In addition, they considered task deadlines as the quality-of-service factor based on the defined fitness function. In [20], a scheduling algorithm that sorts requests according to the type of workload of VMs in homogeneous clouds has been proposed. The type of workload could be I/O-bound or CPU-bound. The main target of the algorithm is to minimize violations of the service level agreement and the level of consumed electrical energy.
In [21], the authors developed a scheduling algorithm that assumes executing requests of various workflows, in the form of directed acyclic graphs, on the same VMs. In addition, the algorithm tries to minimize power consumption by balancing the weighted frequencies of active hosts.
In [22], H. Yuan et al. have proposed an optimization method for minimizing the possibility of task loss and maximizing the profit of cloud data centers. Their method depends on determining the service rates of data centers and how tasks are split among different providers. In [23], the authors were concerned with the problem of optimizing costs for tasks. They have proposed a spatial scheduling algorithm, based on the simulated annealing concept, for minimizing providers' costs. The authors of [24] have proposed a model for a bi-objective optimization problem in order to minimize the power consumption and maximize the provider revenue, and they have proposed an algorithm to solve this problem.

B. POWER-EFFICIENCY BASED ON CONSOLIDATION
The consolidation approach minimizes the number of active servers in order to reduce the levels of power consumption. In this approach, the servers that hold allocated VMs are monitored, and some of them are chosen to be put to sleep according to their utilization levels. The requests allocated to VMs running on the sleeping servers are relocated and migrated to selected VMs on other active servers. Many existing techniques depend on the consolidation approach [25].
The authors of [26] have considered the reliability of physical servers when making consolidation decisions. They have employed a Markov chain model to define server reliability in the cloud. Their approach selects the destination server according to reliability and power consumption. In [27], A. Al-Dulaimy et al. have proposed a consolidation-based approach that considers the type of jobs assigned to VMs before migration. The approach depends on using the multiple-choice knapsack problem for the initial scheduling of VMs and the placement of the migrated VMs. The approach migrates VMs of different job types to the same server if possible.
Yavari et al. [28] have presented a consolidation-based technique that considers both the temperature and the energy of servers. The authors were concerned with minimizing the amount of heat emitted from servers and improving their utilization. Their technique depends on the firefly optimization approach. E. Arianyan et al. [29] have presented a multi-criterion method for selecting the VMs to be migrated. The criteria include memory, CPU and bandwidth, and the method defines a weight for each of these resources. In [30], the authors have proposed a consolidation-based approach that considers both the network structure and the cooling devices of data centers. In order to optimize the energy consumption, inactive servers, cooling and networking devices are switched off. Their approach contains two passes of VM allocation considering VMs of new requests and VMs of overloaded servers. In [8], the authors have proposed two consolidation-based techniques. The two techniques are based on the best fit decreasing approach. In the first technique, called enhanced-conscious task consolidation (ECTC), servers with the lowest power consumption are selected for consolidation. In the other technique, called the maximum utilization (MaxUtil) technique, servers with the highest computing capacity are selected. In addition, the authors have determined a threshold value to avoid violation of the SLA.
The authors of [31] developed an algorithm for scheduling real-time requests. The algorithm proactively builds the schedule, which is dynamically repaired during the execution time. In order to improve energy consumption, the authors developed dynamic workload-based strategies for scaling computing resources.
The authors of [32] have proposed three models to minimize both the violation of service level agreement and the power consumption. Two models that are concerned with detecting overloaded servers by setting a dynamic threshold value for CPU utilization were presented. The third presented model is concerned with selecting VMs to be migrated from the overloaded servers according to the network traffic. In [33], the authors have proposed an adaptive algorithm for setting the value of the upper threshold of CPU utilization and selecting VMs to be migrated in mobile cloud computing (MCC) environments. The authors of [34] have proposed a selection algorithm for VMs to be migrated based on application types and CPU utilization at different periods.
Most proposed techniques that target reducing electrical power consumption in cloud computing environments employ only one of the three above-mentioned approaches (hardware, software or consolidation). Also, most techniques are concerned with minimizing the service time, the power consumption, or the monetary costs. This paper presents a hybrid framework that employs both the software (scheduling) and consolidation approaches. The framework sorts the customer requests according to the requirements of both power consumption and service time. Accordingly, the scheduling algorithm considers reducing both time and power consumption when making scheduling decisions. Then, the consolidation algorithm uses adaptive values of upper and lower utilization thresholds to reduce the power consumption and the service time. In the last step, the migration algorithm migrates VMs between servers while maintaining the requirements in terms of both power consumption and service time. Thus, the proposed framework considers the reduction of power consumption and service time in all of its components.

III. SYSTEM MODEL
This section describes the architectural configuration of the cloud computing system employed in this paper and the power model of each data center in that system.

A. ARCHITECTURAL CONFIGURATION
Each cloud contains one or more data centers, and each data center has many VMs that can serve customers' requests. From this perspective, each data center has a set of physical servers (physical machines) that represent the platforms on which VMs are located and run. Assume S_d = {S_0, S_1, ..., S_n} is the set of these servers and V_s = {V_0, V_1, ..., V_m} is the set of VMs deployed on the server s. Each VM has a set of computing and storage resources, such as CPUs and memory units.
Additionally, each data center should have a set of cooling devices to maintain servers and their components at safe levels of temperature. For each data center, it is assumed that the number of cooling devices is equal to the number of servers. Furthermore, networking components, such as switches, routers, etc., are required for communication purposes between resources in each data center.
All of the above-mentioned physical components, i.e. computing resources, storage resources, cooling devices and networking devices, share the total electrical power consumption in different percentages.

B. THE POWER MODEL
The main objective of the Green Grid association [35] is to improve the performance efficiency of system resources by reducing the total amount of electrical power consumed by these resources. To this end, it defines a common formula for power efficiency, denoted as the power usage effectiveness (PUE), of a data center d as follows [35]:

PUE_d = P^d_Total / P^d_IT    (1)

where the term P^d_Total is the amount of electrical power consumed by all components located in the data center d. These components include computing resources, storage resources, cooling devices, networking devices and generators. The term P^d_IT is the amount of electrical power consumed by all IT components located in the data center d. These components include computing, storage and networking devices.
The amount of P^d_Total can be expressed as:

P^d_Total = P^d_IT + P^d_nonIT    (2)

where the term P^d_nonIT refers to the amount of electrical power consumed by the non-IT components, i.e. the cooling devices and generators, of the data center d.
There is an important fact that the amount of electrical power consumed by a component varies from time to time according to the number of assigned requests. Consequently, it is essential to define this amount in terms of periods. Assuming that each data center has a set of n physical servers, the amount P^d_Total over a period of time t2 − t1 will be as follows [32], [33]:

P^d_Total = Σ_{s=1}^{n} ∫_{t1}^{t2} ( P^s_cpu(t) + P^s_mem(t) + P^s_net(t) + P^s_nonIT(t) ) dt    (3)

where P^s_cpu(t) represents the electrical power consumed by a server s in computing over the period of time (t2 − t1), P^s_mem(t) represents the electrical power consumed by a server s in storage over that period, P^s_net(t) is the amount of electrical power consumed by networking devices over that period and P^s_nonIT(t) is the amount of electrical power consumed by non-IT components over that period.
In the same way, the amount P^d_nonIT consumed by a data center d over a period (t2 − t1) can be expressed as:

P^d_nonIT = Σ_{s=1}^{n} ∫_{t1}^{t2} P^s_nonIT(t) dt    (4)
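As a numeric illustration of the power model above, the following sketch splits a data center's power draw into IT and non-IT parts and derives the PUE. The per-server readings are invented for illustration and stand in for the per-component terms averaged over one period.

```python
# Hypothetical average power draw (kW) per server over one period:
# "cpu", "mem" and "net" are IT components; "non_it" covers cooling, etc.
def total_power(servers):
    """Split the data center's power into IT and non-IT parts."""
    p_it = sum(s["cpu"] + s["mem"] + s["net"] for s in servers)
    p_non_it = sum(s["non_it"] for s in servers)
    return p_it, p_non_it

def pue(servers):
    """Power usage effectiveness: P_Total / P_IT (always >= 1)."""
    p_it, p_non_it = total_power(servers)
    return (p_it + p_non_it) / p_it

servers = [
    {"cpu": 120.0, "mem": 30.0, "net": 10.0, "non_it": 80.0},
    {"cpu": 100.0, "mem": 25.0, "net": 10.0, "non_it": 75.0},
]
print(round(pue(servers), 3))  # → 1.525
```

A PUE of 1.0 would mean every watt goes to IT equipment; real data centers sit above that because of cooling and power-delivery overhead.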

C. MIGRATION COST
The migration of VMs between servers of a data center is applied in consolidation-based approaches. These approaches assume that VMs of the underloaded servers are migrated to other servers and these underloaded servers should be hibernated in order to save electrical power. These hibernated servers are defined according to the levels of utilization. Also, some VMs from overloaded servers should be migrated to other servers in order to improve the power consumption levels.
Migration of VMs between servers causes an unfavorable influence on the required performance levels of serving requests assigned to the VMs to be migrated. The amount of performance degradation depends on the behavior of the requests in terms of the number of data updates and the available bandwidth.
In our work, it is assumed that the migration of a virtual machine v starts at time t1 and ends at time t2. The time of migration can be defined as:

t2 − t1 = M_v / B_v    (5)

where M_v is the amount of data used by v and B_v is the available bandwidth for migration of v.
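This ratio can be computed directly; the following minimal helper uses illustrative units (MB of VM data, MB/s of bandwidth):

```python
def migration_time(data_mb, bandwidth_mb_per_s):
    """Migration time of a VM: data used by the VM / available bandwidth."""
    return data_mb / bandwidth_mb_per_s

# A VM using 4096 MB migrated over a 512 MB/s link:
print(migration_time(4096, 512))  # → 8.0
```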

IV. THE EEH FRAMEWORK
The architecture of data center components employed by the proposed EEH framework to cope with the power consumption problem is displayed in Figure 1. Most existing architectures have units for performing either scheduling or consolidation. However, our architecture includes units for both scheduling and consolidation. Service requests of customers accompanied with their quality requirements for each service are delivered to the data center. The service request represents a task that could be served by a VM in the data center.
Requests of services come from different customers with different needs in terms of computing resources and service times. Firstly, requests are received by the sorting unit, which sorts them according to the requirements of both power consumption and service time. The main goal of this sorting is to reduce the time spent by the scheduling search for a suitable resource to serve each request.
Assuming that a cloud data center d has N groups of VM types, denoted as VT = {vt_1, vt_2, ..., vt_N}, the incoming requests should be sorted into N different groups, denoted as RT = {rt_1, rt_2, ..., rt_N}. Different services in the same group need different requirements in terms of service time. As a result, the services of each group should be further divided into sub-groups according to the requirements of each request in terms of service time. In this work, a set of predefined types of time requirements is assumed. The scheduling unit receives the sorted groups of requests (tasks). Each request represents a task that could be served by a VM in the data center. For each group of requests Q_d, the scheduling unit assigns each request q to the most appropriate VM that can serve it. The assigned VM is selected from the corresponding group in the VT groups for each request type. In order to achieve its objectives, the scheduling unit implements Algorithm 2.
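The sorting step can be sketched as follows; the request fields (vm_type, required_time) are hypothetical stand-ins for the VT/RT grouping described above.

```python
from collections import defaultdict

def sort_requests(requests):
    """Group requests by the VM type they need, then order each group by
    the customer's required service time (shortest first)."""
    groups = defaultdict(list)
    for r in requests:
        groups[r["vm_type"]].append(r)
    for group in groups.values():
        group.sort(key=lambda r: r["required_time"])
    return dict(groups)

requests = [
    {"id": 1, "vm_type": "compute", "required_time": 20},
    {"id": 2, "vm_type": "io", "required_time": 5},
    {"id": 3, "vm_type": "compute", "required_time": 10},
]
grouped = sort_requests(requests)
print([r["id"] for r in grouped["compute"]])  # → [3, 1]
```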
The central database (DB) unit is the repository of the structural and operational information of all VMs and physical servers located in the data center. The information stored in this unit for each VM or each physical resource should include the computing speed, the memory capacity, the failure rate, the current utilization percentage, the power consumption rate, the availability, etc.

Algorithm 2 The Scheduling Algorithm
Initialization: Q_d is the set of requests submitted to d, V_d is the set of VMs in d, V^q_d is the set of VMs in d that can serve the request q, T^v_q is the time of serving q on v, T^u_q is the customer-required time of serving q, P^v_q is the power consumed for serving q on v.
For each request q in Q_d
    Find V^q_d, the subset of V_d with enough resources to serve q;
    Sort V^q_d in ascending order of P^v_q;
    For each VM v in the sorted V^q_d
        If T^v_q <= T^u_q Then
            Dispatch q to v and update the central DB;
            Break; //proceed to the next request
        EndIf
    EndFor
EndFor

The scheduling unit consults the central DB unit to define the most appropriate VMs for each request. The most appropriate VM is the one that has enough resources to serve the request, the lowest power consumption and the shortest response time that satisfies the customer requirements. The central DB unit replies with one or more available VMs in the data center. In the case of a reply with only one VM, the request is dispatched to that VM and the scheduling unit proceeds to the next request after updating the central DB. In the case of a reply with multiple VMs, Algorithm 2 finds, for each request q, a list of VMs V^q_d that have enough resources to serve q. Thereafter, the V^q_d list is sorted in ascending order according to the amount of power that could be consumed by each VM v when serving the request q. Then, the algorithm selects from the list the first VM that satisfies the time requirements of the customer, T^u_q.

The consolidation unit has the responsibility for determining the underloaded servers to be put to sleep or hibernated, the overloaded servers, the VMs to be migrated along with their requests and the servers that will receive the migrated VMs with their requests. The scheduling algorithm, mentioned above, selects the most suitable VM for each customer's request in terms of power consumption and response time. At run time, some servers may become underloaded or overloaded because of the variability of the workload. One of the main reasons for this variability is the migration of the VMs and their related requests among servers.
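The scheduling logic described above can be sketched in Python as follows. This is an illustrative sketch only: the record fields (free_mips, power_per_mips, required_time) and the power/time estimates are assumptions, not the paper's implementation.

```python
def schedule(requests, vms):
    """Assign each request to the lowest-power VM that meets its deadline."""
    assignment = {}
    for q in requests:
        # V_q_d: VMs with enough free capacity to serve q
        candidates = [v for v in vms if v["free_mips"] >= q["mips"]]
        # ascending order of the power each VM would consume serving q
        candidates.sort(key=lambda v: v["power_per_mips"] * q["mips"])
        for v in candidates:
            serve_time = q["length"] / v["mips"]      # estimated T_v_q
            if serve_time <= q["required_time"]:      # customer limit T_u_q
                assignment[q["id"]] = v["id"]
                v["free_mips"] -= q["mips"]           # update "central DB"
                break                                 # next request
    return assignment

vms = [
    {"id": "v0", "mips": 2000, "free_mips": 2000, "power_per_mips": 0.05},
    {"id": "v1", "mips": 4000, "free_mips": 4000, "power_per_mips": 0.03},
]
reqs = [{"id": "q0", "mips": 1000, "length": 8000, "required_time": 4}]
assignment = schedule(reqs, vms)
print(assignment)  # → {'q0': 'v1'}
```

Here v1 wins because it would consume less power for q0 (0.03 × 1000 vs 0.05 × 1000) and its serving time (2 s) satisfies the 4 s requirement.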
Underloaded and overloaded servers could cause more power consumption that could be avoided via consolidation. In the underloaded case, servers are active and they consume a considerable amount of electrical power for their operation in order to serve small amounts of loads. In the overloaded case, servers could consume more electrical power because of the required extra cooling conditions.
The first task of the consolidation algorithm, implemented in the consolidation unit, is to address the underloaded and overloaded cases of servers. The second task is to define the list of VMs to be migrated with their associated requests from the underloaded servers. The third task is to define the list of VMs to be migrated with their associated requests from the overloaded servers. The last task is to perform the migration for defined lists of VMs along with their requests and to issue a sleep order to freed servers. Shutting down a server would decrease the amount of power consumption to the lowest possible level. On the other hand, turning it back on will need more time and power [35].
In the first task, the consolidation unit determines the underloaded servers. To achieve this, it asks the central DB for the current utilization of each server in the data center. The central DB acquires this type of information from the server monitoring unit, whose main function is to monitor the servers' operation and send periodic reports about their updated status to the central DB. An underloaded server is defined as a server whose current utilization value is less than its associated lower threshold utilization value. For each underloaded server, the consolidation unit lists the VMs to be migrated from that server and their associated requests. The consolidation unit implements Algorithm 3. Thereafter, it uses Algorithm 4 to migrate the VMs and their associated requests to new suitable servers.
Algorithm 4 sorts the VMs to be migrated according to their priorities. Then, for each VM m_s to be migrated, the algorithm searches the set of VMs V_k on the active servers SA_d for a VM m that can serve the request q_s assigned to m_s within the time and power consumption requirements of q_s. If the algorithm finds such a VM m, it migrates m_s into m; otherwise, it configures a new virtual machine for m_s. Then, the algorithm continues to the next VM to be migrated.
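A minimal Python sketch of this migration step, under stated assumptions (all record fields, and the way capacity, time and power are checked, are hypothetical), might look as follows:

```python
def migrate(vms_to_migrate, active_vms):
    """Place each VM-to-migrate on the first active VM that satisfies its
    request's capacity, time and power needs; otherwise configure a new VM."""
    placements = {}
    for ms in sorted(vms_to_migrate, key=lambda v: v["priority"]):
        q = ms["request"]
        target = None
        for m in active_vms:
            if (m["free_mips"] >= q["mips"]
                    and q["length"] / m["mips"] <= q["required_time"]
                    and m["power"] <= q["max_power"]):
                target = m
                break
        if target is None:  # no suitable VM found: configure a new one
            target = {"id": "new-" + ms["id"], "mips": q["mips"],
                      "free_mips": q["mips"], "power": q["max_power"]}
            active_vms.append(target)
        target["free_mips"] -= q["mips"]
        placements[ms["id"]] = target["id"]
    return placements

active_vms = [{"id": "m0", "mips": 4000, "free_mips": 2000, "power": 30}]
vms_to_migrate = [
    {"id": "s0", "priority": 2,
     "request": {"mips": 1000, "length": 4000, "required_time": 2,
                 "max_power": 50}},
    {"id": "s1", "priority": 1,
     "request": {"mips": 3000, "length": 3000, "required_time": 1,
                 "max_power": 40}},
]
placements = migrate(vms_to_migrate, active_vms)
print(placements)  # → {'s1': 'new-s1', 's0': 'm0'}
```

The higher-priority VM s1 is handled first; since no existing VM can host it, a new VM is configured, while s0 fits on the existing VM m0.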
In the overloaded case, the consolidation unit determines the list of overloaded servers. An overloaded server is defined as a server whose current utilization value is greater than its associated upper threshold utilization value. For each overloaded server, the consolidation unit lists the VMs and their associated requests to be migrated from that server. Thereafter, the migration algorithm is called to migrate the VMs and their associated requests to new suitable servers.

Algorithm 3 The Consolidation Algorithm
Initialization: SA_d is the set of active servers in d, V_s is the set of VMs implemented on the server s, U_s is the utilization of the server s, U^l_s is the lower threshold value of utilization of s, U^u_s is the upper threshold value of utilization of s, SP is the set of servers to be passive, M_s is the list of virtual machines to be migrated from server s.
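The consolidation decision described above can be sketched as follows. The thresholds are plain numbers here for readability, whereas the framework derives them adaptively per server; the VM records and the smallest-VM-first shedding order are illustrative assumptions.

```python
def consolidate(servers):
    """Return (servers to put to sleep, VMs to migrate).
    Underloaded servers give up all VMs and go passive; overloaded servers
    shed VMs until their utilization is back under the upper threshold."""
    to_sleep, migrations = [], []
    for s in servers:
        if s["util"] < s["lower"]:
            migrations.extend(s["vms"])     # M_s: all VMs leave the server
            to_sleep.append(s["id"])        # SP: server becomes passive
        elif s["util"] > s["upper"]:
            # shed the smallest VMs first until utilization is acceptable
            for vm, load in sorted(s["vms"], key=lambda x: x[1]):
                if s["util"] <= s["upper"]:
                    break
                migrations.append((vm, load))
                s["util"] -= load
    return to_sleep, migrations

servers = [
    {"id": "s0", "util": 0.10, "lower": 0.2, "upper": 0.8,
     "vms": [("v0", 0.10)]},
    {"id": "s1", "util": 0.95, "lower": 0.2, "upper": 0.8,
     "vms": [("v1", 0.50), ("v2", 0.20)]},
]
result = consolidate(servers)
print(result)  # → (['s0'], [('v0', 0.1), ('v2', 0.2)])
```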
Setting fixed values for the lower and upper utilization thresholds of servers is unsuitable for the dynamic workloads of cloud environments. Therefore, in this paper, adaptive values are used for both the lower and upper utilization thresholds. These values are automatically adjusted according to the current workload patterns of the servers. In our work, the dynamic thresholds algorithm of [37] is used for setting the values of both thresholds. For each server, the values depend on the utilization created by all VMs of the server. This algorithm determines adaptive values for the utilization thresholds of a server based on the utilization history of the VMs run on the server. The algorithm assumes that the future utilization of a VM cannot be predicted, but it is possible to calculate the distribution of utilization over a certain period of time. It assumes that the utilization of a VM v follows a random variable u_v over a certain period of time, and that the utilization of a server s follows the random variable U_s, which represents the sum of the utilizations of the m VMs run on s. The utilization of a server s is modeled by the t-distribution with mean:

Ū = (1/n) Σ_{i=1}^{n} U_i    (6)

and standard deviation:

D_U = sqrt( (1/(n−1)) Σ_{i=1}^{n} (U_i − Ū)^2 )    (7)

The upper threshold value of utilization, U^u_s, of a server s is calculated as:

U^u_s = Ū + t^inv_{n−1}(P_uu) · D_U / √n    (8)

where t^inv_{n−1} represents the inverse cumulative probability function of the t-distribution with n−1 degrees of freedom, P_uu represents the upper limit of the probability interval, P_ul represents the lower limit of the probability interval and n represents the number of collected utilization data items. Similarly, the lower threshold value of utilization can be calculated, but with a single value for all servers in the data center. The provider can set a limit value, U^l, to cap the decrease in the lower threshold value of utilization.
The lower threshold value of utilization, U^l_s, of a server s is calculated as:

U^l_s = max( U^l, Ū + t^inv_{n−1}(P_ul) · D_U / √n )    (9)

where Ū represents the mean and D_U represents the standard deviation.
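A rough sketch of these adaptive thresholds, assuming a history of server utilization samples, is given below. For readability the t-distribution inverse CDF is approximated by the standard normal one (close for large n); scipy.stats.t.ppf(p, n - 1) would be the exact choice. The probability limits and the lower cap are illustrative defaults, not the paper's settings.

```python
from statistics import NormalDist, mean, stdev

def thresholds(history, p_uu=0.95, p_ul=0.05, u_l=0.2):
    """Adaptive (lower, upper) utilization thresholds from a utilization
    history; inv_cdf(p_ul) is negative, so the lower bound sits below the
    mean and u_l caps how far it can decrease."""
    n = len(history)
    u_bar = mean(history)                 # sample mean
    d_u = stdev(history)                  # sample standard deviation
    upper = u_bar + NormalDist().inv_cdf(p_uu) * d_u / n ** 0.5
    lower = max(u_l, u_bar + NormalDist().inv_cdf(p_ul) * d_u / n ** 0.5)
    return lower, upper

history = [0.55, 0.60, 0.50, 0.65, 0.58, 0.62, 0.57, 0.60]
low, up = thresholds(history)
print(low < mean(history) < up)  # → True
```

A server with a steady, low-variance history thus gets tight thresholds, while a bursty one gets a wider band and is reconsolidated less eagerly.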
The overall time complexity of the proposed framework can be obtained by computing the time complexity of each algorithm and then taking the sum.

V. SIMULATION CONFIGURATION
There are several validated simulation tools developed to assess methods proposed in the field of cloud computing. Among them, CloudSim appears as one of the most popular open-source simulation tools. However, its library does not support green computing techniques. Consequently, it should be extended with a user-defined package to enable the simulation of green computing techniques. This package includes the classes and methods required for creating cloudlets and VMs with power-related attributes, such as the power consumption rate, the lower and upper threshold values of utilization needed for consolidation, the power consumption cost, the amounts of power consumed by IT and non-IT components, etc. It also includes classes implementing the logic of the scheduling, consolidation and migration algorithms. The added package is called EnergyEfficientHybridScheduling, and its classes and methods are shown in Table 1.
Before starting the evaluation of the proposed framework using CloudSim, the simulation environment should be configured. The configuration comprises the specifications of the requests to be served and the characterization of the employed cloud platform.
For our experiments, a data center with 200 VMs, 20 servers and 3000 computing resources is implemented. The electrical energy consumption of each computing resource ranges from 1 to 10 kWh. The speed of the computing resources ranges from 2000 to 4000 MIPS. The number of requests is uniformly varied from 1000 to 5000. The arrival of requests follows the Poisson model [38], and the requests are independent. About 60 realizations are performed for each experiment, with an error factor less than or equal to 0.005. Table 2 shows the environment on which CloudSim is run. No fixed values are assumed for the lower and upper utilization thresholds of servers; they are adaptively calculated using the dynamic thresholds algorithm.

VI. RESULTS
This section presents and analyzes the results produced by the simulation experiments to evaluate the performance of the proposed EEH framework. The performance of the proposed EEH framework is compared with those of the proactive and reactive scheduling (PRS) algorithm [31], the enhanced-conscious task consolidation (ECTC) technique [8], the maximum utilization (MaxUtil) technique [8] and the energy-performance trade-off multi-resource cloud task scheduling algorithm (ETMCTSA) [18]. The PRS has a scheduling algorithm that proactively builds the schedule, which is dynamically repaired during the execution time. The ECTC and MaxUtil techniques depend on the consolidation approach, while the ETMCTSA depends on the software (scheduling) approach.
The motivations for selecting each of these algorithms for comparison with the proposed EEH framework can be summarized as follows: PRS: The PRS focuses on reducing the power consumption for real-time requests by scaling up and down computing resources. The EEH framework is compared with the PRS to ensure that the EEH has better power performance for all types of requests including real-time ones.
ECTC and MaxUtil: In the ECTC technique, servers with the lowest power consumption are selected for consolidation. In the MaxUtil technique, servers with the highest computing capacity are selected. Both techniques are chosen for comparison with the proposed EEH framework in order to test the effectiveness of the lower and upper threshold values employed by the EEH framework.
ETMCTSA: The ETMCTSA algorithm is selected for comparison as a scheduling-based algorithm, because it focuses on reducing the power consumption or saving time according to the customers' requirements. The ETMCTSA does not use consolidation, so comparing the proposed EEH framework with it demonstrates the effectiveness of the consolidation in the EEH framework.
The comparison metrics include the PUE, DCEP, average execution time, throughput and cost saving. The PUE, defined in equation (1), is one of the best-known benchmarking metrics used to evaluate data center efficiency in terms of electrical power consumption [6], [35], as noted by the International Energy Agency (IEA). A data center is considered more efficient with smaller values of PUE, and the value of PUE must be greater than or equal to 1. Figure 2(a-d) displays the PUE of the proposed EEH framework and the other techniques for different numbers of VMs. The x-axis displays the number of requests issued by customers of the cloud and the y-axis displays the measured value of the PUE. It is shown that the proposed EEH framework has a smaller value of PUE than those of the other techniques. This means that the proposed framework has higher power usage effectiveness than the other techniques. The main reason behind that is the application of both the scheduling and consolidation approaches in the proposed EEH framework for saving electrical power, whereas the other techniques consider only one approach, which causes more power consumption. In the case of using the scheduling approach only, some servers could be affected by run-time conditions, causing more power consumption due to underloaded and overloaded scenarios. In the case of using the consolidation approach only, requests may first be scheduled to servers that are inefficient from the perspective of power consumption, which also causes more power consumption.
The data center energy productivity (DCEP) relates the work performed to the amount of energy consumed in a data center over a certain period of time. It is defined as:

DCEP = W_t^d / E_t^d    (10)

where W_t^d is the amount of work (computations) performed in a data center d during a period of time t and E_t^d is the total electrical energy consumed during that period. In this paper, the work is considered as the computations performed in the data center. Figure 3(a-d) displays the DCEP of the proposed EEH framework, PRS, ECTC, MaxUtil and ETMCTSA. In this figure, the number of customers' requests is represented on the x-axis and the value of the DCEP is represented on the y-axis. In general, as the number of requests increases, the amount of power consumption increases, but the value of DCEP decreases, because the amount of power consumption is the denominator of the DCEP as shown in equation (10). Figure 3(a-d) clearly shows that the proposed EEH framework has higher DCEP values than the other techniques, because the EEH framework schedules requests to the machines with low power consumption and minimizes the number of active servers in the data center to the greatest possible extent. Hence, the consumed electrical power is reduced and the DCEP is increased.
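The DCEP in equation (10) is a simple ratio and can be sketched as below; the function name is illustrative:

```python
def dcep(work_done: float, energy_consumed: float) -> float:
    """Data center energy productivity: useful work (computations)
    performed per unit of electrical energy consumed over the same
    period. Larger values mean a more productive data center."""
    if energy_consumed <= 0:
        raise ValueError("energy consumed must be positive")
    return work_done / energy_consumed
```

This makes the trend described above easy to see: if the workload grows but energy consumption (the denominator) grows faster, DCEP falls even though more work is being done.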
The average execution time (AET) is an important issue for customers, who always need their requests to be served in the minimum possible time. It is calculated as:

AET = (1/n) * sum_{i=1..n} (TL_Ti / VM_MIPS)

where TL_Ti is the length (number of instructions) of the request (task) Ti, VM_MIPS is the speed of the VM that executes Ti in million instructions per second (MIPS), and n is the number of requests. Figure 4(a-d) displays the AET of the proposed EEH framework, PRS, ECTC, MaxUtil and ETMCTSA, with the number of customers' requests on the x-axis and the AET value on the y-axis. It is shown that the proposed EEH framework has a smaller AET than the other techniques, because it considers the required service time when scheduling requests to VMs in the data center. In addition, the service time is considered when taking the consolidation decision.
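The AET formula averages, over all requests, each task's length divided by the MIPS rating of the VM that executes it. A minimal sketch (function and parameter names are illustrative):

```python
def average_execution_time(task_lengths, vm_speeds):
    """AET = (1/n) * sum(TL_Ti / VM_MIPS): each task's length in million
    instructions divided by the MIPS rating of the VM executing it,
    averaged over the n requests."""
    if len(task_lengths) != len(vm_speeds):
        raise ValueError("one VM speed is required per task")
    n = len(task_lengths)
    return sum(tl / mips for tl, mips in zip(task_lengths, vm_speeds)) / n
```

For instance, two tasks of 1000 and 2000 million instructions, each on a 500-MIPS VM, take 2 s and 4 s respectively, giving an AET of 3 s.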
Throughput is one of the best-known metrics for assessing the performance of a computing system. It is defined as the number of requests a data center can serve in a certain duration of time [39]:

Throughput = Q_t^d / t

where Q_t^d is the number of requests served successfully by a data center d over a period of time t. Figure 5(a-d) displays the throughput of the proposed EEH framework, PRS, ECTC, MaxUtil and ETMCTSA. In this figure, the number of customers' requests is represented on the x-axis and the value of throughput is represented on the y-axis. The figure clearly shows that the proposed EEH framework achieves higher throughput than the other techniques, because it sorts the incoming requests into groups according to the required service time, and requests within each group are then scheduled to the suitable VM group. This reduces the scheduling time and hence the turn-around time of each request. Consequently, throughput increases.
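The throughput definition above can be sketched directly; the function name is illustrative:

```python
def throughput(requests_served: int, period_seconds: float) -> float:
    """Throughput = Q_t^d / t: the number of requests served
    successfully by the data center divided by the length of the
    observation period, in requests per second here."""
    if period_seconds <= 0:
        raise ValueError("observation period must be positive")
    return requests_served / period_seconds
```

For example, a data center that successfully serves 120 requests in a 60-second window has a throughput of 2 requests per second.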
Besides the response time, the money paid by customers for cloud services is an essential issue for those customers and one of the important competition parameters between cloud vendors. Customers always look for a cloud that can serve their requests with the required quality at the lowest price. Figure 6(a-d) shows the percentage of cost saved by customers when applying the proposed EEH framework, PRS, ECTC, MaxUtil and ETMCTSA. In this figure, the number of customers' requests is represented on the x-axis and the percentage of saved cost is represented on the y-axis. The figure shows that the proposed EEH framework achieves the highest percentage of cost saving among the compared techniques, because it relies on two techniques for saving electrical power instead of only one. As a result, the EEH framework utilizes fewer resources than the other techniques, leaving some resources available to serve other requests. Thus, the EEH framework saves more cost than the other techniques.
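The paper does not give an explicit formula for the cost-saving percentage, but a natural reading of the metric is the relative reduction against a baseline cost; the sketch below rests on that assumption, and the function name is hypothetical:

```python
def cost_saving_percent(baseline_cost: float, achieved_cost: float) -> float:
    """Assumed form of the cost-saving metric: the percentage by which
    the achieved cost falls below a baseline (e.g. non-optimized) cost."""
    if baseline_cost <= 0:
        raise ValueError("baseline cost must be positive")
    return (baseline_cost - achieved_cost) / baseline_cost * 100.0
```

Under this reading, reducing a baseline cost of 100 units to 80 units yields a 20% cost saving.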

VII. CONCLUSION
A hybrid framework for green cloud computing, which considers a time-based power consumption model, was proposed and evaluated in this research work. In contrast to the other techniques proposed in the literature, the proposed framework applies both the scheduling and consolidation approaches. Firstly, customers' requests are sorted according to their power consumption and service time requirements. Then, a proposed scheduling algorithm assigns each request to the most appropriate VM that can serve it. Thereafter, a proposed consolidation algorithm determines both the servers to be consolidated and the servers that receive the VMs of the consolidated servers. Finally, a migration algorithm performs the migration of VMs from the consolidated servers. Simulation results show that the EEH framework is superior to techniques that depend on only one approach for reducing power consumption, in terms of PUE, DCEP, average execution time, throughput and cost saving.
In our future research, the effects of failures on the amount of consumed power will be studied. Deep learning techniques [40], [41] will be used to accurately predict the utilization of servers and to learn different parameters related to scheduling and consolidation. Additionally, we plan to enhance the scheduling algorithm with a load balancing technique.
ABDULAZIZ ALARIFI received the Ph.D. degree in information security from the University of Wollongong, Australia. He is currently an Assistant Professor with the Department of Computer Science, Community College, King Saud University (KSU), Saudi Arabia. He is also the Head of the Research Unit, Community College, KSU. His main research interests include information security, information technology management, cloud computing, big data processing, information privacy, risk assessment and management, e-governance, and mobile applications.
KALKA DUBEY received the B.E. degree in computer science and engineering from the MITS Gwalior Autonomous Institute, in 2010, and the M.Tech. degree in computer science and engineering from the ABV-IIITM Gwalior Institute, in 2013. He is currently pursuing the Ph.D. degree with IIT Roorkee, India. His research interests are focused on task scheduling and VM placement and allocation in cloud-based system, quantification and monitoring of security metrics, and enforcing security in cloud environments.
MOHAMMED AMOON received the B.Sc. degree in electronic engineering and the M.Sc. and Ph.D. degrees in computer science and engineering from Menoufia University, in 1996, 2001, and 2006, respectively. He is currently a Professor of computer science and engineering with the Department of Computer Science and Engineering, Menoufia University. He is also a Professor of computer science with the Department of Computer Science, King Saud University. His research interests include agent-based systems, fault tolerance, green computing, distributed computing, grid computing, cloud computing, scheduling, and fog computing.