Evaluating and Ranking Cloud IaaS, PaaS and SaaS Models based on Functional and Non-Functional Key Performance Indicators

With the recent maturity of Cloud computing technology and the flexibility of Cloud services, the offering of Cloud services has grown rapidly. Different Cloud service providers highlight different features of their services. Because of this diversity of Cloud services and their highlighted features, choosing the most suitable Cloud service is a complex problem for Cloud users. Many Cloud users are unable to identify the Cloud services that best suit their needs and thus choose an unsuitable Cloud service, which results in financial losses as well as time delays. To this end, in this study, we propose an efficient three-layered framework for evaluating and ranking IaaS, PaaS and SaaS Cloud services. We identified the functional and non-functional key performance indicators (KPIs) for Cloud services from 6 KPI classes. We classified these KPIs with respect to their types and criticality so that Cloud users can easily choose according to their needs. The relative importance of the KPIs was determined using the CRITIC method. We combined the KPI values and their relative importance for an overall evaluation and ranking of the Cloud services using the VIKOR method. A case study is also presented for a step-by-step demonstration of the proposed method.


I. INTRODUCTION
Cloud computing and virtualization technologies allow providers to pool their distributed resources and offer them to remote users as a service over the internet. Because of maintenance-free delivery, the pay-per-use cost model, and the easy availability of Cloud services (CSs), the number of Cloud users (CUs) is increasing day by day. This increasing demand for Cloud services is attracting more and more Cloud service providers (CSPs) to the market. However, CSPs differ from each other in various respects: naturally, some CSPs perform better in some respects while others supersede them in other respects. To maximize their benefits at the minimum cost, CUs need to compare the available CSs and select the CS that best matches their needs.
Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) are, besides others, among the major service models in the Cloud market. CSs can be effectively evaluated in terms of their KPIs. However, because of the lack of architectural and performance standardization of Cloud services, different CSPs provide different KPIs to differentiate their services from others, which makes the comparative evaluation of Cloud services a complex task. Common CUs are unaware of the KPIs of CSs and thus cannot comprehensively and effectively compare CSs. In addition, the fuzzy nature of some KPIs makes this comparison even more difficult. Moreover, not all KPIs should be treated equally, because of the different variations in their values and their supportive/conflicting behavior with each other. This behavior of the KPIs makes the comparative evaluation of Cloud services even harder.
To provide a solution, in this study, we propose an efficient three-layered framework for the evaluation and ranking of IaaS, PaaS and SaaS Cloud service models. The three layers address three needs of Cloud service evaluation and comparison. Layer-1 addresses the CUs' first need: the identification of KPIs for the evaluation of CSs. After KPI identification, it is also important to consider the dispersion in the values of the KPIs as well as their supporting/conflicting behavior with each other; layer-2 addresses this need by determining the relative importance of the KPIs as weights. Finally, layer-3 ranks the CSs by combining the KPI values with these weights.

II. FRAMEWORK FOR EVALUATION AND RANKING IAAS, PAAS AND SAAS CLOUD SERVICE MODELS
This section describes our three-layered framework for the evaluation and ranking of IaaS, PaaS and SaaS Cloud service models. Figure 1 shows our framework. In layer-1, we identify KPIs for the evaluation of the IaaS, PaaS and SaaS service models. To make the selection of KPIs easy for common CUs, we classify the KPIs as functional or non-functional. To further help CUs select only some of the KPIs of their choice (for a simple and more effective comparison of CSs), we define the criticality of the KPIs as high, medium or low. The criticality of a KPI is based on the functionality it addresses: the KPIs targeting core hardware/software resources and basic required features are rated as the most critical, and the KPIs for additional features are rated as medium or low. A CU may choose only the highly critical KPIs, or a combined set of KPIs of high, medium and low criticality. It is noteworthy that, to allow a simple yet effective evaluation of CSs, we do not target all performance indicators in this paper; we focus on KPIs only. In addition, we classify the KPIs as static or dynamic on the basis of whether they address static or dynamic aspects of CSs.
Layer-2 determines the relative importance of the KPIs in terms of KPI weights. Layer-3 ranks the CSs considering the KPIs' values and their weights. The details of these layers are described in the following subsections II-A, II-B and II-C, respectively.

A. KPIS FOR IAAS, PAAS AND SAAS CLOUD SERVICE MODELS
This section provides a list of KPIs for the evaluation of the IaaS, PaaS and SaaS CS models. First, we present the KPIs common to these three models (referred to as general KPIs), and next, the KPIs specific to each of these models.

1) General KPIs
The following KPIs apply to all three service models [6], [14], [15], [4].
• Network optimization: The CU needs to connect to the network in order to access CSs. For many CUs, network performance will be a key factor in the overall performance of the CS. CSPs should provide an optimized network to ensure the efficiency of their services. Connection latency, bandwidth, parallel transfers, etc. are important features for evaluating a Cloud network. Some CSPs may provide specially optimized links to their data centers for an extra price. Some CSPs also provide network performance monitoring and optimization tools [8], [38], [19].
• Security: Security is a major issue for both CUs and CSPs. CSPs must employ the necessary procedures to secure CUs' information and data during and after service use. Besides the security features provided by the CSPs, CUs may still want to secure their data from external attackers and internal snoopers. This problem can be mitigated by encrypting the CUs' data. However, there will still be threats of data deletion and corruption by a third party. CUs can use different data authentication technologies to overcome this issue [6], [11], [14], [15], [38], [9], [4], [26].
• Throughput: The throughput of a service tells us how efficiently the service is working. The rate of data storage on disk and the network data transfer rate are two commonly used measures of throughput. A higher throughput will increase the overall performance of a CS. While selecting a CS, CUs should carefully consider its throughput and mention any specific throughput requirements in the SLA [19].
• Elastic Scaling: CSs are provided on-demand and can be scaled up and down as per the CUs' requirements [32]. Some CSPs provide scaling on demand while others provide it automatically. On one hand, scaling down allows CUs to save costs; on the other hand, small businesses can scale up to massive ones. The CU should select a CS that can scale up easily and quickly. The CU can also request the CSP to design personalized scaling strategies, and should ensure the provision of elastic scaling when confirming the SLA [6], [37], [11], [14], [43], [4], [27].
• Availability / Reliability: CUs may access CSs from different parts of the world, which fall in different time zones. Therefore, CSPs must keep their services available for both domestic and international CUs. The storage systems of CSs are usually backed by several data servers, which require maintenance every now and then. It is important that the CU has access to their data during maintenance, which is usually achieved by keeping a backup of the CU's data in different data centers. To ensure high reliability, CSPs should have alternative arrangements in place, for example, for internet connection, power backup, storage, and computing servers [6], [11], [14], [44], [5], [27], [26], [32].
• Load balancing: It is important for the CU to consider whether automatic load balancing is provided by the CSP and, if so, whether it is free. The authors in [32] discuss load balancing as an architectural requirement for CSs.
• Upgrades: To offer the latest features to their clients, CSPs should regularly upgrade their services. The availability of the CS and of the CUs' data may be affected during upgrades. While upgrading their services, CSPs should back up CUs' data and provide service from alternative sources. Any unavailability of the CUs' data or service will affect reliability and can also harm sensitive domains like business, academia, etc. Therefore, CUs should understand these issues when agreeing on the terms and conditions of the CSP [16].
• Fault tolerance / Disaster recovery: CSPs should have well-defined mechanisms for fault tolerance and be well prepared for disaster recovery. There have been several cases of failure of CSs where CUs lost their sensitive data, e.g., the failure of "Linkup" [14]. Therefore, CUs' data must be redundantly stored at different data centers to recover from disasters or faults. A CSP's capability for fault tolerance adds to the reliability of and trust in its services. CUs with sensitive applications and data must consider this factor critically and make it a part of their SLAs [31], [26].
• Response time: This refers to the time between the service request and the provision of the service to the CU. It is one of the critical KPIs of CSs, especially in the case of frequently used business applications. For critical applications hosted by CSPs, the response time requirements should be specified in the SLA [19].
• Runtime Performance Monitoring: CSPs pool their resources to serve multiple CUs with scalable services. This creates a sense of infinite, immediately available resources for CUs. The resource pooling strategy includes data storage, processing and data transfer services. CSPs may allocate resources from the same pool to multiple CUs, which can cause performance problems because of resource sharing. CSPs should monitor runtime service performance to ensure that CUs do not experience degraded performance when more CUs are assigned to a resource pool [13], [16], [37], [14], [8], [43]. Many CUs want to monitor the runtime performance of their CSs. Some CSPs provide no performance monitoring, while others provide it to different extents. For example, some CSPs offer WebSmart monitoring [32].
• Trust: Trust is among the major features in public CSs. Large businesses take this parameter very seriously and always choose the most trustworthy CSPs. CUs will trust the CSPs that provide efficient, reliable and fault-tolerant services. Trust also matters when CUs and CSPs make SLAs.
• Customer support: Because of the variety of available CSPs and new Cloud technologies, customer support is a major need of CUs, especially new ones. Customer support may be provided through telephone, email, live chat, and formal documentation for using the CS. CSPs should also provide a knowledge base for the CS and CU forums as resources [32].
• Customized quality of service: Some CSPs offer their services with a predefined quality of service (QoS), while others offer customized QoS according to the requirements of the CU. A CU with specific QoS requirements must therefore check the QoS offered by the CSP carefully. QoS is ensured with the help of flexible SLAs between CUs and CSPs [32].
• Time to Consistency: Time to consistency is the time between data being stored in the Cloud and it becoming available for reading from the storage services. This is really important for CUs who require their data to be available as soon as possible after storing it. The CU should make sure that consistency holds when data is read and written within the same data center [19].
• Information about CSs: The information available to a CU about a CS can fall into four categories. In the unknown category, the CU has only a vague idea of the necessary information about the CS, which can become a concern for both the CU and the CSP. In the second (basic) category, the CU knows about only a few features of the CS, and in the moderate category, the CU knows about the basic and some additional features of the CS. In the complete category, the CU has full information about the CS [15].
• Value added services: Some CSPs also offer value-added services besides the basic service, e.g., end-to-end data encryption [32].
• Geo Location of Data Center: While using CSs, CUs' data is stored and managed in different data centers, which may be located in various geo locations. Based on the geo location of the data center, the CUs' data may be subject to the different regulatory laws of the country/state in which the data center is located. It is necessary for both CUs and CSPs to mention the geo location of the data centers in the SLAs [16], [14], [32].
A summary of these KPIs and their classifications is given in Table 1. The distribution of KPIs with respect to criticality is shown in Figure 2. We observed that 61% of the KPIs are highly critical, and 22% and 17% of the KPIs have medium and low criticality, respectively. We also observed that the number of functional KPIs is 33% more than that of non-functional KPIs, while the number of dynamic KPIs is 62.5% more than that of static KPIs. This distribution is shown in Figure 3. The distribution of KPIs over the KPI classes is shown in Figure 4. We observed that the highest number of KPIs (30%) fall in the performance class, whereas the lowest number of KPIs (3%) fall in the agility class.

2) KPIs for Evaluating Infrastructure as a Service (IaaS)
The KPIs for evaluating IaaS are already listed under general KPIs in Section II-A1. We do not list any KPIs specific to IaaS.

3) KPIs for Evaluating Platform as a Service (PaaS)
In addition to the general KPIs described in Section II-A1, the following KPIs are specific to PaaS.
• Programming frameworks: The users of PaaS may require specific programming frameworks to run their applications, for example, Python, Java or .NET. The CUs must carefully identify whether their selected CSs provide the required programming frameworks or not.
• Root access: A CU may require root access (administrative rights) for his applications while using PaaS. Therefore, the CU should carefully identify whether administrative rights are provided by the CSP or not.
• Support for Integration of legacy applications: Many businesses are still using legacy applications, which are working fine for them. Due to expansion and other reasons, they want to run these applications in the Cloud. However, this requires customized support from CSPs. This can be a decisive factor in the selection of a CS, as the unavailability of such support from a CSP will divert the CU to other CSPs.
• Provider Licenses: The software licensing system restricts the usage of software on different computers. Many CSPs rely on both open-source and licensed software. Amazon's and Google's PaaS offerings were both built using open-source software, e.g., the Xen hypervisor for Amazon and Python for Google's PaaS. However, this is not the case for all. The CUs should take care of the license type of the software when using PaaS, because they may have to pay extra charges for the use of licensed software [14], [15], [26], [32].
• Interoperability: A CS is interoperable with other services if it can easily be made a part of a bigger system. For the CUs who use CSs as a part of bigger systems, interoperability is very important [14], [27].
• Lock-in for Data Migration: Under lock-in, a CU is restricted from moving his data from one CSP to other CSPs. Every CSP uses its own format for storing data; hence, it is difficult to move data to other CSPs. Lock-in is one of the major restrictions on the wide adoption of CSs. Many businesses are not putting their data on Clouds because of lock-in. A CU should carefully consider this factor before choosing a CSP.
A summary of these KPIs and their classifications is given in Table 2.

4) KPIs for Evaluating Software as a Service (SaaS)
In addition to the general KPIs described in Section II-A1, the following KPIs are specific to SaaS.
• User Interface: A wide range of users use CSs, including technical experts, business professionals, laymen and even disabled users. While designing their services, CSPs should consider the accessibility needs of users of various types. The CSP should design easy-to-access interfaces for their services so that various types of CUs can understand and use them [6]. In addition, mobile interfaces should be made available by CSPs, as a large number of CUs have shifted to handhelds and mobiles [37], [9], [27]. The CUs should also compare CSs on the method or tool used to access them, for example, web browsers, command line tools or Application Programming Interfaces (APIs) [32].
• Ownership: When the data of a CU is on the CSP's infrastructure, the ownership rights of the CU may be compromised: the CSP can access the data and use it for its own purposes. So, it is the responsibility of the CSP to identify the rights of CUs over software, properties, and data. When making legal agreements, the CU and CSP should clearly define these ownership rights.
• Transparency: Transparency about CSs, especially about security aspects, is an important demand from the CUs to CSPs. Higher transparency about CSs will help CUs to choose better services and will lead to more satisfied clients. On the other hand, less transparent CSs may lead to unbearable consequences [6], [27].
A summary of these KPIs and their classifications is given in Table 3.

B. DETERMINE THE RELATIVE IMPORTANCE OF KPIS USING CRITIC METHOD
Naturally, the KPIs will have different dispersions in their values. Moreover, some KPIs will support/conflict with other KPIs. Therefore, it is of critical importance to consider these behaviors of the KPIs in the evaluation framework. We model these behaviors in terms of KPI weights, which are determined using the CRITIC (Criteria Importance Through Inter-criteria Correlation) method [7]. The CRITIC method determines the KPIs' weights considering their conflicting behavior and degree of contrast. We applied the CRITIC method for determining the KPI weights through the following seven steps. In step-1, we determine the decision matrix of the values of the n KPIs {K_1, K_2, K_3, ..., K_n} for the set of m CSs {CS_1, CS_2, CS_3, ..., CS_m} that the CU wants to evaluate and rank, as shown below.

        K_1    K_2    ...  K_n
CS_1    ν_11   ν_12   ...  ν_1n
CS_2    ν_21   ν_22   ...  ν_2n
...     ...    ...    ...  ...
CS_m    ν_m1   ν_m2   ...  ν_mn     (1)

Here, ν_ij represents the value of KPI K_j for CS_i.
The scales of the KPIs vary. To bring them to a uniform scale, in step-2, we normalize the above decision matrix using equation 2 below.

ν̂_ij = (ν_ij − ν_j^worst) / (ν_j^best − ν_j^worst)    (2)

Here, ν̂_ij, ν_j^best, and ν_j^worst represent the normalized value of ν_ij, the best value of K_j and the worst value of K_j, respectively. For the KPIs whose higher values are desired by a CU (e.g., throughput), the best and worst values of K_j correspond to the maximum and the minimum values of K_j, respectively. In contrast, for the KPIs whose lower values are desired by a CU (e.g., cost), the best and worst values of K_j correspond to the minimum and the maximum values of K_j, respectively.
In step-3, we calculate the standard deviation σ_j of the normalized values of each KPI, which captures the contrast (dispersion) in its values.
This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and content may change prior to final publication.
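The step-2 normalization can be sketched in a few lines of Python (a minimal NumPy illustration, not the authors' implementation; the function name and the `benefit` flag are our own):

```python
import numpy as np

def critic_normalize(V, benefit):
    """Normalize a decision matrix per CRITIC step-2.

    V       : (m, n) array, V[i, j] = value of KPI j for Cloud service i.
    benefit : length-n booleans; True where higher KPI values are better
              (e.g. throughput), False where lower is better (e.g. cost).
    """
    V = np.asarray(V, dtype=float)
    # Best/worst per column depend on whether the KPI is benefit- or cost-type.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 0 at the worst value, 1 at the best value of each KPI.
    return (V - worst) / (best - worst)
```

Each column of the result lies in [0, 1], so KPIs measured in dollars, seconds or rating points become directly comparable.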
In step-4, we calculate an n × n symmetric matrix of the linear correlation coefficients γ_ij among the KPIs. Here, γ_ij represents the linear correlation coefficient between the normalized values of K_i and K_j.
In step-5, we calculate the total conflict of each KPI with the rest of the KPIs. For K_j, the total conflict ζ_j is calculated as

ζ_j = Σ_{i=1}^{n} (1 − γ_ij)    (3)

After step-5, we have the standard deviation and the total conflict of all KPIs.
In step-6, the quantity of information φ_j related to each KPI is determined as

φ_j = σ_j · ζ_j    (4)

Here, φ_j represents the quantity of information related to K_j. After step-6, we have the standard deviation, total conflict, and quantity of information corresponding to all KPIs.
Finally, in step-7, the objective weights w_j of the KPIs are determined as

w_j = φ_j / Σ_{k=1}^{n} φ_k    (5)
These weights will be used in the overall evaluation and ranking of the CSs using the VIKOR method, as described in the following subsection II-C. If a CU has preferences about any KPIs, he should multiply the weights of his preferences with the weights determined above. For example, the CU may want to give more preference to KPIs with high criticality.
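Steps 1 through 7 above can be sketched end to end as follows (an illustrative NumPy implementation under our own naming; details such as using the population standard deviation are our assumptions, and the sketch assumes no KPI column is constant across the CSs, so the correlation matrix is well defined):

```python
import numpy as np

def critic_weights(V, benefit):
    """Objective KPI weights via the seven CRITIC steps.

    V       : (m, n) decision matrix of m Cloud services by n KPIs.
    benefit : length-n booleans, True per column where larger is better.
    """
    V = np.asarray(V, dtype=float)                 # step-1: decision matrix
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    N = (V - worst) / (best - worst)               # step-2: normalization
    sigma = N.std(axis=0)                          # step-3: contrast (std dev)
    gamma = np.corrcoef(N, rowvar=False)           # step-4: correlation matrix
    zeta = (1.0 - gamma).sum(axis=0)               # step-5: total conflict
    phi = sigma * zeta                             # step-6: information content
    return phi / phi.sum()                         # step-7: normalized weights
```

A KPI gets a larger weight when its values vary strongly across CSs and when it disagrees with the other KPIs, exactly as described above.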

C. RANK CLOUD SERVICES USING VIKOR METHOD
For the overall evaluation and ranking of CSs, we use the VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje) method [25]. The VIKOR method is suitable for complex multicriteria decision making problems with a large number of criteria. The detailed steps of the VIKOR method are described as follows.
In the first step of the VIKOR method, we begin with the decision matrix shown in matrix 1. In step-2 of VIKOR, the normalized values of the KPIs are calculated as

ν̄_ij = (ν_j^best − ν_ij) / (ν_j^best − ν_j^worst)    (6)

Here, ν̄_ij represents the normalized value of ν_ij, i.e., the distance of ν_ij from the best value of K_j. The normalized decision matrix is shown below.
In step-3, the weighted normalized values of the KPIs are calculated as w_j · ν̄_ij. The weighted normalized values of the KPIs are shown below.
In step-4, the utility value U_i for each CS is obtained as the sum of the weighted normalized values of its KPIs obtained in step-3:

U_i = Σ_{j=1}^{n} w_j ν̄_ij    (7)
After step-4, we have the utility values U_1, U_2, ..., U_m of all CSs.
In step-5, the regret value G_i of each CS is determined as the maximum of its weighted normalized KPI values:

G_i = max_j (w_j ν̄_ij)    (8)
After step-5, we have the regret values of all CSs. In step-6, the values of the minimum utility (U^-), the maximum utility (U^+), the minimum regret (G^-) and the maximum regret (G^+) are found as below.

U^- = min_i U_i    (9)
U^+ = max_i U_i    (10)
G^- = min_i G_i    (11)
G^+ = max_i G_i    (12)
Finally, in step-7, the VIKOR rank Ω_i for each CS is calculated as

Ω_i = δ (U_i − U^-)/(U^+ − U^-) + (1 − δ) (G_i − G^-)/(G^+ − G^-)    (13)

Here, δ represents the weight for the strategy of maximum group utility. The lower the value of Ω_i, the higher the overall rank of the CS, so the CU should select the highest overall-ranked CS. The ultimate objective of this study is to rank CSs based on the KPIs; therefore, we limit our analysis to this step of VIKOR. However, if the CU is interested in a compromise solution, he may further extend this analysis to find one.
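The seven VIKOR steps above can likewise be sketched as follows (an illustrative NumPy version with our own function and variable names; it assumes the CSs do not all tie in utility or regret, so the min/max denominators are nonzero):

```python
import numpy as np

def vikor_rank(V, weights, benefit, delta=0.5):
    """Rank Cloud services with the VIKOR steps described above.

    Returns (omega, order): omega[i] is the VIKOR index of CS i (lower is
    better); order lists CS indices from best to worst.
    """
    V = np.asarray(V, dtype=float)                 # step-1: decision matrix
    w = np.asarray(weights, dtype=float)
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    N = (best - V) / (best - worst)                # step-2: distance from best
    W = w * N                                      # step-3: weighted values
    U = W.sum(axis=1)                              # step-4: utility per CS
    G = W.max(axis=1)                              # step-5: regret per CS
    # steps-6/7: combine relative utility and relative regret.
    omega = (delta * (U - U.min()) / (U.max() - U.min())
             + (1 - delta) * (G - G.min()) / (G.max() - G.min()))
    return omega, np.argsort(omega)
```

With `delta = 0.5`, group utility and individual regret contribute equally to the final index.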

III. CASE STUDY
To demonstrate the application of the proposed framework, we present here a case study of ranking of five Cloud services: CS 1 , CS 2 , CS 3 , CS 4 and CS 5 . For the sake of simplicity of the case, we will evaluate these CSs for five KPIs: K 1 , K 2 , K 3 , K 4 and K 5 , where these KPIs represent price, response time, performance monitoring, security and customer support, respectively. The values of these KPIs for the selected CSs are shown below.
Here, K_1 is measured in dollars, K_2 in seconds, and K_3, K_4, and K_5 are measured on a five-point scale, where 1, 2, 3, 4 and 5 represent low, below average, average, good and excellent, respectively. It is noteworthy that the measurement of the KPIs and the suitability of their measurement scales are out of the scope of this paper. Several methods for the measurement of KPIs and their scales have been proposed in the literature; the user may choose any suitable method. Here, we demonstrate a step-by-step application of layer-2 and layer-3 of our proposed framework in Section III-A and Section III-B, respectively.
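Since matrix 14 is not reproduced in this text, the following block encodes a purely hypothetical decision matrix whose entries merely respect the best and worst values reported later in this case study; the individual values are illustrative, not the paper's data:

```python
import numpy as np

# Hypothetical decision matrix (illustrative only): entries chosen so that
# K1 spans [217, 251] dollars, K2 spans [2.5, 5] seconds, and K3-K5 each
# span the full 1-5 rating scale, matching the reported best/worst values.
# Columns: K1 price, K2 response time, K3 monitoring, K4 security, K5 support.
V = np.array([
    [251, 3.0, 3, 4, 1],   # CS1
    [217, 5.0, 5, 1, 3],   # CS2
    [240, 2.5, 1, 5, 4],   # CS3
    [225, 4.0, 4, 2, 5],   # CS4
    [233, 3.5, 2, 3, 2],   # CS5
])
# Lower is better for K1 (price) and K2 (response time); higher for K3-K5.
benefit = [False, False, True, True, True]
```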

A. DETERMINE THE RELATIVE IMPORTANCE OF KPIS USING CRITIC METHOD
In this section, we demonstrate the application of the CRITIC method of layer-2 (see Section II-B).
Step-1 of the CRITIC method is determining the decision matrix of five available CSs for selected KPIs, which is given in matrix 14.
In step-2, the KPIs are normalized to adjust for the difference in their scales using equation 2. For normalization, we need to find ν^best and ν^worst for each KPI. These are shown below.

          K_1    K_2   K_3  K_4  K_5
ν^best    $217   2.5s  5    5    5
ν^worst   $251   5s    1    1    1     (15)

The normalized value of K_1 for CS_1 is calculated using equation 2 as shown below. In step-3, the standard deviation of the normalized values of each KPI is calculated, which is shown below.
In step-4, an n × n symmetric matrix of the linear correlation coefficients among the KPIs is calculated. This is shown below.
In step-5 of the CRITIC method, the conflict of each KPI with the rest of the KPIs is calculated as 1 − γ. These conflict values are shown in the matrix below. The total conflict of each KPI with the rest of the KPIs is then calculated using equation 3; the calculation of the total conflict ζ_1 for KPI K_1 is shown below, and the other values of ζ_j are calculated similarly. The final values of ζ_j are shown below. In step-6, the quantity of information φ_j related to each KPI is determined using equation 4; the calculation for φ_1 is shown below, and the other values of φ_j are calculated similarly. Finally, in step-7, the weights w_j are determined from the φ_j values; the calculation for w_1 is shown below, and the other values of w_j are calculated similarly.
The CU may include his preferences about KPIs at this step. To do so, the CU will multiply the weights of his preferences with w i . For the sake of simplicity, in this case study, we assume no preferences (or equal preferences) for KPIs and thus the above weights remain the same.
After determining the KPI weights, our method proceeds to layer-3, where we evaluate and rank the selected Cloud services using the VIKOR method, as described in Section III-B below.

B. RANK CLOUD SERVICES USING VIKOR METHOD
In step-1 of the VIKOR method, we begin with the decision matrix shown in matrix 14. In step-2, this decision matrix is normalized using equation 6; the ν^best and ν^worst values of the KPIs are shown in matrix 15. The normalized value of CS_1 for K_1, i.e., ν̄_11, is calculated using equation 6 and is shown below. In step-3, the weighted normalized values of the remaining KPIs and the other CSs are calculated similarly; the weighted normalized matrix is shown below. In step-4 and step-5, the utility value U_i and the regret value G_i of each CS are calculated from the weighted normalized matrix. In step-6, the minimum utility (U^-), the maximum utility (U^+), the minimum regret (G^-) and the maximum regret (G^+) are calculated using equations 9, 10, 11 and 12, respectively; their values are shown below. Finally, in step-7, the VIKOR ranks Ω_i are calculated for all CSs using equation 13. For our case study, we set δ = 0.5. The calculations for Ω_1 are shown below. The overall ranks are determined from the values of Ω_i: the lower the value of Ω_i, the higher the overall rank. The overall ranks of the CSs are shown below.
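Putting layer-2 and layer-3 together, the whole case-study pipeline can be sketched as follows, again on hypothetical data that is consistent only with the reported best/worst values (the entries and the resulting ranking are illustrative, not the paper's results), with δ = 0.5 as above:

```python
import numpy as np

# Hypothetical 5 CSs x 5 KPIs decision matrix (illustrative values only).
V = np.array([[251, 3.0, 3, 4, 1],
              [217, 5.0, 5, 1, 3],
              [240, 2.5, 1, 5, 4],
              [225, 4.0, 4, 2, 5],
              [233, 3.5, 2, 3, 2]], dtype=float)
benefit = np.array([False, False, True, True, True])  # K1, K2: lower is better
delta = 0.5

best = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

# Layer-2: CRITIC weights from contrast (std dev) and inter-KPI conflict.
N = (V - worst) / (best - worst)
phi = N.std(axis=0) * (1 - np.corrcoef(N, rowvar=False)).sum(axis=0)
w = phi / phi.sum()

# Layer-3: VIKOR index (lower omega means a better overall rank).
M = w * (best - V) / (best - worst)   # weighted distance from the best values
U, G = M.sum(axis=1), M.max(axis=1)   # utility and regret per CS
omega = (delta * (U - U.min()) / (U.max() - U.min())
         + (1 - delta) * (G - G.min()) / (G.max() - G.min()))
ranking = np.argsort(omega)           # CS indices from best to worst
```

Swapping in the real matrix 14 for `V` reproduces the case-study calculations of Sections III-A and III-B.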

IV. RELATED WORK
Because of the multiplicity of available CSs, their many performance indicators and the varying importance of these indicators for different CUs, the ranking and selection of CSs has become a hard problem [30], [10]. Several multicriteria decision making techniques and meta-heuristic methods have been applied to evaluate and rank CSs, many of them hybrid methods. The authors in [24] proposed a hybrid broker for ranking CSs: first, they used a Markov chain to model varying user preferences, and then employed the best-worst method for ranking CSs. Kumar et al. [18] also used a hybrid method employing AHP ("analytical hierarchical process") and TOPSIS ("technique for order preference by similarity to ideal solution") to find the best CS. The AHP was employed to find the weights of CS performance indicators and TOPSIS was used for ranking the CSs. The authors in [17] developed a hybrid framework to rank the CSs based on their quality of service (QoS). They also employed the best-worst method to determine the user priorities for CS performance indicators and then used TOPSIS to finally rank the CSs. Another hybrid method for clustering and ranking CSs was presented in [1] by Al-Faifi et al. The authors used the k-means clustering algorithm to cluster CSs based on their similarities and then used DEMATEL-ANP for ranking the CSP clusters. The study by Youssef [42] proposed a hybrid method based on TOPSIS and the best-worst method for ranking CSs. The study in [36] developed a framework based on TOPSIS and the Gaussian distribution for ranking CSs; the authors also analyzed the sensitivity of their results. The author in [12] employed a hybrid method using an artificial immune network and fuzzy theory for the selection of CSs.
Some authors evaluated the QoS of CSs and used it for CS selection. The author in [23] proposed a comprehensive CS measurement index and a flexible framework for ranking CSs based on user preferences for QoS and usability criteria. The author identified 65 KPIs to evaluate CSs and used the MAGIC method for ranking the CSs. Li et al. [21] also proposed a hybrid CS selection method based on heterogeneous QoS parameters. The heterogeneity and fuzziness of the QoS parameters were addressed through entropy weights, and the GRA-ELECTRE III method was used for the overall evaluation of CSs. The work in [2] employed fuzzy rough sets and weighted Euclidean distance to rank CSs. The work in [29] proposed a method employing matter-element extension for ranking CSs. Sun et al. [35] measured the relationships between CS selection criteria, estimated the criteria interactions and their importance, and then proposed a priority-based method to select CSs. The work in [41] developed a QoS-based method to recommend CSs; the authors predicted QoS from the nearest neighbors using NearestGraph.
Some studies proposed to use the trustworthiness of CSPs to select the most suitable CS, using various methods to model this trustworthiness. Wu et al. [40] evaluated and compared CSs for customized recommendations about their trustworthiness. The authors used fuzzy processing to handle imprecise user preferences and a recurrent neural network to adapt the evaluation to the user preferences. Mujawar et al. [22] modeled the trustworthiness of a CSP based on the CSP's behavior and the CUs' feedback, considering different QoS and SLA parameters to compute trust. The authors in [39] also evaluated the trust of CSs, modeling different aspects of trust using correlation analysis, rough set theory, and AHP. The work in [34] evaluated the trustworthiness of CSPs based on their compliance values, which were further processed using a variant of TOPSIS. The study in [33] compared three MCDM approaches (PROMETHEE, TOPSIS, and AHP) for determining the multi-perspective trustworthiness of CSPs.
In some other efforts, the authors modeled the reliability of Cloud data centers. The authors in [20] evaluated the reliability of Cloud data centers using HCGS Petri nets and Monte Carlo simulation. Their evaluation criteria included the overall performance of the data center, the connectivity of the IT infrastructure and the runtime delivery of the service. Zhou et al. [46] proposed a model for the reliability evaluation of CSs using their subjective as well as objective attributes. Their model is based on hierarchical variable weights and classified statistics. Table 4 summarizes the related works.
Most of the existing methods are either not comprehensive enough to address all KPIs or are too complex for a CU to apply. In contrast, our study first identifies the important functional and non-functional KPIs for evaluating the IaaS, PaaS, and SaaS models. Next, we propose a hybrid decision-making tool that is flexible enough to consider all or some of the functional as well as non-functional KPIs of a CS. For a more meaningful and effective evaluation and comparison of CSs, we calculate the KPIs' weights employing the CRITIC method [7], which considers the KPIs' conflicting behavior as well as the quantity of information in the KPI values. Next, to rank the CSs efficiently and consistently, we use the VIKOR method, which can effectively address the complex ranking of the various available CSs with multiple KPIs.

V. CONCLUSION
With the increasing demand for Cloud services, the number of CSPs is also increasing. Because of the lack of standards for CSs, CSPs highlight different performance indicators of their services, which makes the choice of the best Cloud service difficult for CUs. To address the needs of CUs for an effective comparative evaluation and ranking of CSs, this study presents an efficient three-layered framework. We highlight several KPIs for the comprehensive evaluation of the Cloud IaaS, PaaS and SaaS service models, and classify them as functional or non-functional and static or dynamic, which makes it easy for CUs to choose the KPIs of their choice for their evaluation. The second layer of our framework determines the KPIs' weights based on the variations in the KPI values as well as their conflicting behaviors. The third layer effectively ranks the CSs considering the KPIs' weights. A step-by-step demonstration of the proposed framework is given through a case study. Our framework is easy to understand and lightweight, and can even be implemented using spreadsheets.