A Unified Framework for User-Preferred Multi-Level Ranking of Cloud Computing Services Based on Usability and Quality of Service Evaluation

As networking and Cloud computing technologies have evolved, a wide range of Cloud services has been introduced by different Cloud service providers. Many organizations and individuals use these services as part of their regular work. Thus, the performance of users' systems is significantly dependent upon the performance of the services they employ. Therefore, it becomes crucial for Cloud users to thoroughly evaluate and compare the available Cloud services to select the best of them. However, different Cloud service models, a range of pricing and feature schemes, the different performance attributes used by service providers, the fuzzy nature of some of these attributes, etc. make Cloud service performance analysis and comparison a challenging task. In addition, differing user preferences regarding Cloud service attributes make this analysis even more complex. This situation leads to ambiguity and indecisiveness in selecting a Cloud service that best matches the end-user's needs, and thus leads to degraded performance of users' systems and financial losses. To this end, this article proposes a unified Cloud service measurement index to provide a single comprehensive framework for multi-level evaluation of Cloud services. For a detailed and effective performance evaluation, we identified 8 top-level attributes of Cloud services and 64 detailed key performance indicators to evaluate these attributes. For an analytical ranking of the target Cloud services, we employed "Multi-Attribute Global Inference of Quality", which considers the hierarchical relationship of performance attributes. Our method considers user preferences for Cloud service attributes in terms of attribute weights and is flexible enough to select all or only user-preferred Cloud service attributes. We show the application of the proposed framework and the ranking method using a case study.


I. INTRODUCTION
Cloud computing is continuously evolving as an on-demand computing paradigm to deliver compute resources as services. Cloud computing is characterized by high availability, elastic scalability, and a virtualized distributed environment. The end-user has maintenance-free access to Cloud services and pays per use. Based on the needs of most users, the major models for delivery of Cloud services include platform as a service (PaaS), software as a service (SaaS), and infrastructure as a service (IaaS), among others.
The associate editor coordinating the review of this manuscript and approving it for publication was Muhammad Aleem.

Cloud service providers (CSPs) offer their services with different sets of attributes, each giving a CSP some competitive edge. Some CSPs have strong functional attributes while others have relaxed payment plans. Some CSPs offer more service management functions while others offer high reliability and strong service level agreements (SLAs). In addition, CSPs usually offer the same services with different levels of performance and features, and thus different prices. Thus, collectively, the same kind of Cloud services differ from each other in terms of service attributes.
The performance of end-users' systems is significantly driven by the performance of the Cloud services they use. Therefore, it is crucial for end-users to thoroughly analyze the available Cloud services and carefully select the most suitable one that matches their needs. However, common Cloud users are unaware of the important Cloud service attributes and key performance indicators (KPIs) for comparing Cloud services, which makes it very difficult for them to efficiently compare and rank Cloud services. Moreover, some Cloud attributes are more important to some users, while other users give high priority to other attributes; for example, finances may be more important than efficiency for some users, whereas the opposite may be true for others. These issues make Cloud service evaluation and ranking a challenging problem. In addition, different types of service models, a range of pricing schemes, different sets of service features, and the fuzzy nature of some performance attributes make service performance evaluation even more complex. This situation leads to ambiguity and indecisiveness in selecting the most suitable Cloud service that matches the end-users' needs, and thus leads to degraded performance of their systems and financial losses. To this end, we propose a unified Cloud Service Measurement Index (UCSMI). UCSMI comprehensively covers the quality of service as well as the usability of Cloud services in terms of 8 top-level attributes. To effectively evaluate a Cloud service, UCSMI recommends 64 different KPIs.
To efficiently rank Cloud services defined by UCSMI, we use the Multi-Attribute Global Inference of Quality (MAGIQ) [1] method. MAGIQ supports the hierarchical relationship of service attributes and their KPIs. Users can define their preferences over attributes/KPIs, which are then converted to attribute/KPI weights. The overall rank of a Cloud service is determined in terms of its KPI hierarchical relationship, the KPI measurements, and the attribute/KPI weights. MAGIQ is flexible enough to use all available Cloud service attributes/KPIs or only user-preferred ones. To demonstrate the application of our method, we present a case study at the end of the paper.
The major contributions of this study include the following. First, as far as we know, UCSMI is the most comprehensive index, covering almost all major aspects of Cloud services using 8 top-level Cloud service attributes and 64 detailed KPIs. Second, UCSMI can be easily adapted to various Cloud service models (SaaS, PaaS, IaaS, etc.) and to the available information; the proposed method is flexible enough to be used both by technical Cloud users having all technical information and by common Cloud users having information about only some KPIs/attributes. Third, using the proposed framework, a user can define his priorities for different service attributes/KPIs, so that more weight is assigned to those attributes/KPIs. Last but not least, UCSMI supports the hierarchical relationship of service attributes/KPIs and thus provides an in-depth and more detailed comparison of Cloud services.
The remainder of the paper is structured as follows. Section II formally defines the ranking of Cloud services as a multi-criteria evaluation problem. Section III presents our proposed Unified Cloud Service Measurement Index. Cloud service ranking using MAGIQ is explained in Section IV. A case study demonstrating the application of MAGIQ with UCSMI is described in Section V. A review of related work is given in Section VI. Finally, we conclude this article and describe future work in Section VII.

II. PROBLEM STATEMENT
We define our Cloud service ranking problem as a multi-criteria evaluation problem in which a set of n available Cloud services δ = {s_1, s_2, ..., s_n} has to be evaluated and ranked over a set of m KPIs ζ = {i_1, i_2, ..., i_m}, where s_n and i_m represent the nth Cloud service and the mth Cloud service KPI, respectively. Our objective is to find a function f that takes δ and ζ as input and outputs an ordered set of Cloud services δ_ordered = {s_o1, s_o2, ..., s_on} based on ζ, where s_o1 is the best Cloud service and s_on is the worst, i.e. f(δ, ζ) → δ_ordered.
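As an illustration only, this formulation maps directly to code. The function and names below are assumptions for exposition, and the simple additive aggregate used here is a placeholder for f, not the paper's method (which is MAGIQ, Section IV):

```python
def rank_services(services, kpis, score):
    """Order services best-first by an aggregate score over the KPIs.

    services : the set of available Cloud services (delta).
    kpis     : the set of KPIs (zeta) used for evaluation.
    score    : a function (service, kpi) -> number, assumed already
               normalized so that higher is better.  This additive
               aggregate merely illustrates the shape of f.
    """
    totals = {s: sum(score(s, i) for i in kpis) for s in services}
    return sorted(services, key=lambda s: totals[s], reverse=True)
```

Any concrete f must produce such a best-to-worst ordering; MAGIQ replaces the naive sum with weighted, hierarchy-aware aggregation.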

III. A UNIFIED CLOUD SERVICE MEASUREMENT INDEX
To provide a single comprehensive framework that Cloud users can employ to evaluate available Cloud services, we propose a Unified Cloud Service Measurement Index (UCSMI) that follows, and further extends, SMI [2] (the Service Measurement Index by CSMIC, the Cloud Services Measurement Initiative Consortium) and the Cloud Usability Framework (CUF) by NIST [3]. Both SMI and CUF are based on ISO standards; however, their targets differ: SMI targets quality of service (QoS) evaluation of Cloud services, whereas CUF targets usability evaluation. Space limitations restrict us from describing SMI and CUF here; for details, we refer the reader to [2] and [3], respectively. Neither SMI nor CUF alone is enough to effectively rank Cloud services; they must be combined with additional parameters. We address this need in this article.
KPIs of Cloud services can be broadly divided into functional and non-functional KPIs. Functional KPIs, e.g. cost, efficiency, response time, etc., need to be measured using suitable metrics. Non-functional KPIs, e.g. service level agreements, trust, etc., may not be measurable precisely; instead, other scales, like the Likert scale, the semantic differential scale, etc., may be used. It should be noted that so far there are no agreed-upon standards or metrics for measuring Cloud service attributes; different researchers have proposed different metrics for KPIs. Comparing the different proposed KPI metrics and finding the most suitable of them is out of the scope of this article. We keep our focus on identifying the most suitable KPIs for Cloud service evaluation, from the perspective of Cloud users as well as CSPs, and on using them for ranking Cloud services.
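For instance, a non-functional KPI scored on a 5-point Likert scale can be mapped to a number before it is compared with metric KPIs. The labels and mapping below are illustrative assumptions, not prescribed by SMI or CUF:

```python
# One possible quantification of a 5-point Likert response for a
# non-functional KPI such as "trust"; the labels are illustrative.
LIKERT = {"very poor": 1, "poor": 2, "fair": 3, "good": 4, "excellent": 5}

def likert_to_unit(response):
    """Map a Likert label onto [0, 1] so that ordinal judgments can be
    aggregated alongside normalized functional-KPI measurements."""
    return (LIKERT[response] - 1) / (len(LIKERT) - 1)
```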
Under the proposed UCSMI, the major high-level Cloud service attributes include security and privacy, finance, performance, agility, usability, accountability, assurance and management. The detailed KPIs for their evaluation are described below.
• Security and privacy: Security and privacy is perhaps the most critical and decisive factor in choosing public Clouds, especially for organizations with sensitive data. Almost all users require 100% security and privacy of their data. CSPs must protect their services from attacks by malicious users, software, and services. Security attacks may compromise not only the quality of the Cloud service but also client and data privacy. Clear policies for multi-dimensional security and privacy of client data, its life-cycle management, and client applications should be part of SLAs. The KPIs to measure security and privacy are shown in Figure 1 and are described below.
- Data center and geo-location: While using Cloud services, users' data may transfer over the internet and may be managed by different data centers, physically located at different geo-locations. Because different countries and states have different laws about data security, locating data centers in different countries can raise a variety of legal issues. Hence, Cloud users as well as CSPs must identify the geo-location of the data centers in the SLA [4]-[6].
- Data encryption: Although Cloud users can use client-side encryption to safeguard themselves against many security issues, this is not possible in all cases. Additionally, client-side encryption is a tedious task that burdens the Cloud user, thereby restricting the use of public Clouds. CSP-side encryption is required especially in SaaS (e.g. for data storage, web applications, etc.) and PaaS (e.g. to safeguard against possible network interception between virtual machines).
- Client privacy/confidentiality: Client privacy is a critical requirement of Cloud users. CSPs must prevent leakage of any data that compromises a client's private information, such as financial details, personal information, geo-location, etc. To be safe from privacy attacks, CSPs must deploy the necessary applications, tools, utilities, and patches to ensure clients' privacy. CSPs must also anonymize clients' use of Cloud services, and clients' Cloud usage data must be kept confidential.
- Trust: One of the most challenging issues in the Cloud is trust or credibility management. While choosing a CSP, large businesses take trust very seriously. Users will always trust a CSP that provides a reliable, efficient, and fault-tolerant service, handles clients' data carefully, facilitates clients' tasks, and provides an overall good experience. Trust also matters when the end-user and CSP make SLAs. If clients trust a CSP, the credibility of the CSP is said to be high.
• Finances: Finance or cost refers to the amount spent on using the Cloud service. It is among the first and most critical questions that come to the mind of a Cloud user before moving to the Cloud. Every Cloud user will evaluate whether it would be cost-effective to use public Clouds. Besides the base-plan cost, the cost of data transfer and of additional features (like additional security) must also be taken into account. The KPIs to measure finances are shown in Figure 2 and are described below.
- Base-plan cost: Base-plan cost is the initial cost of using a Cloud service with a basic set of features for a unit of time. Usually, an extra cost is incurred for each additional feature or time unit. The base-plan cost, or simply the cost of using Cloud services, is an important metric for low-budget and long-term users when selecting a CSP from an available pool. For SaaS, the cost comparison may be straightforward in terms of features and per-unit-time cost. However, for IaaS and PaaS, comparing the prices of different CSPs is difficult because they use different dimensions to define their prices [7], for example, size of virtual machines (small, medium, large, etc.), number of CPUs, CPU speed, RAM, storage, etc. For an easy comparison of the cost of different CSPs, the different price dimensions should be unified in a single price metric, e.g. the volume-based price defined in [7].
- Data storage cost: CSPs have different pricing schemes for data storage. A user should carefully identify his data storage needs and select the most suitable scheme accordingly.
- Cost of data transfer: Besides the basic price of a Cloud service, most CSPs also charge for transferring data between servers and clients. This is a critical metric for users running data-intensive applications.
- Cost of security: Some CSPs charge extra money for every security feature they provide, e.g. firewall, password/email security, backups, authorization, intrusion detection, encryption, and persistence and protection of data [6]. Other CSPs charge only for extra security features provided in addition to the basic free-of-charge features. For clients with special security requirements, this KPI is important.
- Free features: In addition to charged items, some CSPs offer free features to attract customers. These free features should also be considered while comparing CSPs.
• Performance: Performance measurement of a Cloud service is a direct measurement of its functions and features. Usually, a variety of Cloud services are available, and a Cloud user must evaluate their performance to find which service meets his needs and expectations. Cloud service performance is measured using the KPIs shown in Figure 3, which are described below.
- Efficiency: Efficiency of a Cloud service is its capability to produce the required outcome effectively with a minimum amount of time, cost, and effort [5], [8]. Efficiency from the user perspective is widely measured as the time taken to complete the user's tasks [9].
- Latency: Latency of a Cloud service refers to the time elapsed between the request and the completion of a user task. Latency is typically important in the PaaS and SaaS models, for example, the service delay of data-processing and database-oriented applications [10]. Different types of latency include communication latency (between Cloud computational servers and data centers), scaling latency, database processing latency, etc.
- Load balancing: This KPI specifies whether automatic load balancing (referred to as ''elastic load balancing'' by Amazon EC2 [11]) is provided by the CSP. The authors in [6] discussed this KPI as an architectural requirement for Clouds. Client-managed load balancing (also referred to as customizable load balancing) may be provided by some CSPs as an additional service.
- Network quality: Data-oriented applications and SaaS Cloud models depend heavily on network quality, which may be observed in terms of network latency, available bandwidth, round-trip time, etc. [12].
- Response time: Response time is typically important for data-intensive applications. For time-sensitive applications hosted by Clouds, special requirements for response time should be specified in the SLA [13].
- Scaling latency: Cloud computing has the very distinct feature of providing on-demand services (referred to as scaling), i.e. a user can acquire services when required. Similarly, the user may release some services when they are no longer needed and thus save cost [6]. Some service providers offer automatic scaling, while others provide it on demand. Scaling latency is the time taken by the CSP to assign a new instance/service to the Cloud user when requested. It can significantly affect a Cloud service's performance; a Cloud user wants a service that can scale up and down easily and quickly. Some CSPs may accept custom scaling requests to match clients' business needs [13].
- Throughput: In general computing, throughput is the amount of work done by a computer in unit time. In Cloud computing, throughput describes the overall performance of Cloud services. Largely, job completion time and the data transfer rates of disk drives and the network are measured as throughput. While selecting CSPs, Cloud users should specify throughput requirements in the SLA [13].
• Agility: Agility refers to how quickly a user can adapt to using the Cloud service. This is of core importance, as a major advantage of Cloud computing is that it is quickly available when needed and easily adaptable to the user's needs. The KPIs to measure agility are shown in Figure 4 and are described below.
- Adaptability: Cloud services should accommodate requests for new services or changes requested by clients. CSPs usually create a general-purpose pool of resources that can serve different types of requests [8]. However, some clients may have special business needs; the CSPs that also serve custom resource requests from clients will be preferred over others.
- Elastic scale: This is the ability of a service to continue operating reliably when its size is changed. In the context of Cloud computing, scalability means that the CSP can accommodate a large number of Cloud users simultaneously while reliably maintaining its services. It is easier to scale downwards than upwards; for example, for data storage, if a service is scaled upwards and used at its full capacity, it may not remain reliable [5], [8], [14]-[18]. It is important to evaluate how efficiently a CSP can scale its services up/down for a user with elastic needs. While confirming the SLA, the Cloud user should make sure that there is a proper legal clause covering on-demand services [4], [19].
- Flexibility: CSPs should be able to add or remove features from a service on demand. Besides, CSPs should allow users the flexibility to use Cloud services from different devices, such as tablets and mobile phones.
- Portability: Portability refers to how easily a client can move his applications and data from one CSP to another. It is a very important KPI for clients using hybrid Clouds. Cloud services should allow users to switch between services rapidly and easily; the portability of a CSP is high if it provides an easy-to-use API to migrate user data between services. Cloud-based applications, for example Cloud-hosted games, require high portability. Poor portability is one of the top factors that restrict the use of public Clouds.
• Usability: Usability of a Cloud service refers to how easily a user can interact with it. Cloud users will be more attracted to services that are easier to use. Usability of a Cloud service is usually measured in terms of the KPIs shown in Figure 5, which are described below.
- Accessibility: The use of Cloud services is increasing day by day. A huge number of people currently use or plan to use Cloud services, including people with disabilities and the elderly. While developing Cloud services, the Cloud provider should treat accessibility as a business need. Cloud services should be easy to use for a wide range of users, including people with disabilities, who should be able to easily access and understand the service interfaces [4], [5], [8], [14], [16], [19], [20].
- Suitability: Cloud providers should make sure that they provide services according to the demands of Cloud users; the degree to which these requirements are met is referred to as suitability. While choosing Cloud services, the user should identify which services are most suitable for his needs. This question can be answered by a mechanism that filters all the CSPs according to the user's requirements; the most suitable CSP will offer most of the functional as well as non-functional features demanded by the user [9].
- Transparency: It is the responsibility of Cloud users to make sure that the CSP provides complete transparency regarding its security and compliance measures. All measures should be in place to protect users' sensitive information and intellectual property. Users with critical and sensitive data must thoroughly evaluate all transparency issues. For example, if an organization is moving data from distributed corporate data centers to a smaller Cloud service, it should make sure that there are no potential disasters resulting in loss or theft of data [8], [18].
- Upgrade domain: CSPs need to upgrade their resources (software/hardware) regularly, especially those for which updates/patches are available. For users with critical needs, it should be mentioned in the SLA if the service will be unavailable during the update process [4].
- CI/CD and DevOps support: Software development teams using agile software development methods require supporting tools for CI/CD (continuous integration / continuous delivery) and DevOps. CSPs that do not provide the required support will be excluded from the choice list.
• Accountability: Accountability targets the evaluation of user-related KPIs of CSP performance, e.g. auditability, sustainability, customer support, regulatory compliance, SLA fulfillment, etc. These properties do not relate directly to the service provided by the CSP but play a decisive role in developing the Cloud user's initial trust in the CSP organization. The KPIs for evaluating accountability are shown in Figure 6 and are described below.
- Auditability: Auditability is the ability of a Cloud user to confirm that the CSP is following its claimed policies, standards, and processes. A high degree of auditability will increase the client's trust in a CSP.
- Customer support: A very basic feature that becomes a major need, especially in the case of SaaS, is help and support. CSP support should be available whenever the user needs it. Support may be available in the form of proper documentation or technical assistance via telephone, email, and live chat. CSPs should also provide a knowledge base and user forums as resources [6].
- Legacy support / integration support: In computing, legacy systems are outdated computer applications that are used instead of available upgraded versions, as well as new systems based on older technologies that continue to be used. These applications are mature, work well, and are expensive to modify; therefore, companies are not willing to develop new applications and are happy to use the existing ones. Similarly, in Cloud computing, legacy support is a major issue: companies want their applications to run in the Cloud without major modifications [5], [19].
- License type: License type is important when considering IaaS, because additional fees may be due for the usage of licensed software [5], [6], [22], [23].
- Negotiation of QoS factors: CSPs should also be evaluated for the quality of service (QoS) provided to end-users. In addition to the usually provided set of QoS guarantees, CSPs should be flexible in negotiating QoS factors and should mention this in SLAs [6].
- Regulatory compliance: CSPs must comply with industrial standards such as ISO, SAS, HIPAA, etc. for the services they provide, e.g. authentication, logging, backups and recovery, data access, etc. CSPs must also follow the legal requirements of the government of the host country. If a CSP fails to comply with the regulatory requirements, it loses the trust of its clients.
- SLA fulfillment: An SLA is a bond between a user and the CSP stating the level of service provided, with all features and terms and conditions. Cloud users want Cloud services without any disruption, as disruption can cause a great deal of financial harm to the Cloud user [5], [8], [17], [22]. This KPI compares CSPs on the following factors: SLA methodology, minimum outage duration, service outage credit, performance guarantee, type and details of professional support services, etc. [6].
• Assurance: Assurance refers to the CSP's declaration that the Cloud service will be provided as advertised or as specified in the SLA. To succeed, a CSP needs to maximize the assurance of its services. Assurance can be evaluated in terms of the KPIs shown in Figure 7, which are described below.
- Availability/reliability: Cloud services are used by different companies in different regions and time zones; therefore, uptime will differ between regions. CSPs should make sure that they maintain high uptime for both domestic and international users. Most Cloud users track the uptime of a CSP to measure the availability of the provider. CSPs should make sure that they have measures in place to guarantee service availability, i.e. redundancy of power, internet connection, servers, storage, and security systems [5], [6], [8], [15], [18], [23]-[25].
- Backup system: Cloud storage systems usually consist of several data centers, each comprising hundreds of data servers that require regular maintenance. To provide continuous data access to clients, it is important to replicate user data on multiple servers. CSPs should ensure that they have a backup for each service, i.e. data storage, networks, power supply, etc.
- Disaster recovery / fault tolerance: There should be a proper mechanism to recover from disasters. Data redundancy is crucial for disaster recovery: data should be replicated across different data centers so that Cloud users can get their data back in case of any disaster or fault. The failure of ''Linkup'' [5], which lost half of its customers' data, is an example of such a disaster. It is also the responsibility of Cloud users to mention these points in SLAs [23], [26].
- Service stability: Service stability refers to how consistent the performance of a Cloud service is over long periods. A change in service performance is referred to as service fluctuation or variability. Naturally, a stable Cloud service is better than an unstable one; a service with fault-tolerance and resiliency capabilities has high stability. High service stability is critical for SaaS models.
• Management: This attribute focuses on the control and monitoring features provided by the CSP. Services with richer control and monitoring features will be preferred over others. Management is measured in terms of the KPIs shown in Figure 8, which are described below.
- Data management capabilities: This refers to the CSP's capabilities to manage user data for reliable and fast access. It also includes the tools provided for effective and efficient data management tasks, including access control, privacy, backup, disaster recovery, efficient availability, etc. Deficiencies in the data management capabilities of a CSP increase the risk of using its services.
- Lock-in for data migration: Every CSP uses its own custom format for storing user data, which restricts the migration of user data from one Cloud service to another. Cloud customers should make sure that there are proper statements in the SLA that address this problem; if there are no such clauses, the user should ask service providers about data migration tools [5].
- Monitoring: Cloud users want to monitor the performance of their applications in the Cloud. However, beyond the basic monitoring offered by some CSPs, such monitoring is not widely available. For users with performance-critical applications, monitoring is very important for performance management and optimization; these users will prefer the CSPs that offer the required monitoring tools.
- Value-added services: Some CSPs provide extra tools for value-added services, like push notifications, server-side data encryption, end-to-end encryption, etc. These services may be provided free of cost or for some extra charge. Such value-added services also attract Cloud users.

IV. CLOUD SERVICE RANKING USING MAGIQ
Multi-Attribute Global Inference of Quality (MAGIQ) [1] is a method of assigning a single aggregated evaluation measure to an object that is evaluated in terms of an arbitrary number of attributes. In our case, MAGIQ assigns a single aggregated measure to each Cloud service based on its KPIs. Step 1 in using MAGIQ for ranking Cloud services is to identify the Cloud services to be ranked; the Cloud service evaluator identifies these services.
Step 2 is to determine the KPIs to be used as the basis for ranking. The KPIs used for ranking the Cloud services are determined using UCSMI (Section III). One of the motivations for using MAGIQ is that it supports the hierarchical relationship of KPIs: the KPIs can be compared and preferred at each level of the hierarchy. The hierarchical relationship of KPIs used in MAGIQ is depicted in Figure 9. At the top level (level 1) of the hierarchy are the Cloud service attributes; level 2 consists of the actual KPIs; the Cloud services to be compared are at level 3. In step 3, the Cloud service evaluator evaluates the entities (service attributes/KPIs/actual services) at each level of the hierarchy and ranks them from most important to least important. The service attributes and KPIs are ranked based on user preferences, while the actual services are ranked based on the measurements of each KPI. First, the service attributes are ranked at level 1. Next, at level 2, each set of KPIs (one set per service attribute) is ranked. Then, at level 3, each set of Cloud services is ranked. In step 4, weights (representing relative importance) are assigned to the entities at each level based on their ranks using ''rank order centroids'' (ROC) [27]. For a set of M entities, the ROC value of the entity of rank j is given by

w_j = (1/M) Σ_{k=j}^{M} (1/k).

Thus, the higher the rank, the higher the weight. It should be noted that all weights are normalized on a unit scale. More details about ROC may be found in [27]. Using ROC, the weights of the ranks can be determined easily and efficiently, in contrast to other methods like the analytic hierarchy process and the analytic network process. This is the second major motivation for using MAGIQ in this work.
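The ROC weighting of step 4 is a one-liner; a minimal Python sketch (the function name is ours, not from [27]):

```python
def roc_weights(M):
    """Rank-order-centroid weights for M ranked entities.

    The entity of rank j (1-based, 1 = most important) receives
    w_j = (1/M) * sum(1/k for k = j..M).  The weights are positive,
    strictly decreasing with rank, and sum to 1.
    """
    return [sum(1.0 / k for k in range(j, M + 1)) / M for j in range(1, M + 1)]
```

For example, `roc_weights(3)` gives approximately [0.611, 0.278, 0.111], so the top-ranked entity always dominates without any pairwise comparisons.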
The third major motivation for using MAGIQ is that the ROC values are independent of the actual measurements of the KPIs and thus can be used with the same effectiveness for measurements of different units and scales. In step 5, each set of Cloud services in level 3 of the KPI hierarchy is ranked based on each KPI's actual measurements for those services. Then, the weights are determined for each Cloud service based on its rank in each set using ROC. In step 6, the aggregated evaluation of each Cloud service for each KPI is determined as

E(S_n, I_m) = ∏_{k=1}^{3} w_k(S_n, I_m),

where E(S_n, I_m) represents the aggregated evaluation of Cloud service S_n for the Cloud KPI I_m, and w_k(S_n, I_m) represents the weight at level k in the vertical hierarchy connecting S_n and I_m. In step 7, the overall evaluation of a Cloud service S_n, i.e., O(S_n), is determined by

O(S_n) = Σ_m E(S_n, I_m),

where the sum runs over all selected KPIs I_m.
In the last step, the Cloud services are ranked on the basis of their O(S_n) values; a higher value of O(S_n) indicates a higher rank.
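The aggregation in steps 6 and 7 can be sketched as follows (a minimal sketch, assuming the three-level hierarchy described above; the function name and dictionary layout are our assumptions, not part of the paper):

```python
def magiq_scores(attr_w, kpi_w, svc_w):
    """Overall MAGIQ evaluation O(S_n) of each Cloud service.

    attr_w: {attribute: weight}                    (level 1, from ROC)
    kpi_w:  {attribute: {kpi: weight}}             (level 2, from ROC)
    svc_w:  {(attribute, kpi): {service: weight}}  (level 3, from ROC)

    E(S_n, I_m) is the product of the weights along the vertical
    path attribute -> KPI -> service; O(S_n) sums E over all KPIs.
    """
    scores = {}
    for attr, wa in attr_w.items():
        for kpi, wk in kpi_w[attr].items():
            for svc, ws in svc_w[(attr, kpi)].items():
                scores[svc] = scores.get(svc, 0.0) + wa * wk * ws
    return scores  # services are then ranked by descending score
```

A service that ranks high on highly weighted attributes and KPIs accumulates a larger O(S_n) and therefore a higher final rank.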

V. CASE STUDY
To understand how the MAGIQ technique works for ranking Cloud computing services, let us consider a simple case of ranking six Cloud services. In step 1, we identify the Cloud services as CS1, CS2, CS3, CS4, CS5, and CS6. In step 2, to keep the case simple, we choose three Cloud service attributes, namely security, performance, and finance, for ranking the six Cloud services. It should be noted that our proposed method is flexible enough to include all or only user-preferred Cloud service attributes and KPIs. In this case study, we consider security > performance > finance, where ">" indicates "is more important than". The corresponding ROC weights are 0.611 for security, 0.278 for performance, and 0.111 for finance.
In the next step, we choose two KPIs to evaluate each of security, performance, and finance: data encryption and client privacy, efficiency and response time, and base-plan cost and data storage cost, respectively. Further, we consider data encryption > client privacy, efficiency > response time, and base-plan cost > data storage cost, where ">" indicates "is more important than". For each pair, the ROC weights are 0.75 for the more important KPI and 0.25 for the less important one. The weights of the selected Cloud services for the selected attributes and KPIs, the aggregated evaluation of each Cloud service for each KPI, and the overall rank values are summarized in Table 1. Based on the overall rank values, the six Cloud services can be ranked as CS1, CS2, CS3, CS4, CS6, CS5 or CS1, CS2, CS3, CS6, CS4, CS5 (CS4 and CS6 obtain equal overall scores).
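For illustration, the level-wise ROC weights used in this case study follow directly from the ROC formula and can be reproduced with a short sketch (variable names are ours; the level-3 assignment of these weights to individual services depends on the per-KPI measurement ranks reported in Table 1):

```python
def roc_weights(m):
    # ROC weight of rank j (1-based): (1/m) * sum(1/i for i = j..m)
    return [sum(1.0 / i for i in range(j, m + 1)) / m for j in range(1, m + 1)]

# Level 1: security > performance > finance
attr_w = dict(zip(["security", "performance", "finance"], roc_weights(3)))
# -> security 0.611..., performance 0.278..., finance 0.111...

# Level 2: each attribute has two KPIs, the more important one first
kpi_w = dict(zip(["more important", "less important"], roc_weights(2)))
# -> 0.75 and 0.25

# Level 3: the six services ranked under each KPI receive these weights
svc_w = roc_weights(6)
# -> 0.408..., 0.242..., 0.158..., 0.103..., 0.061..., 0.028...
```

Multiplying the weights along each attribute-KPI-service path and summing over the six KPIs then yields the overall scores used for the final ranking.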

VI. RELATED WORK
There have been two main classes of methods used for evaluating Cloud service performance: measurement-based evaluation (MBE) and analytical model-based evaluation (AMBE). In MBE, an appropriate benchmark workload for the evaluation of specific KPIs of the service is selected and run on the Cloud service. Then, the KPIs are measured and analyzed. In AMBE, system modeling approaches like queueing theory, network calculus, stochastic reward net, etc. are used to develop an overall profile of a Cloud service. AMBE approaches target Cloud service evaluation for a class of applications, for example, HPC applications, e-commerce applications, etc.
The work in [28] proposes 8 principles and best practices for measuring Cloud service performance. Li et al. [29] used 10 service attributes in an overall MBE and comparison of four public CSPs: AWS (Amazon Web Services), CloudServers, Google AppEngine, and Microsoft Azure. The authors also compared the selected Cloud providers for their overall performance on web and scientific applications. Otay and Yıldız [30] used six KPIs to evaluate Cloud services and a Fuzzy Analytic Hierarchy Process (AHP) and VIKOR-based multi-criteria decision-making method to select Cloud services in fuzzy environments. Alam et al. [31] also used Fuzzy AHP to evaluate the overall performance of public Cloud services using 30 factors; however, their evaluation missed several important Cloud service attributes. Abdel-Basset et al. [32] modeled Cloud service evaluation using a neutrosophic analytical hierarchical process. The authors expressed Cloud service performance information in terms of triangular neutrosophic numbers, which requires a strong technical background to apply. Garg et al. [7] used a framework for evaluating and ranking public Cloud infrastructure services (IaaS) employing 15 quality of service attributes, and used the analytical hierarchical process to model the overall performance of an IaaS. Atas and Gungor [33] ran a set of benchmark algorithms on the PaaS Cloud services of OpenShift, Heroku, and Cloud Foundry and used logical scoring of preferences and the analytical hierarchical process to evaluate their overall performance. Jatoth et al. [34] used a multi-criteria evaluation model based on Grey TOPSIS and AHP to select a CSP from an available group using different QoS parameters.
In contrast to these efforts, our framework is comprehensive, covering all major attributes of Cloud services, and thus can be effectively used across different service models. Our performance model is also flexible enough to use some or all attributes and thus can be easily used by technical as well as common Cloud users.
Amazon has been among the early public CSPs, and several researchers have evaluated Amazon Cloud services for different classes of applications. For example, Garfinkel [35] and Nguyen et al. [36] used Amazon's computing (EC2), storage (S3), and queuing (SQS) Cloud services in their performance evaluation tests. There have also been efforts to evaluate public Clouds for tightly-coupled high-performance computing workloads. The authors in [37] evaluated Amazon's EC2 for HPC scientific applications and found it suitable for small-sized HPC applications. Similarly, Expósito et al. [38], [39] evaluated Amazon EC2 for data-intensive applications. Jackson et al. [40] evaluated EC2 for supercomputing applications and found it unsuitable. The performance of database applications on EC2 was evaluated in [41]. The authors in [42] evaluated the impact of virtualization on the stability and homogeneity of EC2 for web applications. The authors in [43] evaluated EC2 for its I/O performance and compared it with an HPC cluster and a private Cloud; they found that the I/O performance of both private and public Clouds was significantly lower than that of HPC systems.
Scientific workflow applications, consisting of several loosely-coupled parallel tasks with control and data dependencies, were used to evaluate the performance of EC2 by Juve et al. [44]. The authors discovered that EC2 performance is lower than that of classic HPC systems. In another effort, Iosup et al. [45] evaluated four public Cloud services (GoGrid, EC2, Mosso, and ElasticHosts) for many-task computing applications and concluded that the performance of these Clouds needs to be improved to support HPC many-task computing workloads. The authors in [46] evaluated the performance of Amazon AWS, Google AppEngine, and Microsoft Azure for database-intensive applications. Contrary to these approaches, our focus is the comparative evaluation of Cloud services in terms of major service attributes instead of application-specific and low-level performance evaluation.
Cloud services are also being used for emerging application domains, such as big data and smart systems, and their performance evaluation has also been addressed in some recent efforts. The authors in [47], [50] exploited network calculus to develop Cloud performance profiles. Bruneo [51] used a stochastic reward net for the evaluation of IaaS Cloud services. Similarly, the authors in [52] used queuing theory for Cloud performance models, evaluating the performance of both Cloud applications and virtual machines. The authors in [53] modeled the performance of Cloud computing systems using queuing networks, stochastic processes, and time series to predict their service quality. Azadi et al. [54] used network data envelopment analysis based models to evaluate the performance of 18 CSPs. Saravanan et al. [55] employed a Bayesian network model of a CSP's previous performance and user feedback for ranking and selection of CSPs. AMBE approaches are restricted to specific Cloud services and applications and thus cannot be widely used. In addition, these models require model-related background knowledge and thus are not suitable for common Cloud users. Contrarily, our performance model is simple and can be easily used by common Cloud users without any technical or background knowledge.

VII. CONCLUSION AND FUTURE WORK
Cloud services are becoming an elemental part of many computing systems. In turn, the performance of these systems is driven by the performance of the Cloud services they use. Thus, it is crucial to choose Cloud services whose performance matches user requirements. Such a choice requires a thorough evaluation of Cloud services according to user needs and preferences. However, the evaluation of Cloud services is challenging for several reasons, including the lack of performance standards for Cloud services, the different features offered by service providers, and differences in underlying technologies. To this end, this article provides a comprehensive framework for Cloud service evaluation using a unified Cloud service measurement index. The proposed index covers different aspects of the performance of Cloud services in terms of 8 top-level attributes and 65 detailed KPIs. The comprehensive nature of the framework addresses the needs of different Cloud models. The overall performance of Cloud services is evaluated and ranked using the MAGIQ method, which considers user preferences over service attributes/KPIs. The flexibility of the proposed framework to consider user preferences about Cloud attributes makes it suitable for users with different needs. Our method can easily be used both by technical Cloud users, who have all technical information, and by common Cloud users, who have information about only some attributes/KPIs. Our future work includes using other evaluation and ranking methods that can exploit fuzzy as well as non-quantifiable performance information. We also plan to develop separate models for functional and non-functional KPIs.
FARRUKH NADEEM received his B.Sc. (as a gold medalist) and M.Sc. in computer science from the University of Punjab, Pakistan. He completed his Ph.D. with distinction in computer science in 2009 at the University of Innsbruck, Austria. He is an associate professor at the Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah. He received several distinctions and awards during his educational career. He has been involved in several Austrian research projects and is working on a couple of Saudi research and development projects. He has received professional training in Cloud computing and high-performance computing, and has set up a Grid computing infrastructure at the Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah. He is a member of the program committees of several conferences and an editorial board member of the Journal of Modern Education and Computer Science. He has authored more than 29 conference and journal research papers, including four book chapters, and was awarded the President's (King Abdulaziz University) Certificate of Appreciation and a cash award for one of his journal publications. His main research interests include performance modeling and prediction, the Internet of Things, and smart healthcare.