A Hybrid Framework for Ranking Cloud Services Based on Markov Chain and the Best-Only Method

Cloud computing technology has undergone tremendous growth in recent years, and there are now many cloud service providers (CSPs). This makes CSP selection a challenging process for cloud users. Further complications arise when users modify the priorities of their requirements. Moreover, concerns such as complex computation, inconsistencies, and rank reversal have been raised in current approaches, resulting in less reliable results. This study presents a new hybrid multiple-criteria decision-making (MCDM) framework for ranking cloud services based on Markov chains combined with the best-only method (BOM). The Markov chain is used to record and track the changes in the priorities of user requirements and determine their final values. Then, the BOM method is utilized to determine the final weights of the QoS criteria based on pairwise comparisons made by the cloud user or decision maker and the final priorities of the user’s requirements. Finally, the cloud services are ranked, and the best CSP is selected. The proposed framework was validated using a case study and a real dataset. Performance, consistency, rank conformance, and sensitivity analyses were performed to evaluate the proposed framework. The obtained results prove that the proposed framework is computationally efficient and fully consistent, considers the user requirements and their transition patterns, and is robust to rank reversal.


I. INTRODUCTION
The cloud computing model is a pool of online resources, which are provided according to service-level agreements formed through a contract [1], and which should be created and virtualized dynamically in response to customer requests [2] on an on-demand basis [3]. Moreover, the number of cloud service providers offering a wide variety of cloud services is growing. In addition, customers are increasingly demanding cloud services. Therefore, it is essential to create cloud services in an automated manner that is dynamic and flexible to support the everything-as-a-service model and satisfy customer requirements as they emerge [4], [5]. Owing to the various services offered by multiple cloud service providers (CSPs), cloud users may have difficulty selecting an appropriate CSP. Researchers have recognized the significance of this problem in developing mechanisms for selecting the best CSP. Selecting the best CSP requires assessing the cloud services based on QoS metrics and formulating an algorithm to rank them according to these criteria [6]. There are two forms of QoS criteria: functional and nonfunctional [7]. The cloud customer or decision maker compares the different services available in the cloud based on these QoS criteria. Accordingly, they select the most suitable cloud service based on multiple conflicting QoS criteria [8]. Multi-criteria decision-making (MCDM) methods have been widely used to address various decision-making problems [9], [10]. The importance of each QoS criterion in the cloud service selection process varies based on its weight. Users' requirements and priorities determine the weights of the QoS criteria.

(The associate editor coordinating the review of this manuscript and approving it for publication was Alba Amato.)
Different decision-making techniques are then applied to QoS criteria and user requirements to assess whether a particular cloud service is suitable; however, the uncertain nature of the cloud environment often prevents service users from relying on these techniques [11]. Uncertainty arises from the fact that QoS criteria weights are linked to customer requirements and depend on the customer's level of satisfaction with the service. This level of satisfaction may fluctuate over time during service use. Therefore, user requirements and priorities change rapidly over time [12]. As a cloud service provider, it is crucial to fully understand and satisfy customer needs in an increasingly competitive marketplace [13], [14], [15]. Cloud service providers should be familiar with the transition patterns of customer requirements to meet the needs of current and future customers effectively. They must also design their services based on the features that customers need and expect. As users use cloud services, they may change their requirement priorities based on their experience. It is possible to collect information about the changed priorities from users to determine whether they are satisfied with the recommended service before and after use. The level of satisfaction of individuals with similar service demands can be analyzed by grouping them according to similar service requirements, as demonstrated in [16]. Thus, other users with similar service requirements can be guided to choose the most appropriate cloud service.
Although most MCDM approaches have been thoroughly validated, they still suffer from some weaknesses, including low comparison consistency, complex comparison systems, and an overall increase in computing complexity, which remains a significant barrier when choosing cloud providers [17].
A number of MCDM approaches suffer from a rank reversal problem [18], [19], which occurs when adding a CSP to or deleting one from the cloud service repository changes the relative ranking of the remaining CSPs, so that a previously non-optimal CSP becomes ranked as optimal. A rank reversal in cloud service selection misleads the cloud user and, owing to improper service selection, results in substantial losses in the long run. Consequently, it would be beneficial to have a framework for selecting cloud services that is robust to rank reversal.
In this study, we propose a novel hybrid MCDM framework for ranking cloud services based on Markov chains combined with the best-only method (BOM), which is robust to rank reversal. To the best of our knowledge, no method in the current literature combines Markov chains and the BOM for ranking cloud services. The Markov chain is used to record and track the changes in the priorities of the user requirements and to determine their final priorities. Then, the best-only method [17], an MCDM approach, is employed to determine the final weights of the QoS criteria based on pairwise comparisons performed by the cloud user or the decision maker and the final priorities of the user's requirements. Finally, the cloud services are ranked, and the best CSP is selected. The main contributions of this study are as follows:
- We introduce a hybrid MCDM framework based on Markov chains and the best-only method for ranking cloud services.
- Markov chains are used to determine the priorities of user requirements (PURs) and their transition patterns.
- We apply the BOM to determine the final weights of the QoS criteria based on pairwise comparisons and the final priorities of the user's requirements.
- The proposed framework was evaluated using a real-world dataset and proved to be effective, efficient, and robust against rank reversal.
The remainder of this paper is organized as follows: Section 2 summarizes related work. In Section 3, the proposed framework is presented. A case study based on a real dataset is used to validate the proposed framework in Section 4. Section 5 analyzes the performance, conformity, and sensitivity of the proposed framework and discusses the results. Finally, we summarize our findings in Section 6 and provide recommendations for future research.

II. RELATED WORK
Owing to the existence of numerous cloud service providers offering similar services, the selection of a cloud service has become an important problem in the field of cloud computing. Researchers have proposed various approaches to solving the cloud service selection problem. MCDM methods contribute significantly to solving the service selection problem because they rank services based on multiple conflicting quality of service criteria [20]. The technique involves comparing different QoS metrics among many CSPs and combining them to identify the best cloud service provider.
The analytic hierarchy process (AHP) method is an MCDM process that relies on an expert's judgments for decision making. Three phases are involved. First, a complex decision problem is decomposed into a hierarchy. Second, pairwise comparisons are used to determine the weight of each criterion at each hierarchy level. Finally, the weights are normalized, and the final ranking is determined. Garg et al. [21] developed a method for quantifying QoS criteria and ranking cloud services based on the AHP. Using SMICloud, they created metrics for each quantifiable QoS attribute proposed by the cloud service measurement index consortium (CSMIC) and ranked the cloud services based on these attributes. Moreover, they offered a uniform method for evaluating the relative ranking of cloud services according to different QoS attributes. This overcomes the problem of different dimensional units of various QoS attributes. However, they did not account for variations in QoS attributes, such as performance, nor did they design models for non-quantifiable QoS attributes.
The technique for order preference by similarity to ideal solution (TOPSIS) is a commonly used MCDM technique that helps decision makers find solutions to complex problems. It has applications in several areas, including healthcare, transportation, and supplier selection. In this method, the best alternative is the one closest to the ideal option and farthest from the worst option [22]. Kumar et al. [23] proposed a hybrid MCDM approach that enables cloud users to evaluate cloud services based on the QoS criteria. The best-worst method (BWM) [24] was used to weigh the QoS criteria, and TOPSIS was conducted to obtain the final rank of cloud services. A major limitation of this study is that it cannot handle frequent and continuous changes in customer requirements. Furthermore, because our approach is based on the best-only method rather than the best-worst method, it requires fewer pairwise comparisons, resulting in a more computationally efficient framework and fully consistent results.
To deal with problems involving incomplete information and small samples, grey theory was developed. Grey theory emphasizes the creation of models from limited datasets. The TOPSIS method was combined with grey theory to form grey TOPSIS. Jatoth et al. [25] proposed a hybrid MCDM approach in which cloud services are ranked based on the QoS parameters assigned to the services and user requirements. They used AHP to specify the relevance of various QoS criteria. To rank the available services and select the most suitable ones, AHP was combined with the grey TOPSIS method. In response to the uncertain judgments of the quantitative criteria, they conducted a pairwise comparison using comparative linguistic variables. Grey numbers are included in the TOPSIS to account for these uncertainties. Sensitivity analysis was performed to validate their approach.
Rehman et al. [26] developed an IaaS cloud service selection method based on user criteria measured over multiple nonoverlapping periods. For service ranking, they applied an independent MCDM method in parallel for each time period. Subsequently, the results from each service selection were combined using an aggregation method to determine the overall service rank, which was used to select the most appropriate service for the customer. Sun et al. [27] presented an MCDM approach (Cloud-FuSeR) that uses fuzzy knowledge provided by the user to select appropriate cloud services according to functional and non-functional requirements. Cloud-FuSeR consists primarily of three parts: (1) a fuzzy cloud ontology, which enables the comparison of cloud services and the selection of services with the most significant match to user requirements; (2) a fuzzy AHP method, which computes the weights of the nonfunctional criteria; and (3) a fuzzy TOPSIS method, which ranks the cloud services according to the weights of the nonfunctional criteria.
Tiwari and Kumar [22] proposed a new mechanism for selecting cloud services based on Gaussian TOPSIS. The proposed method considers the QoS feedback provided by cloud service users to rank cloud services. The priority assigned by the end user to the QoS criteria was also considered. To normalize the QoS values, a cumulative Gaussian density function was used. A real dataset is used to verify the proposed mechanism. The obtained results confirmed that the framework ranks cloud services equally with the TOPSIS method and that the rank reversal problem has been addressed. However, it does not consider the qualitative QoS criteria or interdependency among them.
Song et al. [28] demonstrated an integrated methodology for predicting the state of customer requirements. This methodology integrates the power of the Kano model in categorizing customer requirements, the effectiveness of grey theory for predicting trends with fewer data inputs, and the potential of a model such as Markov chains for predicting local fluctuations. Based on advanced knowledge of the transition rules of customer requirements, the proposed method leads to more precise and trustworthy predictions of the requirements. Thus, companies can ensure that they are designing the most suitable product for the customer at the correct time. The authors demonstrated the potential of this method by predicting customer-requirement patterns for mobile phones.
Nawaz et al. [29] developed a cloud broker to facilitate the selection of cloud services according to user requirements and the shifting preferences of users over time. Their study utilized a Markov chain model to determine a pattern of user requirements related to the QoS criteria of cloud services and BWM as an MCDM method for ranking the services. The proposed model was validated through a case study based on real-world performance metrics for Amazon EC2 cloud services. As our proposed framework uses the best-only method rather than the best-worst method, we can reduce the number of pairwise comparisons needed to determine the weights of the service criteria, resulting in a more consistent and cost-effective approach.
A framework for supporting group decision making was proposed by Liu et al. [30], which incorporates regret theory and average solution distances. A major objective of their strategy is to handle the diversity of the criteria involved, as well as the psychological behavior of the CSP selection team, which significantly affects the outcome of the selection process. To prevent the distortion of diverse information caused by conventional conversion methods, the various forms of hybrid information are first processed separately. Subsequently, the respective regret-rejoice function values are defined, and a decision-support procedure based on these values is introduced into the framework. Finally, a methodology for determining expert weights is proposed, utilizing a consensus-maximizing model in conjunction with a methodology for calculating criteria weights based on the group best-worst method.
Uma and Evangelin Geetha [31] developed a framework for selecting cloud service providers. The service criteria are weighted using the full consistency method (FUCOM), whereas multi-objective optimization based on ratio analysis (MOORA) is proposed to rank CSPs.
Hussain and Merigó [32] proposed a framework for a centrally managed quality-of-experience and service repository. A PROMETHEE-II approach is used, in which each alternative is analyzed based on a set of custom-weighted quality of service attributes defined by the consumer. The framework ensures the continued economic growth of the cloud marketplace and facilitates the development of trust and a sustainable relationship among all parties involved.
Mandal and Khan proposed a model for ranking CSPs based on combined compromise solutions (CoCoSo) [33]. They found that trust issues caused by conflicting interests impede the selection of a CSP that satisfies both functional and non-functional needs. Based on the proposed model, decision-makers are guided through the evaluation process using linguistic judgments, which consider the lack of clarity, ambiguity, subjectiveness, and indeterminacy.
Bootheraa et al. [34] proposed a custom cloud service selection model using a neural network and identified the trend of changing user priorities. To train the neural network, a variety of multicriteria decision-making methods were employed. Using the trained neural network, the most efficient CSP was selected based on user preferences.
Thus, from the discussion above, we can conclude that many MCDM approaches have been proposed in the literature to assist in the service selection process in a cloud environment. However, these approaches lack at least one of the following characteristics.
- Consideration of changing priorities in user requirements when selecting cloud services.
- Computational efficiency across a wide range of QoS criteria.
- Full consistency in pairwise comparisons.
- Robustness to rank reversal in different situations.
This study presents a hybrid MCDM framework based on Markov chains and the best-only method for assisting cloud users in ranking their cloud services. This framework is computationally efficient and consistent in terms of pairwise comparison. Furthermore, the ranking of cloud services depends on user requirements and their transition patterns. In addition, the proposed framework demonstrated robustness to rank reversal in several cases.

III. THE PROPOSED FRAMEWORK

A. SYSTEM ARCHITECTURE
This study proposes an innovative hybrid MCDM framework for ranking cloud services based on the Markov chain and the best-only method. The framework explains how cloud customers can find the most appropriate cloud service provider based on QoS criteria requirements. It also ranks services based on users' previous experience and service performance. This framework is a decision tool that assists cloud users in selecting the best service that meets their functional and nonfunctional needs. Fig. 1 illustrates the proposed framework for selecting and ranking cloud services. The following is a brief description of the main modules of the proposed framework:

1) CLOUD SERVICES DISCOVERY MODULE
The responsibility of this module is to interact with the customer and collect a list of the required services and service requirements (e.g., number of CPU cores, amount of memory, budget, number of virtual machines, and disk capacity).
The module searches the cloud service repository for various cloud services offered by various cloud service providers. It then generates a list of cloud services eligible for inclusion. The ranking module uses the generated list of cloud services to rank them.
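As a rough illustration, the discovery step can be sketched as a filter over the repository. The repository layout, field names, and threshold values below are hypothetical; the paper does not prescribe a data model:

```python
def discover_csps(repository, requirements):
    """Return providers whose offerings cover every stated requirement.

    `repository` maps a provider name to its offered resources, and
    `requirements` lists the minimum values the customer needs. Both
    shapes are illustrative, not part of the framework's specification.
    """
    return [
        name for name, offer in repository.items()
        if all(offer.get(key, 0) >= value for key, value in requirements.items())
    ]

# Hypothetical repository with two providers
repo = {
    "csp1": {"cpu_cores": 8, "memory_gb": 32, "vms": 10},
    "csp2": {"cpu_cores": 4, "memory_gb": 16, "vms": 5},
}
eligible = discover_csps(repo, {"cpu_cores": 8, "memory_gb": 16})
```

The eligible list produced here would then be handed to the ranking module, mirroring the hand-off described above.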

2) CLOUD SERVICES REPOSITORY
It contains each cloud provider's detailed functional and nonfunctional specifications, in addition to the QoS criteria values monitored by the cloud service benchmark module.

3) MARKOV CHAIN MODULE FOR TRACKING PURS
Utilizing the Markov chain method, this module tracks changes in the priorities of the user requirements. Additionally, it reads the initial priorities of the user requirements from the cloud customer and calculates the final priorities of the user requirements using the transition matrix.

4) HISTORICAL DATA OF PURS
Stores historical information regarding the priorities of user requirements to facilitate the calculation of the transition matrix.

5) CLOUD SERVICES BENCHMARK MODULE
A trustworthy third party manages the cloud services benchmark module, which analyzes the performance metrics of cloud services under various conditions and provides public access to the results. The results of cloud benchmarking are based on low-level performance metrics such as CPU performance, memory performance, storage performance, network latency, and bandwidth. Cloud service ranking relies on the information residing in the repository.

6) CLOUD SERVICES RANKING MODULE
The ranking module is responsible for two main tasks in the proposed framework. First, it applies the best-only method to separately calculate the weights of the QoS criteria for each user requirement. It then computes the aggregated QoS weights by utilizing the final priorities of the user requirements retrieved from the Markov chain module.
Second, the QoS metrics from the cloud services repository module are normalized and combined with the aggregated QoS weights to rank cloud services. The ranking is then provided to the cloud customers.

B. METHODOLOGY
This section introduces the proposed hybrid MCDM framework for selecting and ranking cloud services. A flowchart of the process is shown in Fig. 2. The flowchart is divided into four key phases.

1) IDENTIFICATION OF THE LIST OF QOS CRITERIA AND CLOUD SERVICE PROVIDERS
Cloud customers provide a list of the services they require. This list contains the general types of cloud services required, without naming specific cloud service providers. In addition, customers specify their service requirements. The cloud service discovery module examines the cloud service repository for the various cloud services offered by different service providers. It then generates the list of cloud service providers (CSP) that match the service requirements and the list of QoS criteria (QC):

CSP = {csp_1, csp_2, ..., csp_m}    (1)
QC = {qc_1, qc_2, ..., qc_n}    (2)

2) MARKOV CHAIN METHOD
Cloud service providers must ensure that their services are of high quality and satisfy a wide range of customer needs. This enables them to create a competitive advantage in the market [35]. Consequently, we require the cloud customer to submit an initial list of priorities for user requirements (IPUR). However, there is no certainty that these priorities remain constant, as they may change over time.
It is possible to trace changes in user requirements by applying the Markov chain, which has previously been used to predict patterns of changing user requirements in other studies [16], [28], [29], [36], [37].
The Markov chain model relies on transition probabilities to predict the evolution of stochastic systems, which makes it suitable for predicting a data sequence even when it fluctuates considerably [28]. Using a Markov chain, we search for a pattern in the priorities of user requirements, which are discrete events. It is common for users to alter their requirements and priorities continuously. Therefore, it is necessary to forecast the dynamic transition pattern of user requirements early to best select and rank appropriate services based on actual requirements. Globally, these changes may be observed because they involve many other users of similar services. Accordingly, a general pattern of changes in user requirements can be discerned.
Consequently, we should record the initial priorities of user requirements and calculate their probabilities for a group of users with similar requirements. We then need to keep track of the changes in these probabilities over time as users downgrade or upgrade their priorities.
Assume that the set of user requirements (UR) and the initial list of priorities of user requirements (IPUR) are obtained directly from the users, with the values in the IPUR vector normalized:

UR = {ur_1, ur_2, ..., ur_r}

Priorities may shift from one ur to another over time. The time required for this shift is determined by the frequency with which the user utilizes the service, which varies according to the service category [29]. By observing user priorities over time, we can identify the proportion of users who regard ur_i as their highest-priority requirement and wish to switch to ur_j. Let Z_ix be the number of users who initially place ur_i at the top of their priority list at time t_x, and Y_ijx the number of those users who wish to change their top priority to ur_j at the same time. The probability of this transition is then calculated using Eq. (5):

T_ij(x) = Y_ijx / Z_ix    (5)
Following a sequence of periods, if we can estimate a T_ij value that satisfies the condition in Eq. (6), we construct a transition matrix using the values of T_ij [29]:

|T_ij(x+1) − T_ij(x)| < ε    (6)

where ε is a very small value.
The decision maker can determine the number of periods (c) used to estimate the values in the transition matrix and recalculate it.
The transition matrix is constructed as follows:

TM = [T_ij], i, j = 1, ..., r

that is, an r × r row-stochastic matrix whose entry in row i and column j is the estimated probability that a user's top priority shifts from ur_i to ur_j.
The final list of priorities of user requirements (FPUR) can be calculated by initially setting the values of FPUR to the values of IPUR and then repeatedly multiplying the transpose of the FPUR vector by the transition matrix (TM), as shown in Eq. (8) [29]:

FPUR^T ← FPUR^T × TM    (8)

Because stochastic matrices inevitably converge, the vector is expected to settle after three to five multiplications. According to [29] and [38], the adjusted priorities of user requirements are independent of the initial state. Therefore, it would be prudent to focus on creating the transition matrix. Once the pattern of user requirements has been identified, the connection between this pattern and the QoS criteria must be defined through the BOM.
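The transition-matrix estimation of Eq. (5) and the repeated multiplication of Eq. (8) can be sketched as follows. The counts Z and Y and the initial priorities are hypothetical illustration data, not values taken from the case study:

```python
import numpy as np

def transition_matrix(Z, Y):
    """Estimate TM from observed counts (Eq. (5)).

    Z[i]    -- number of users whose top-priority requirement is ur_i
    Y[i][j] -- number of those users who shift their top priority to ur_j
    """
    Z = np.asarray(Z, dtype=float)
    Y = np.asarray(Y, dtype=float)
    return Y / Z[:, None]          # T_ij = Y_ij / Z_i; each row sums to 1

def final_priorities(ipur, tm, eps=1e-6, max_iter=100):
    """Repeatedly multiply FPUR^T by TM (Eq. (8)) until the vector settles."""
    fpur = np.asarray(ipur, dtype=float)
    for _ in range(max_iter):
        nxt = fpur @ tm
        if np.max(np.abs(nxt - fpur)) < eps:   # convergence test, cf. Eq. (6)
            return nxt
        fpur = nxt
    return fpur

# Hypothetical example with three user requirements
Z = [40, 35, 25]
Y = [[30, 6, 4], [7, 24, 4], [5, 5, 15]]
tm = transition_matrix(Z, Y)
fpur = final_priorities([0.5, 0.3, 0.2], tm)
```

Because TM is row-stochastic, the FPUR vector remains normalized throughout the iteration, matching the convergence behavior described above.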

3) THE BEST-ONLY METHOD
In this phase, we calculate the overall weights of the QoS criteria according to the user requirements using the best-only method [17]. First, we compute the weights of the QoS criteria for each user requirement individually. We then aggregate the individual weights to determine the final weights of the QoS criteria (FQCW). The steps for calculating the QoS criteria weights based on BOM [17] are as follows:

Step 1: Select one user requirement ur from the set UR = {ur_1, ur_2, ..., ur_r}, with respect to which the weights of the QoS criteria QC = {qc_1, qc_2, ..., qc_n} will be calculated.
Step 2: Determine the best criterion B with respect to the selected user requirement ur, where B ∈ QC.
Step 3: Determine the pairwise comparison vector (A) of the best criterion B against the other QoS criteria in QC, using a number between one and nine, where one indicates that the compared criterion is as preferred as B and nine indicates that it is the least preferred. The resulting vector is A = (a_B1, a_B2, ..., a_Bn), where a_Bi denotes the preference of the best criterion B over criterion i, and a_BB = 1.
Step 4: To compute the optimal weights (UR_QC_ur) of the QoS criteria (QC) relative to the selected user requirement (ur), form and solve the following set of linear equations, given in Eqs. (9) and (10):

w_B / w_j = a_Bj, for all qc_j ∈ QC    (9)
Σ_j w_j = 1, w_j ≥ 0 for all j    (10)
Step 5: Repeat Steps 2 through 4 for all the remaining user requirements ur ∈ UR.
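Because BOM enforces w_B / w_j = a_Bj exactly for every criterion, the system of Eqs. (9) and (10) admits a closed-form solution, which is one reason the method is fully consistent. A minimal sketch of this reading follows; the comparison vector used is hypothetical:

```python
def bom_weights(a_best):
    """Best-only method weights from the best-to-others vector.

    a_best[j] is the preference of the best criterion over criterion j
    (a value in 1..9, with a_best entry 1 for the best criterion itself).
    Since w_best / w_j = a_Bj must hold exactly, each weight is
    proportional to 1 / a_Bj, normalized so the weights sum to one.
    """
    inv = [1.0 / a for a in a_best]
    total = sum(inv)
    return [v / total for v in inv]

# Hypothetical comparison vector for five QoS criteria,
# with the first criterion chosen as the best (a_BB = 1)
weights = bom_weights([1, 2, 4, 8, 8])
```

With this vector the best criterion receives weight 0.5 and the rest scale down in exact proportion to the stated preferences, so the consistency ratio is zero by construction.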

4) OBTAINING THE SERVICE RANKING
The final phase of the proposed framework involves calculating the final ranking of the CSPs as follows:

Step 1: Construct the matrix UR_QC containing the calculated weights of the QoS criteria (QC) in relation to all user requirements (UR).
Step 2: Calculate the final aggregated weights of the QoS criteria (FQCW) by multiplying the transpose of the final list of priorities of user requirements (FPURs) by matrix (UR_QC).
Step 3: Construct a decision matrix (DM) that contains the QoS criteria values for all selected CSPs. These values can be obtained from the cloud service repository, as the cloud service benchmark module stores them.
Step 4: Construct the normalized decision matrix (NDM) by scaling all values of the decision matrix to the range 0 to 1 using the cumulative normal distribution function given in Eq. (14) [22]:

NDM_ij = Φ((DM_ij − μ_j) / σ_j)    (14)

where μ_j and σ_j are the mean and standard deviation of criterion j across the selected CSPs, and Φ is the standard normal cumulative distribution function (for negative criteria, the complement 1 − Φ is used).

Step 5: Calculate the final CSP ranking vector (RCSP) by multiplying the normalized decision matrix (NDM) by the final aggregated weights of the QoS criteria (FQCW):

RCSP = NDM × FQCW    (15)
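The ranking phase can be sketched as follows, under the assumption that the Gaussian normalization of Eq. (14) is complemented for negative (cost-like) criteria; the decision matrix and weights are hypothetical illustration values:

```python
import numpy as np
from math import erf, sqrt

def gaussian_normalize(col, positive=True):
    """Eq. (14): map raw criterion values to (0, 1) via the standard
    normal CDF; negative criteria are complemented (assumption)."""
    col = np.asarray(col, dtype=float)
    mu, sigma = col.mean(), col.std()
    phi = np.array([0.5 * (1 + erf((x - mu) / (sigma * sqrt(2)))) for x in col])
    return phi if positive else 1.0 - phi

def rank_csps(dm, polarity, fqcw):
    """Eq. (15): RCSP = NDM x FQCW; a higher score means a better CSP."""
    ndm = np.column_stack(
        [gaussian_normalize(dm[:, j], polarity[j]) for j in range(dm.shape[1])]
    )
    return ndm @ np.asarray(fqcw, dtype=float)

# Hypothetical decision matrix: rows = CSPs, columns = (performance, price).
# In the full framework, FQCW comes from FPUR^T x UR_QC (Step 2);
# it is fixed here for illustration.
dm = np.array([[100.0, 5.0], [200.0, 2.0], [150.0, 3.0]])
scores = rank_csps(dm, polarity=[True, False], fqcw=[0.6, 0.4])
```

The second provider scores highest here, as expected: it has the best performance (a positive criterion) and the lowest price (a negative criterion).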

IV. CASE STUDY

A. USER REQUIREMENTS IDENTIFICATION
Initially, the cloud customer submits a list of the services they require, along with their specifications, to the cloud broker system. Subsequently, the cloud service discovery module examines the cloud service repository and generates a list of cloud service providers that meet the service specifications, together with the QoS criteria for their evaluation. The primary objective of our research is to rank and select the most suitable cloud service providers among various alternatives based on user requirements. Consequently, it is essential to obtain a list of user requirements and their initial priority levels. Let us assume that the set of user requirements (UR) and the initial list of priorities of user requirements (IPUR) are obtained directly from the user as follows:

B. DATASET
An application of the proposed framework is demonstrated on the real-world problem of selecting cloud services, demonstrating its efficiency and usefulness. Our analyses are based on a real dataset, CloudHarmony [39], which dynamically estimates the performance of various cloud services by running benchmark applications on virtual machines for a predetermined period. In the CloudHarmony dataset, five QoS metrics were collected for 11 real-world CSPs: CPU performance, memory performance, disk performance, disk I/O consistency, and price. Table 2 summarizes the QoS criteria and their polarity. According to their polarity, QoS criteria are categorized as positive or negative: a higher value indicates higher quality for a positive criterion and lower quality for a negative criterion.
For this case study, the CSPs were Amazon, HP, Century Link, SoftLayer, Rackspace, Google, Microsoft Azure, GoGrid, City-Cloud, Linode, and Joyent. Table 3 presents the datasets used in this case study.
As shown by Eqs. (19)–(23), the FPUR vector settled after five multiplications. After applying the Markov chain, the final pattern of the priorities of user requirements (FPUR) was used as an input to the BOM method. The purpose is to establish a connection between this pattern and the QoS criteria.
The best QoS criterion is first selected with respect to each user requirement. Then, the other QoS criteria are compared with the best one, and their weights are calculated by repeatedly calling the best-only method [17] (Algorithm 2), as in lines 10–12. Tables 4–8 show the pairwise comparison values and the optimal weights of the QoS criteria for each user requirement. We combined the weights for each user requirement to form the UR_QC matrix. The normalized decision matrix (NDM) is constructed by applying Eq. (14) to the values of the decision matrix. Finally, the CSP ranking vector (RCSP) is calculated using Eq. (15), as shown in line 19. Table 9 shows the ranking of the cloud service providers obtained in our case study.

As can be observed, HP is rated the highest-performing service provider, whereas SoftLayer is rated the worst. HP tops the list because it offers the highest CPU performance among all the service providers; the cloud user gave the highest priority to response time and stability and had selected CPU performance as the best criterion.

V. ANALYSIS AND VALIDATION

A. PERFORMANCE AND CONSISTENCY ANALYSIS
It is becoming increasingly difficult to select cloud services because of the growing number of service providers offering a variety of similar services [23], [40], [41]. Major software companies, including Microsoft, Google, and Amazon, are investing heavily in the development of cloud-based services [42], [43]. To determine the rank of cloud services, efficient methods must be employed to deal with the complexity of processing, especially when budgets are limited or time limits are tight [44], [45].
We compared BOM [17] with BWM [24] and AHP [46] to assess the extent to which the computation time and complexity have been improved for calculating the weight of the criteria. This subsection uses the same QoS dataset as in our case study to compare BOM, BWM, and AHP. Tables 10 and 11 present the optimal weight values for the QoS criteria for each user requirement. In addition, they show the final aggregated weights for the QoS criteria for AHP and BWM. Fig. 3 shows the ranking of cloud services using the AHP, BWM, and BOM methods. As shown in Fig. 3, all methods agreed that CSP2 is the best cloud service provider, whereas CSP7, CSP6, CSP4, and CSP10 are the worst.
To compare the performance of the proposed framework using BOM against BWM and AHP, we calculated the number of pairwise comparisons necessary to determine the optimal weights of the QoS criteria relative to each individual user requirement. The BOM method requires only one vector (for the best comparison), versus two vectors for BWM (for the best and worst comparisons). Let n be the number of QoS criteria. AHP requires n(n − 1)/2 comparisons, BWM requires 2n − 3, and BOM requires n − 1 [17]. Generally, this performance improvement can be attributed to BOM and BWM being vector-based methods, which involve fewer comparisons than matrix-based methods such as AHP. Fig. 4 depicts the number of pairwise comparisons in AHP, BWM, and BOM as a function of the number of criteria.
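The comparison counts above can be encoded directly; the function below simply evaluates the three formulas from the text for a given number of criteria.

```python
def comparisons(n):
    """Pairwise comparisons needed to weight n criteria, per method."""
    return {
        "AHP": n * (n - 1) // 2,  # full pairwise comparison matrix
        "BWM": 2 * n - 3,         # best-to-others and others-to-worst vectors
        "BOM": n - 1,             # best-to-others vector only
    }

print(comparisons(5))  # 5 criteria: AHP needs 10, BWM 7, BOM 4
```

The gap widens quickly: AHP grows quadratically, while BWM and BOM grow linearly in the number of criteria.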
In MCDM methods, the consistency ratio (CR) is a measure of the reliability of the output from the MCDM method. The BOM method is always consistent and yields a constant CR value of zero, making it more consistent and reliable than the AHP and BWM approaches [17].

B. RANK CONFORMANCE ANALYSIS
This section validates whether the rank obtained using the proposed framework is similar to those obtained using other methods. The framework was compared against other cloud service ranking methods based on TOPSIS, G-TOPSIS, the method proposed by Aires et al. [47], and the method proposed by García-Cascales and Lamata [18].
García-Cascales and Lamata [18] demonstrated that TOPSIS exhibits rank reversal when alternatives are introduced because the normalized values of the decision matrix change. The authors introduce two hypothetical alternatives, F1 and F2, which hold the minimum and maximum values of each criterion, respectively. By using absolute normalization instead of relative normalization, their method was shown to be robust against rank reversal. A variant of TOPSIS named R-TOPSIS was proposed by Aires et al. [47]; in R-TOPSIS, each criterion has a weight and an associated domain, and the decision matrix is normalized using the max or max-min procedure. For all cases except those arising from the removal of non-discriminating criteria, R-TOPSIS is robust to rank reversal. Fig. 5 shows the rank of each CSP based on the proposed framework and these methods using the same dataset as in the case study. According to Fig. 5, the results of the five methods are similar: all methods ranked CSP2 as the best cloud service provider, and CSP10 is ranked tenth by all methods except TOPSIS, which ranks it ninth. Thus, it can be concluded that the obtained results are accurate and comparable to those obtained with other MCDM methods.
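The difference between relative and absolute (domain-based) normalization can be illustrated with a toy example; the criterion values and the domain bound below are hypothetical.

```python
import numpy as np

# One benefit criterion for three alternatives (hypothetical values).
values = np.array([40.0, 80.0, 60.0])
domain_max = 100.0  # fixed criterion domain, as used by R-TOPSIS

relative = values / values.max()   # depends on the current alternative set
absolute = values / domain_max     # depends only on the criterion domain

# Adding a new alternative with value 90 changes the relative scores
# of the existing alternatives...
extended = np.append(values, 90.0)
relative_new = extended / extended.max()

# ...but leaves their absolute (domain-normalized) scores unchanged,
# which is why absolute normalization resists rank reversal.
absolute_new = extended / domain_max
```

Because the relative scores of the original alternatives shift when a new alternative enters the set, rank reversal can occur in relatively normalized methods such as classic TOPSIS, whereas the absolute scores are unaffected.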

C. SENSITIVITY ANALYSIS
To validate that the proposed framework is robust and consistent under different rank reversal scenarios, a sensitivity analysis was conducted. Different scenarios were created, each viewed as a distinct situation that could alter the ranking of the service providers. The following scenarios were evaluated through the sensitivity analysis.
Scenario 1: Removing one of the CSPs from the list.
Scenario 2: Successively adding one CSP to the existing set of CSPs.
Scenario 3: Introducing one additional CSP.
Scenario 4: Defining two subsets of the current set of CSPs to examine the transitivity property.
Scenario 5: Excluding the lowest-weight QoS criterion.
Scenario 6: Altering the QoS criteria values of the best CSP.
Here is a more detailed discussion of each scenario:

Scenario 1: In the first scenario of the sensitivity analysis, each CSP was removed individually. The dataset utilized in the case study was used for ten different experiments; in each experiment, the ranks of all the CSPs were recalculated after one CSP was removed. A summary of the ranks of the individual CSPs is displayed in Fig. 6. As Fig. 6 shows, the ranking remained unchanged after the removal of each CSP.
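The leave-one-out experiment of Scenario 1 can be sketched as follows, assuming (for illustration only) a weighted-sum ranking over a max-normalized matrix with hypothetical benchmark values.

```python
import numpy as np

def rank_order(matrix, weights):
    """CSP indices ordered best-to-worst under a weighted sum of
    max-normalized criteria (illustrative ranking rule)."""
    ndm = matrix / matrix.max(axis=0)
    return list(np.argsort(-(ndm @ weights)))

# Hypothetical 4-CSP, 3-criteria benchmark matrix and weights.
matrix = np.array([[0.8, 0.6, 0.9],
                   [0.9, 0.7, 0.6],
                   [0.6, 0.9, 0.7],
                   [0.5, 0.5, 0.5]])
weights = np.array([0.5, 0.3, 0.2])
base_order = rank_order(matrix, weights)

# Scenario 1: remove each CSP in turn and check that the relative
# order of the remaining CSPs is preserved (it is, for these values).
for removed in range(len(matrix)):
    keep = [i for i in range(len(matrix)) if i != removed]
    sub_order = [keep[j] for j in rank_order(matrix[keep], weights)]
    expected = [i for i in base_order if i != removed]
    assert sub_order == expected
```

Note that order preservation is a property being tested, not one guaranteed by max normalization in general; the paper's framework is what the sensitivity analysis evaluates for this robustness.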

Scenario 2:
In the second scenario of the sensitivity analysis, CSPs were successively added to the existing set to assess the robustness of the rankings. This was performed in nine experiments. Initially, only the CSP1 and CSP2 data were available; one CSP was then added in each subsequent experiment. Fig. 7 illustrates the rank of each CSP obtained in all experiments. As shown in Fig. 7, the ranking of the CSPs in each experiment was consistent with the original ranking obtained in the case study.

Scenario 3:
In this scenario, an additional CSP is incorporated to verify that the ranking remains consistent and the framework is robust. Table 12 lists the QoS metric values for the added CSP.
As shown in Table 13, the rank of CSPs does not change when an additional CSP is added, regardless of its rank.

Scenario 4:
In this scenario, the existing set of CSPs is subdivided into two subsets. Each subset is ranked separately to determine whether its rank is consistent with that of the entire set. As shown in Table 14, the entire set of CSPs was split into two subsets: odd and even.
Based on the proposed framework, the CSPs in the odd and even subsets were ranked separately. As shown in Table 15, the ranking within each subset was consistent with the ranking of the full set obtained in the case study.

Scenario 5:
This scenario was conducted by removing the lowest-weight QoS criterion from the set utilized in the case study. Disk consistency (C4) has the lowest weight among the five QoS criteria, so it was removed to assess the robustness of the framework. The CSP ranking was then calculated based on the four remaining QoS criteria, with the weight of the disk consistency criterion evenly distributed among them, as shown in Table 16.
Based on the experimental results, the rank obtained after removing the lowest-weight QoS criterion is the same as that obtained when considering all five QoS criteria. Therefore, it can be concluded that the proposed framework is robust to the removal of the lowest-weight QoS criterion.
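The even redistribution of the removed criterion's weight can be sketched as follows; the criterion labels match the case study, but the weight values are hypothetical.

```python
# Hypothetical criterion weights; C4 (disk consistency) has the lowest.
weights = {"C1": 0.30, "C2": 0.25, "C3": 0.20, "C5": 0.15, "C4": 0.10}

removed = weights.pop("C4")          # drop the lowest-weight criterion
share = removed / len(weights)       # split its weight evenly
weights = {c: w + share for c, w in weights.items()}

# The redistributed weights still sum to one.
assert abs(sum(weights.values()) - 1.0) < 1e-9
```

Keeping the weights summing to one ensures the ranking scores before and after the removal remain directly comparable.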

Scenario 6:
In the final sensitivity analysis scenario, the QoS criteria values of the best cloud service provider, HP (CSP2), were gradually reduced in steps of 5% of their original values to determine whether the rankings generated by the proposed framework are sufficiently robust. In total, 13 experiments were conducted, moving HP from the first to the last position in the ranking. Table 17 lists the rankings obtained in each experiment.
From Table 17, it is apparent that HP's rank dropped from first to sixth after a 40% reduction in its QoS values, from sixth to seventh after a 45% reduction, from seventh to eighth after a 55% reduction, and finally from eighth to tenth (last) after a 65% reduction. In all experiments, the rankings of the other CSPs remained the same. This confirms that the CSP ranking is consistent and that the framework can handle rank reversals resulting from changes in the QoS criteria values of the CSPs.
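Scenario 6 can be sketched as follows, again assuming an illustrative weighted-sum ranking over a max-normalized matrix with hypothetical values; the loop degrades the best CSP's criteria in 5% steps of their original values and reports its position after each step.

```python
import numpy as np

def rank_positions(matrix, weights):
    """Return the 0-based rank position of each CSP (0 = best) under a
    weighted sum of max-normalized criteria (illustrative rule)."""
    scores = (matrix / matrix.max(axis=0)) @ weights
    order = np.argsort(-scores)
    pos = np.empty(len(scores), dtype=int)
    pos[order] = np.arange(len(scores))
    return pos

# Hypothetical 3-CSP benchmark matrix; index 1 plays the role of HP.
matrix = np.array([[0.8, 0.6, 0.9],
                   [0.9, 0.7, 0.8],
                   [0.6, 0.9, 0.7]])
weights = np.array([0.5, 0.3, 0.2])

best = 1
original = matrix[best].copy()
for step in range(1, 14):                      # 5%, 10%, ..., 65%
    matrix[best] = original * (1 - 0.05 * step)
    position = rank_positions(matrix, weights)[best] + 1
    print(f"{5 * step:3d}% reduction -> position {position}")
```

As in the paper's experiments, the degraded CSP slides down the ranking step by step while the relative order of the others is unaffected by its decline.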

VI. CONCLUSION AND FUTURE WORK
The cloud computing industry has experienced enormous growth in recent years, with an increasing number of service providers. Cloud users therefore face a challenging process when selecting a CSP. Additionally, current approaches raise many concerns, including changes in the priorities of users' requirements, complex computation, inconsistencies, and rank reversal, which lead to less reliable results. In this study, we developed a hybrid framework based on a Markov chain that finds transition patterns in the priorities of user requirements, independent of their initial state. Using this transition pattern, the final priorities of the user requirements are determined. The weights of the QoS criteria are then calculated for the various CSPs based on their benchmark values and the final priorities of the user requirements using the BOM method. Finally, the CSPs are ranked according to their scores, and the best CSP is selected. A case study based on a real dataset was used to validate the proposed framework through performance, consistency, rank conformance, and sensitivity analyses. The results obtained in this work demonstrate that the proposed hybrid framework is fully consistent, satisfies users' requirements dynamically, and is robust to rank reversal. Future work could extend the framework to fuzzy environments and add a security analysis to further increase its usefulness.
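As a closing illustration, the Markov-chain step summarized above can be sketched as computing the stationary distribution of a transition matrix over user-requirement priorities. The transition probabilities below are hypothetical, and the sketch only demonstrates the general property that the final priority vector is independent of the initial state.

```python
import numpy as np

# Hypothetical transition matrix over three user requirements (rows sum to 1):
# P[i, j] = probability that the top priority shifts from requirement i to j.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Final priorities = stationary distribution pi satisfying pi @ P = pi,
# i.e., the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.isclose(eigvals, 1)].flatten())
pi = pi / pi.sum()

# The same vector is reached by iterating from any initial distribution,
# showing independence from the initial state.
start = np.array([1.0, 0.0, 0.0])
approx = start @ np.linalg.matrix_power(P, 50)
```

For an irreducible, aperiodic chain such as this one, repeated transitions from any starting priority vector converge to the same stationary priorities, which the framework then feeds into the BOM weighting step.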